Subject: Re: + mm-introduce-reported-pages.patch added to -mm tree
From: Nitesh Narayan Lal <nitesh@redhat.com>
Organization: Red Hat Inc
To: Alexander Duyck
Cc: Michal Hocko, Andrew Morton, Andrea Arcangeli, Alexander Duyck, Dan Williams, Dave Hansen, David Hildenbrand, Konrad Rzeszutek Wilk, lcapitulino@redhat.com, Mel Gorman, mm-commits@vger.kernel.org, "Michael S. Tsirkin", Oscar Salvador, Pankaj Gupta, Paolo Bonzini, Rik van Riel, Vlastimil Babka, "Wang, Wei W", Matthew Wilcox, Yang Zhang, linux-mm
Date: Tue, 12 Nov 2019 10:19:50 -0500
References: <20191106000547.juQRi83gi%akpm@linux-foundation.org> <20191106121605.GH8314@dhcp22.suse.cz>

On 11/11/19 5:00 PM,
Alexander Duyck wrote:
> On Mon, Nov 11, 2019 at 10:52 AM Nitesh Narayan Lal wrote:
>>
>> On 11/6/19 7:16 AM, Michal Hocko wrote:
>>> I didn't have time to read through newer versions of this patch series
>>> but I remember there were concerns about this functionality being pulled
>>> into the page allocator previously, both by me and Mel [1][2]. Have those
>>> been addressed? I do not see an ack from Mel or any other MM people. Is
>>> there really a consensus that we want something like that living in the
>>> allocator?
>>>
>>> There has also been a different approach discussed, and from [3]
>>> (referenced by the cover letter) I can only see
>>>
>>> : Then Nitesh's solution had changed to the bitmap approach [7]. However it
>>> : has been pointed out that this solution doesn't deal with sparse memory,
>>> : hotplug, and various other issues.
>>>
>>> which looks more like something to be done than a fundamental
>>> roadblock.
>>>
>>> [1] http://lkml.kernel.org/r/20190912163525.GV2739@techsingularity.net
>>> [2] http://lkml.kernel.org/r/20190912091925.GM4023@dhcp22.suse.cz
>>> [3] http://lkml.kernel.org/r/29f43d5796feed0dec8e8bb98b187d9dac03b900.camel@linux.intel.com
>>>
>> [...]
>>
>> Hi,
>>
>> I performed some experiments to find the root cause of the performance
>> degradation Alexander reported with my v12 patch-set. [1]
>>
>> I will try to give a brief background of the previous discussion
>> under v12 (Alexander can correct me if I am missing something).
>> Alexander pointed out two issues with my v12 posting [2]
>> (this is excluding the sparse zone and memory hotplug/hotremove support):
>>
>> - A crash, caused because I was not using spin_lock_irqsave()
>>   (the fix suggestion came from Alexander).
>>
>> - Performance degradation with Alexander's suggested setup, where we are using
>>   a modified will-it-scale/page_fault with THP, CONFIG_SLAB_FREELIST_RANDOM &
>>   CONFIG_SHUFFLE_PAGE_ALLOCATOR.
>> When I was using (MAX_ORDER - 2) as the
>> PAGE_REPORTING_MIN_ORDER, I also observed significant performance degradation
>> (around 20% in the number of threads launched on the 16th vCPU). However, on
>> switching the PAGE_REPORTING_MIN_ORDER to (MAX_ORDER - 1), I was able to get
>> performance similar to what Alexander is reporting.
>>
>> PAGE_REPORTING_MIN_ORDER is the minimum order of a page to be captured in the
>> bitmap and reported to the hypervisor.
>>
>> For the discussion where we are comparing the two series, the performance
>> aspect is the more relevant and important one.
>> It turns out that with the current implementation the number of vmexits with
>> PAGE_REPORTING_MIN_ORDER as pageblock_order or (MAX_ORDER - 2) is significantly
>> larger when compared to (MAX_ORDER - 1).
>>
>> One of the reasons could be that the lower-order pages are not getting
>> sufficient time to merge with each other; as a result they somehow get reported
>> in two separate reporting requests, hence generating more vmexits. Whereas
>> with (MAX_ORDER - 1) we don't have that kind of situation, as I never try
>> to report any page which has order < (MAX_ORDER - 1).
>>
>> To fix this, I might have to further limit the reporting, which could allow the
>> lower-order pages to merge further and hence reduce the VM exits. I will try to
>> do some experiments to see if I can fix this. In any case, if anyone has a
>> suggestion I would be more than happy to look in that direction.
> That doesn't make any sense. My setup using MAX_ORDER - 2, aka
> pageblock_order, as the limit doesn't experience the same performance
> issues the bitmap solution does. That leads me to believe the issue
> isn't that the pages have not had a chance to be merged.
>
So, I did run your series as well with a few sysfs variables to see how many
pages of order (MAX_ORDER - 1) or (MAX_ORDER - 2) are reported at the end of
the will-it-scale/page_fault4 test.
What I observed is that the number of (MAX_ORDER - 2) pages getting reported
in your case was lower than what was reported in mine with pageblock_order.
As you mention below, where the pages are placed in the free list might also
have an impact.

>> Following are the numbers I gathered on a 30GB single-NUMA, 16 vCPU guest
>> affined to a single host NUMA node:
>>
>> On the 16th vCPU:
>> With PAGE_REPORTING_MIN_ORDER as (MAX_ORDER - 1):
>> % dip in the number of processes = 1.3 %
>> % dip in the number of threads   = 5.7 %
>>
>> With PAGE_REPORTING_MIN_ORDER as pageblock_order:
>> % dip in the number of processes = 5 %
>> % dip in the number of threads   = 20 %
> So I don't hold much faith in the threads numbers. I have seen the
> variability be as high as 14% between runs.

That's interesting. Do you see the variability even with an unmodified kernel?
Somehow, for me it seems pretty consistent. However, if you are running with
multiple NUMA nodes it might have a significant impact on the numbers.
For now, I am only running a single-NUMA guest affined to a single host NUMA
node.

>> Michal's suggestion:
>> I was able to get the prototype which uses the page-isolation API,
>> start_isolate_page_range()/undo_isolate_page_range(), to work.
>> But the issue mentioned above was also evident with it.
>>
>> Hence, I think before deciding whether I want to use
>> __isolate_free_page(), which isolates pages from the buddy, or
>> start/undo_isolate_page_range(), which just marks the page as MIGRATE_ISOLATE,
>> it is important for me to resolve the above-mentioned issue.
> I'd be curious how you are avoiding causing memory starvation if you
> are isolating ranges of memory that have been recently freed.

I would still be marking only 32 pages as MIGRATE_ISOLATE at a time. It is
exactly the same as isolating a limited chunk of pages from the buddy.
For example, if I have a pfn x of order y, then I pass
start_isolate_page_range(x, x + (1 << y), mt, 0), since an order-y page spans
2^y pfns. So at the end we will have 32 such entries marked as MIGRATE_ISOLATE.

>> Previous discussions:
>> More about how we ended up with these two approaches can be found at [3] &
>> [4], explained by Alexander & David.
>>
>> [1] https://lore.kernel.org/lkml/20190812131235.27244-1-nitesh@redhat.com/
>> [2] https://lkml.org/lkml/2019/10/2/425
>> [3] https://lkml.org/lkml/2019/10/23/1166
>> [4] https://lkml.org/lkml/2019/9/12/48
>>
> So one thing you may want to consider would be how placement of the
> buffers will impact your performance.
>
> One thing I realized I was doing wrong with my approach was scanning
> for pages starting at the tail and then working up. It greatly hurt
> the efficiency of my search, since in the standard case most of the
> free memory will be placed at the head, and only with shuffling enabled
> do I really need to worry about things getting mixed up with the tail.
>
> I suspect you may be similarly making things more difficult for
> yourself by placing the reported pages back on the head of the list
> instead of placing them at the tail, where they will not be reallocated
> immediately.

Hmm, I see. I will try and explore this.

--
Thanks
Nitesh