From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: + mm-introduce-reported-pages.patch added to -mm tree
To: Alexander Duyck, Alexander Duyck, David Hildenbrand
Cc: Michal Hocko, Andrew Morton, Andrea Arcangeli, Dan Williams,
 Dave Hansen, Konrad Rzeszutek Wilk, lcapitulino@redhat.com, Mel Gorman,
 mm-commits@vger.kernel.org, "Michael S. Tsirkin", Oscar Salvador,
 Pankaj Gupta, Paolo Bonzini, Rik van Riel, Vlastimil Babka,
 "Wang, Wei W", Matthew Wilcox, Yang Zhang, linux-mm
References: <20191106000547.juQRi83gi%akpm@linux-foundation.org>
 <20191106121605.GH8314@dhcp22.suse.cz>
From: Nitesh Narayan Lal
Organization: Red Hat Inc
Message-ID: <6ef98f30-e2f1-0d38-9aa3-a8d7c781bf89@redhat.com>
Date: Wed, 13 Nov 2019 13:39:11 -0500

On 11/12/19 11:18 AM, Alexander Duyck wrote:
> On Tue, 2019-11-12 at 10:19 -0500, Nitesh Narayan Lal wrote:
>> On 11/11/19 5:00 PM, Alexander Duyck wrote:
>>> On Mon, Nov 11, 2019 at 10:52 AM Nitesh Narayan Lal wrote:
>>>> On 11/6/19 7:16 AM, Michal Hocko wrote:
>>>>> I didn't have time to read through newer versions of this patch
>>>>> series, but I remember there were concerns about this functionality
>>>>> being pulled into the page allocator previously, both by me and Mel
>>>>> [1][2]. Have those been addressed? I do not see an ack from Mel or
>>>>> any other MM people. Is there really a consensus that we want
>>>>> something like that living in the allocator?
>>>>>
>>>>> There has also been a different approach discussed, and from [3]
>>>>> (referenced by the cover letter) I can only see
>>>>>
>>>>> : Then Nitesh's solution had changed to the bitmap approach [7].
>>>>> : However it has been pointed out that this solution doesn't deal
>>>>> : with sparse memory, hotplug, and various other issues.
>>>>>
>>>>> which looks more like something to be done than a fundamental
>>>>> roadblock.
>>>>>
>>>>> [1] http://lkml.kernel.org/r/20190912163525.GV2739@techsingularity.net
>>>>> [2] http://lkml.kernel.org/r/20190912091925.GM4023@dhcp22.suse.cz
>>>>> [3] http://lkml.kernel.org/r/29f43d5796feed0dec8e8bb98b187d9dac03b900.camel@linux.intel.com
>>>>>
>>>> [...]
>>>>
>>>> Hi,
>>>>
>>>> I performed some experiments to find the root cause of the performance
>>>> degradation Alexander reported with my v12 patch-set. [1]
>>>>
>>>> I will try to give a brief background of the previous discussion
>>>> under v12 (Alexander can correct me if I am missing something).
>>>> Alexander pointed out two issues with my v12 posting [2]
>>>> (excluding the sparse zone and memory hotplug/hotremove support):
>>>>
>>>> - A crash, caused because I was not using spin_lock_irqsave()
>>>>   (the fix suggestion came from Alexander).
>>>>
>>>> - Performance degradation with Alexander's suggested setup, where we
>>>>   use a modified will-it-scale/page_fault with THP,
>>>>   CONFIG_SLAB_FREELIST_RANDOM, and CONFIG_SHUFFLE_PAGE_ALLOCATOR.
>>>>   When I used (MAX_ORDER - 2) as the PAGE_REPORTING_MIN_ORDER, I also
>>>>   observed significant performance degradation (around 20% in the
>>>>   number of threads launched on the 16th vCPU). However, on switching
>>>>   PAGE_REPORTING_MIN_ORDER to (MAX_ORDER - 1), I was able to get
>>>>   performance similar to what Alexander is reporting.
>>>>
>>>> PAGE_REPORTING_MIN_ORDER is the minimum order of a page to be
>>>> captured in the bitmap and reported to the hypervisor.
>>>>
>>>> For the discussion where we are comparing the two series, the
>>>> performance aspect is more relevant and important.
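
[To make that knob concrete: the gating amounts to something like the
sketch below. This is a simplified illustration with made-up per-zone
fields (report_base_pfn, report_bitmap), not the code from the actual
patches.]

/*
 * Freed pages below the threshold are skipped so that they can keep
 * merging in the buddy; anything at or above it gets one bit set in a
 * per-zone bitmap and is reported to the hypervisor later.
 */
#define PAGE_REPORTING_MIN_ORDER	(MAX_ORDER - 1)

static void mark_page_for_reporting(struct zone *zone, struct page *page,
				    unsigned int order)
{
	unsigned long bit;

	if (order < PAGE_REPORTING_MIN_ORDER)
		return;

	/* One bit per PAGE_REPORTING_MIN_ORDER-sized chunk of the zone. */
	bit = (page_to_pfn(page) - zone->report_base_pfn) >>
		PAGE_REPORTING_MIN_ORDER;
	set_bit(bit, zone->report_bitmap);
}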
>>>> It turns out that with the current implementation the number of
>>>> vmexits with PAGE_REPORTING_MIN_ORDER as pageblock_order or
>>>> (MAX_ORDER - 2) is significantly larger when compared to
>>>> (MAX_ORDER - 1).
>>>>
>>>> One of the reasons could be that the lower order pages are not
>>>> getting sufficient time to merge with each other; as a result they
>>>> end up being reported in two separate reporting requests, generating
>>>> more vmexits. Whereas with (MAX_ORDER - 1) we don't have that kind
>>>> of situation, as I never try to report any page which has
>>>> order < (MAX_ORDER - 1).
>>>>
>>>> To fix this, I might have to further limit the reporting, which
>>>> could allow the lower order pages to merge further and hence reduce
>>>> the VM exits. I will try to do some experiments to see if I can fix
>>>> this. In any case, if anyone has a suggestion I would be more than
>>>> happy to look in that direction.
>>> That doesn't make any sense. My setup using MAX_ORDER - 2, aka
>>> pageblock_order, as the limit doesn't experience the same performance
>>> issues the bitmap solution does. That leads me to believe the issue
>>> isn't that the pages have not had a chance to be merged.
>>>
>> So, I did run your series as well with a few sysfs variables to see
>> how many pages of order (MAX_ORDER - 1) or (MAX_ORDER - 2) are
>> reported at the end of the will-it-scale/page_fault4 test.
>> What I observed is that the number of (MAX_ORDER - 2) pages getting
>> reported in your case was lower than what has been reported in mine
>> with pageblock_order.
>> As you mention below, putting pages in a certain part of the free
>> list might also have an impact.
> Another thing you may want to check is how often your notifier is
> triggering. One thing I did was to intentionally put a fairly
> significant delay from the time the notification is scheduled to when
> it will start. I did this because when an application is freeing
> memory it will take some time to completely free it, and if it is
> going to reallocate it anyway there is no need to rush, since it would
> just invalidate the pages you reported anyway.

Yes, I agree with this. This could have an impact on the performance.
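
The way I picture the delay is roughly the following -- the names and
the interval are made up here, just to confirm I follow the idea:

#include <linux/workqueue.h>
#include <linux/jiffies.h>

/* Illustrative interval; a real series would want its own tuning. */
#define PAGE_REPORTING_DELAY	msecs_to_jiffies(2000)

static void page_reporting_work_fn(struct work_struct *work);
static DECLARE_DELAYED_WORK(page_reporting_work, page_reporting_work_fn);

/* Called from the freeing path once enough reportable pages accumulate. */
static void page_reporting_notify_free(void)
{
	/*
	 * schedule_delayed_work() is a no-op if the work is already
	 * queued, so bursts of frees coalesce into a single delayed
	 * reporting pass, and pages reallocated in the meantime are no
	 * longer on the free lists by the time the pass finally runs.
	 */
	schedule_delayed_work(&page_reporting_work, PAGE_REPORTING_DELAY);
}

static void page_reporting_work_fn(struct work_struct *work)
{
	/* Walk the free lists and report the accumulated pages here. */
}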
>>>> Following are the numbers I gathered on a 30GB, single-NUMA, 16-vCPU
>>>> guest affined to a single host NUMA node:
>>>>
>>>> On the 16th vCPU:
>>>> With PAGE_REPORTING_MIN_ORDER as (MAX_ORDER - 1):
>>>> % Dip in the number of processes = 1.3 %
>>>> % Dip in the number of threads   = 5.7 %
>>>>
>>>> With PAGE_REPORTING_MIN_ORDER as (pageblock_order):
>>>> % Dip in the number of processes = 5 %
>>>> % Dip in the number of threads   = 20 %
>>> So I don't hold much faith in the threads numbers. I have seen the
>>> variability be as high as 14% between runs.
>> That's interesting. Do you see the variability even with an unmodified
>> kernel? Somehow, for me it seems pretty consistent. However, if you
>> are running with multiple NUMA nodes it might have a significant
>> impact on the numbers.
>>
>> For now, I am only running a single-NUMA guest affined to a single
>> NUMA node of the host.
> My guest should be running in a single node, and yes I saw it with
> just the unmodified kernel. I am running on the linux-next 20191031
> kernel.

I am using Linus' tree, working on top of Linux 5.4-rc5. I am not sure
how much difference that will make.

> It did occur to me that it seems like the performance for the threads
> number recently increased. There might be a guest config option
> impacting things as well, since I know I have changed a number of
> variables since then.

This is quite interesting, because if I remember correctly you reported
a huge degradation of over 30% with my patch-set.
So far, I have been able to reproduce significant degradation in the
number of threads launched on the 16th vCPU, but not in the number of
processes, which you are observing. I am wondering if I am still
missing something in my test setup.

>>>> Michal's suggestion:
>>>> I was able to get a prototype that uses the page-isolation API --
>>>> start_isolate_page_range()/undo_isolate_page_range() -- to work.
>>>> But the issue mentioned above was also evident with it.
>>>>
>>>> Hence, before deciding whether I want to use __isolate_free_page(),
>>>> which isolates pages from the buddy, or
>>>> start/undo_isolate_page_range(), which just marks the page as
>>>> MIGRATE_ISOLATE, I think it is important for me to resolve the
>>>> above-mentioned issue.
>>> I'd be curious how you are avoiding causing memory starvation if you
>>> are isolating ranges of memory that have been recently freed.
>> I would still be marking only 32 pages as MIGRATE_ISOLATE at a time.
>> It is exactly the same as isolating a limited chunk of pages from the
>> buddy. For example, if I have a pfn x of order y, then I pass
>> start_isolate_page_range(x, x + (1 << y), mt, 0). So at the end we
>> will have 32 such entries marked as MIGRATE_ISOLATE.
> I get that you are isolating the same amount of memory. What I was
> getting at is that __isolate_free_page has a check in it to make
> certain you are not pulling memory that would put you below the
> minimum watermark. As far as I know there isn't anything like that for
> the page isolation framework, since it is normally used for offlining
> memory before it is hotplugged away.

Yes, that is correct. I will have to take care of that explicitly (see
the sketch in the P.S. below).

>>>> Previous discussions:
>>>> More about how we ended up with these two approaches can be found
>>>> at [3] and [4], explained by Alexander and David.
>>>>
>>>> [1] https://lore.kernel.org/lkml/20190812131235.27244-1-nitesh@redhat.com/
>>>> [2] https://lkml.org/lkml/2019/10/2/425
>>>> [3] https://lkml.org/lkml/2019/10/23/1166
>>>> [4] https://lkml.org/lkml/2019/9/12/48
>>>>
>>> So one thing you may want to consider would be how placement of the
>>> buffers will impact your performance.
>>>
>>> One thing I realized I was doing wrong with my approach was scanning
>>> for pages starting at the tail and then working up. It greatly hurt
>>> the efficiency of my search, since in the standard case most of the
>>> free memory will be placed at the head, and only with shuffling
>>> enabled do I really need to worry about things getting mixed up with
>>> the tail.
>>>
>>> I suspect you may be similarly making things more difficult for
>>> yourself by placing the reported pages back on the head of the list
>>> instead of placing them at the tail, where they will not be
>>> reallocated immediately.
>> hmm, I see. I will try and explore this.
>>
-- 
Thanks
Nitesh
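
P.S. On the watermark point above: since the page-isolation framework
does no watermark checking of its own, I imagine the guard would have to
mirror what __isolate_free_page() already does before I isolate a chunk
for reporting. A rough sketch, simplified from my reading of the 5.4
sources (the helper name is mine):

/*
 * Refuse to isolate a chunk for reporting if doing so would leave the
 * zone below its minimum watermark plus the chunk being taken.
 */
static bool can_isolate_for_reporting(struct zone *zone, unsigned int order)
{
	unsigned long watermark = min_wmark_pages(zone) + (1UL << order);

	return zone_watermark_ok(zone, 0, watermark, 0, 0);
}

If the check fails, the reporting pass would simply skip that chunk and
retry on a later pass.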