Subject: Re: [PATCH v6 2/6] mm/vmscan: protect the workingset on anonymous LRU
To: js1304@gmail.com, Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner,
 Michal Hocko, Hugh Dickins, Minchan Kim, Mel Gorman,
 kernel-team@lge.com, Joonsoo Kim
References: <1592371583-30672-1-git-send-email-iamjoonsoo.kim@lge.com>
 <1592371583-30672-3-git-send-email-iamjoonsoo.kim@lge.com>
From: Vlastimil Babka
Message-ID: <4591b38d-fdd0-e2e6-bf11-6e5669575736@suse.cz>
Date: Wed, 1 Jul 2020 20:02:51 +0200
In-Reply-To: <1592371583-30672-3-git-send-email-iamjoonsoo.kim@lge.com>
Content-Type: text/plain; charset=utf-8

On 6/17/20 7:26 AM, js1304@gmail.com wrote:
> From: Joonsoo Kim

Hi, how about a more descriptive subject, such as

  mm/vmscan: add new anonymous pages to inactive LRU list

> In current implementation, newly created or swap-in anonymous page
> is started on active list. Growing active list results in rebalancing
> active/inactive list so old pages on active list are demoted to inactive
> list. Hence, the page on active list isn't protected at all.
>
> Following is an example of this situation.
>
> Assume that 50 hot pages on active list.
> Numbers denote the number of
> pages on active/inactive list (active | inactive).
>
> 1. 50 hot pages on active list
> 50(h) | 0
>
> 2. workload: 50 newly created (used-once) pages
> 50(uo) | 50(h)
>
> 3. workload: another 50 newly created (used-once) pages
> 50(uo) | 50(uo), swap-out 50(h)
>
> This patch tries to fix this issue.
> Like as file LRU, newly created or swap-in anonymous pages will be
> inserted to the inactive list. They are promoted to active list if
> enough reference happens. This simple modification changes the above
> example as following.
>
> 1. 50 hot pages on active list
> 50(h) | 0
>
> 2. workload: 50 newly created (used-once) pages
> 50(h) | 50(uo)
>
> 3. workload: another 50 newly created (used-once) pages
> 50(h) | 50(uo), swap-out 50(uo)
>
> As you can see, hot pages on active list would be protected.
>
> Note that, this implementation has a drawback that the page cannot
> be promoted and will be swapped-out if re-access interval is greater than
> the size of inactive list but less than the size of total(active+inactive).
> To solve this potential issue, following patch will apply workingset
> detection that is applied to file LRU some day before.

The last sentence reads better as: "following patch will apply workingset
detection similar to the one that's already applied to file LRU."

> v6: Before this patch, all anon pages (inactive + active) are considered
> as workingset. However, with this patch, only active pages are considered
> as workingset. So, file refault formula which uses the number of all
> anon pages is changed to use only the number of active anon pages.

A "v6:" note is more suitable for the diffstat area than the commit log,
but this is worth mentioning, so just drop the "v6:" prefix?

> Acked-by: Johannes Weiner
> Signed-off-by: Joonsoo Kim

Acked-by: Vlastimil Babka

One more nit below.
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -476,23 +476,24 @@ void lru_cache_add(struct page *page)
>  EXPORT_SYMBOL(lru_cache_add);
>  
>  /**
> - * lru_cache_add_active_or_unevictable
> + * lru_cache_add_inactive_or_unevictable
>   * @page: the page to be added to LRU
>   * @vma: vma in which page is mapped for determining reclaimability
>   *
> - * Place @page on the active or unevictable LRU list, depending on its
> + * Place @page on the inactive or unevictable LRU list, depending on its
>   * evictability. Note that if the page is not evictable, it goes
>   * directly back onto it's zone's unevictable list, it does NOT use a
>   * per cpu pagevec.
>   */
> -void lru_cache_add_active_or_unevictable(struct page *page,
> +void lru_cache_add_inactive_or_unevictable(struct page *page,
>  				struct vm_area_struct *vma)
>  {
> +	bool unevictable;
> +
>  	VM_BUG_ON_PAGE(PageLRU(page), page);
>  
> -	if (likely((vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) != VM_LOCKED))
> -		SetPageActive(page);
> -	else if (!TestSetPageMlocked(page)) {
> +	unevictable = (vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) == VM_LOCKED;
> +	if (unevictable && !TestSetPageMlocked(page)) {

I guess this could be "if (unlikely(unevictable) && ..." to match the
previous "if (likely(...)) ... else ..." branch-prediction hint.

>  		/*
>  		 * We use the irq-unsafe __mod_zone_page_stat because this
>  		 * counter is not modified from interrupt context, and the pte
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index c047789..38f6433 100644