From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH next] mm/vmscan: __isolate_lru_page_prepare clean up
To: Alex Shi, Matthew Wilcox
Cc: Andrew Morton, Hugh Dickins, Yu Zhao, Michal Hocko,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <1605859413-53864-1-git-send-email-alex.shi@linux.alibaba.com>
 <20201120151307.4d9e3ef092ba01a325db7ce2@linux-foundation.org>
 <20201122123552.GF4327@casper.infradead.org>
 <728874d7-2d93-4049-68c1-dcc3b2d52ccd@linux.alibaba.com>
From: Vlastimil Babka
Message-ID: <46ad053f-1401-31e8-50cf-09acda588f6f@suse.cz>
Date: Tue, 24 Nov 2020 12:21:28 +0100
In-Reply-To: <728874d7-2d93-4049-68c1-dcc3b2d52ccd@linux.alibaba.com>

On 11/22/20 3:00 PM, Alex Shi wrote:
> Thanks a lot for all comments, I picked all up and here is the v3:
>
> From 167131dd106a96fd08af725df850e0da6ec899af Mon Sep 17 00:00:00 2001
> From: Alex Shi
> Date: Fri, 20 Nov 2020 14:49:16 +0800
> Subject: [PATCH v3 next] mm/vmscan: __isolate_lru_page_prepare clean up
>
> The function returns only two results, so using a 'switch' to handle
> its result is unnecessary; simplify it to a bool function as Vlastimil
> suggested.
>
> Also remove 'goto' by reusing list_move(), and take Matthew Wilcox's
> suggestion to update the comments in the function.

I wouldn't mind if the goto stayed, but it's not repeating that much
without it (list_move() + continue, 3 times) so...

Acked-by: Vlastimil Babka

> Signed-off-by: Alex Shi
> Cc: Andrew Morton
> Cc: Matthew Wilcox
> Cc: Hugh Dickins
> Cc: Yu Zhao
> Cc: Vlastimil Babka
> Cc: Michal Hocko
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> ---
>  include/linux/swap.h |  2 +-
>  mm/compaction.c      |  2 +-
>  mm/vmscan.c          | 68 ++++++++++++++++++++------------------
>  3 files changed, 33 insertions(+), 39 deletions(-)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 596bc2f4d9b0..5bba15ac5a2e 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -356,7 +356,7 @@ extern void lru_cache_add_inactive_or_unevictable(struct page *page,
>  extern unsigned long zone_reclaimable_pages(struct zone *zone);
>  extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
>  					gfp_t gfp_mask, nodemask_t *mask);
> -extern int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode);
> +extern bool __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode);
>  extern unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
>  						  unsigned long nr_pages,
>  						  gfp_t gfp_mask,
> diff --git a/mm/compaction.c b/mm/compaction.c
> index b68931854253..8d71ffebe6cb 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -988,7 +988,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>  		if (unlikely(!get_page_unless_zero(page)))
>  			goto isolate_fail;
>
> -		if (__isolate_lru_page_prepare(page, isolate_mode) != 0)
> +		if (!__isolate_lru_page_prepare(page, isolate_mode))
>  			goto isolate_fail_put;
>
>  		/* Try isolate the page */
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index c6f94e55c3fe..4d2703c43310 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1536,19 +1536,17 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
>   * page: page to consider
>   * mode: one of the LRU isolation modes defined above
>   *
> - * returns 0 on success, -ve errno on failure.
> + * returns true on success, false on failure.
>   */
> -int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
> +bool __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
>  {
> -	int ret = -EBUSY;
> -
>  	/* Only take pages on the LRU. */
>  	if (!PageLRU(page))
> -		return ret;
> +		return false;
>
>  	/* Compaction should not handle unevictable pages but CMA can do so */
>  	if (PageUnevictable(page) && !(mode & ISOLATE_UNEVICTABLE))
> -		return ret;
> +		return false;
>
>  	/*
>  	 * To minimise LRU disruption, the caller can indicate that it only
> @@ -1561,7 +1559,7 @@ int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
>  	if (mode & ISOLATE_ASYNC_MIGRATE) {
>  		/* All the caller can do on PageWriteback is block */
>  		if (PageWriteback(page))
> -			return ret;
> +			return false;
>
>  		if (PageDirty(page)) {
>  			struct address_space *mapping;
> @@ -1577,20 +1575,20 @@ int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
>  			 * from the page cache.
>  			 */
>  			if (!trylock_page(page))
> -				return ret;
> +				return false;
>
>  			mapping = page_mapping(page);
>  			migrate_dirty = !mapping || mapping->a_ops->migratepage;
>  			unlock_page(page);
>  			if (!migrate_dirty)
> -				return ret;
> +				return false;
>  		}
>  	}
>
>  	if ((mode & ISOLATE_UNMAPPED) && page_mapped(page))
> -		return ret;
> +		return false;
>
> -	return 0;
> +	return true;
>  }
>
>  /*
> @@ -1674,35 +1672,31 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
>  		 * only when the page is being freed somewhere else.
>  		 */
>  		scan += nr_pages;
> -		switch (__isolate_lru_page_prepare(page, mode)) {
> -		case 0:
> -			/*
> -			 * Be careful not to clear PageLRU until after we're
> -			 * sure the page is not being freed elsewhere -- the
> -			 * page release code relies on it.
> -			 */
> -			if (unlikely(!get_page_unless_zero(page)))
> -				goto busy;
> -
> -			if (!TestClearPageLRU(page)) {
> -				/*
> -				 * This page may in other isolation path,
> -				 * but we still hold lru_lock.
> -				 */
> -				put_page(page);
> -				goto busy;
> -			}
> -
> -			nr_taken += nr_pages;
> -			nr_zone_taken[page_zonenum(page)] += nr_pages;
> -			list_move(&page->lru, dst);
> -			break;
> +		if (!__isolate_lru_page_prepare(page, mode)) {
> +			/* It is being freed elsewhere */
> +			list_move(&page->lru, src);
> +			continue;
> +		}
> +		/*
> +		 * Be careful not to clear PageLRU until after we're
> +		 * sure the page is not being freed elsewhere -- the
> +		 * page release code relies on it.
> +		 */
> +		if (unlikely(!get_page_unless_zero(page))) {
> +			list_move(&page->lru, src);
> +			continue;
> +		}
>
> -		default:
> -busy:
> -			/* else it is being freed elsewhere */
> +		if (!TestClearPageLRU(page)) {
> +			/* Another thread is already isolating this page */
> +			put_page(page);
>  			list_move(&page->lru, src);
> +			continue;
>  		}
> +
> +		nr_taken += nr_pages;
> +		nr_zone_taken[page_zonenum(page)] += nr_pages;
> +		list_move(&page->lru, dst);
>  	}
>
>  	/*
>