Subject: Re: [PATCH next] mm/vmscan: __isolate_lru_page_prepare clean up
From: Alex Shi <alex.shi@linux.alibaba.com>
To: Andrew Morton
Cc: Hugh Dickins, Yu Zhao, Vlastimil Babka, Michal Hocko,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Sun, 22 Nov 2020 20:00:19 +0800
In-Reply-To: <20201120151307.4d9e3ef092ba01a325db7ce2@linux-foundation.org>
References: <1605859413-53864-1-git-send-email-alex.shi@linux.alibaba.com>
 <20201120151307.4d9e3ef092ba01a325db7ce2@linux-foundation.org>

On 2020/11/21 7:13 AM, Andrew Morton wrote:
> On Fri, 20 Nov 2020 16:03:33 +0800 Alex Shi wrote:
> 
>> The function just return 2 results, so use a 'switch' to deal with its
>> result is unnecessary, and simplify it to a bool func as Vlastimil
>> suggested.
>>
>> Also removed 'goto' in using by reusing list_move().
>>
>> ...
>>
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -1540,7 +1540,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
>>   */
>>  int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
>>  {
>> -	int ret = -EBUSY;
>> +	int ret = false;
>>  
>>  	/* Only take pages on the LRU. */
>>  	if (!PageLRU(page))
>> @@ -1590,7 +1590,7 @@ int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
>>  	if ((mode & ISOLATE_UNMAPPED) && page_mapped(page))
>>  		return ret;
>>  
>> -	return 0;
>> +	return true;
>>  }
> 
> The resulting __isolate_lru_page_prepare() is rather unpleasing.
> 
> - Why return an int and not a bool?
> 
> - `int ret = false' is a big hint that `ret' should have bool type!
> 
> - Why not just remove `ret' and do `return false' in all those `return
>   ret' places?
> 
> - The __isolate_lru_page_prepare() kerneldoc still says "returns 0 on
>   success, -ve errno on failure".
> 

Hi Andrew,

Thanks a lot for catching this, and sorry for the bad patch. It started
as an 'int' version and I changed it to bool in a hurry over the weekend.
I am sorry. Here is v2:

From 36c4fbda2d55633d3c1a3e79f045cd9877453ab7 Mon Sep 17 00:00:00 2001
From: Alex Shi <alex.shi@linux.alibaba.com>
Date: Fri, 20 Nov 2020 14:49:16 +0800
Subject: [PATCH v2 next] mm/vmscan: __isolate_lru_page_prepare clean up

The function only returns two results, so using a 'switch' to handle its
result is unnecessary; simplify it to a bool function, as Vlastimil
suggested.

Also remove the 'goto' by reusing list_move().

Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Andrew Morton
Cc: Hugh Dickins
Cc: Yu Zhao
Cc: Vlastimil Babka
Cc: Michal Hocko
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 mm/compaction.c |  2 +-
 mm/vmscan.c     | 69 +++++++++++++++++++++++--------------------------
 2 files changed, 34 insertions(+), 37 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index b68931854253..8d71ffebe6cb 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -988,7 +988,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (unlikely(!get_page_unless_zero(page)))
 			goto isolate_fail;
 
-		if (__isolate_lru_page_prepare(page, isolate_mode) != 0)
+		if (!__isolate_lru_page_prepare(page, isolate_mode))
 			goto isolate_fail_put;
 
 		/* Try isolate the page */
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c6f94e55c3fe..ab2fdee0828e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1536,19 +1536,17 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
  * page:	page to consider
  * mode:	one of the LRU isolation modes defined above
  *
- * returns 0 on success, -ve errno on failure.
+ * returns true on success, false on failure.
  */
-int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
+bool __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
 {
-	int ret = -EBUSY;
-
 	/* Only take pages on the LRU. */
 	if (!PageLRU(page))
-		return ret;
+		return false;
 
 	/* Compaction should not handle unevictable pages but CMA can do so */
 	if (PageUnevictable(page) && !(mode & ISOLATE_UNEVICTABLE))
-		return ret;
+		return false;
 
 	/*
 	 * To minimise LRU disruption, the caller can indicate that it only
@@ -1561,7 +1559,7 @@ int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
 	if (mode & ISOLATE_ASYNC_MIGRATE) {
 		/* All the caller can do on PageWriteback is block */
 		if (PageWriteback(page))
-			return ret;
+			return false;
 
 		if (PageDirty(page)) {
 			struct address_space *mapping;
@@ -1577,20 +1575,20 @@ int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
 			 * from the page cache.
 			 */
 			if (!trylock_page(page))
-				return ret;
+				return false;
 
 			mapping = page_mapping(page);
 			migrate_dirty = !mapping || mapping->a_ops->migratepage;
 			unlock_page(page);
 			if (!migrate_dirty)
-				return ret;
+				return false;
 		}
 	}
 
 	if ((mode & ISOLATE_UNMAPPED) && page_mapped(page))
-		return ret;
+		return false;
 
-	return 0;
+	return true;
 }
 
 /*
@@ -1674,35 +1672,34 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 		 * only when the page is being freed somewhere else.
 		 */
 		scan += nr_pages;
-		switch (__isolate_lru_page_prepare(page, mode)) {
-		case 0:
+		if (!__isolate_lru_page_prepare(page, mode)) {
+			/* else it is being freed elsewhere */
+			list_move(&page->lru, src);
+			continue;
+		}
+		/*
+		 * Be careful not to clear PageLRU until after we're
+		 * sure the page is not being freed elsewhere -- the
+		 * page release code relies on it.
+		 */
+		if (unlikely(!get_page_unless_zero(page))) {
+			list_move(&page->lru, src);
+			continue;
+		}
+
+		if (!TestClearPageLRU(page)) {
 			/*
-			 * Be careful not to clear PageLRU until after we're
-			 * sure the page is not being freed elsewhere -- the
-			 * page release code relies on it.
+			 * This page may in other isolation path,
+			 * but we still hold lru_lock.
 			 */
-			if (unlikely(!get_page_unless_zero(page)))
-				goto busy;
-
-			if (!TestClearPageLRU(page)) {
-				/*
-				 * This page may in other isolation path,
-				 * but we still hold lru_lock.
-				 */
-				put_page(page);
-				goto busy;
-			}
-
-			nr_taken += nr_pages;
-			nr_zone_taken[page_zonenum(page)] += nr_pages;
-			list_move(&page->lru, dst);
-			break;
-
-		default:
-busy:
-			/* else it is being freed elsewhere */
+			put_page(page);
 			list_move(&page->lru, src);
+			continue;
 		}
+
+		nr_taken += nr_pages;
+		nr_zone_taken[page_zonenum(page)] += nr_pages;
+		list_move(&page->lru, dst);
 	}
 
 	/*
-- 
2.29.GIT
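
For readers outside mm/, the shape of the change is easy to reproduce in
plain C. Below is a minimal userspace sketch (not kernel code; fake_page
and can_isolate are made-up names used only for illustration) showing why
a bool predicate lets a caller drop the switch/goto pattern in favour of a
single early-exit branch:

/*
 * Userspace sketch of the calling-convention change: a helper that used
 * to return 0 or -EBUSY becomes a bool, so the caller can use a plain
 * if () instead of a switch.  fake_page and can_isolate are hypothetical.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_page {
	bool on_lru;
	bool writeback;
};

/* bool version: true when the page can be isolated, false otherwise */
static bool can_isolate(const struct fake_page *page)
{
	if (!page->on_lru)
		return false;
	if (page->writeback)
		return false;
	return true;
}

int main(void)
{
	struct fake_page page = { .on_lru = true, .writeback = false };

	/*
	 * Old style:  switch (ret) { case 0: ... break; default: goto busy; }
	 * New style:  one early-exit branch on the bool result.
	 */
	if (!can_isolate(&page)) {
		puts("skip: page is busy elsewhere");
		return 1;
	}
	puts("isolate page");
	return 0;
}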