From mboxrd@z Thu Jan  1 00:00:00 1970
Subject: Re: [PATCH v2 03/16] mm/page_alloc: Export free_frozen_pages() instead of free_unref_page()
From: Miaohe Lin <linmiaohe@huawei.com>
To: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Cc: David Hildenbrand, William Kucharski
Date: Wed, 10 Aug 2022 11:00:27 +0800
Message-ID: <373fd5ac-82ae-bf12-279b-1fd6f4b79364@huawei.com>
In-Reply-To: <20220809171854.3725722-4-willy@infradead.org>
References: <20220809171854.3725722-1-willy@infradead.org> <20220809171854.3725722-4-willy@infradead.org>

On 2022/8/10 1:18, Matthew Wilcox (Oracle) wrote:
> This API makes more sense for slab to use and it works perfectly
> well for swap.
>
> Signed-off-by: Matthew Wilcox (Oracle)
> Reviewed-by: David Hildenbrand
> Reviewed-by: William Kucharski

Looks good to me. Thanks.

Reviewed-by: Miaohe Lin

> ---
>  mm/internal.h   |  4 ++--
>  mm/page_alloc.c | 18 +++++++++---------
>  mm/swap.c       |  2 +-
>  3 files changed, 12 insertions(+), 12 deletions(-)
>
> diff --git a/mm/internal.h b/mm/internal.h
> index 785409805ed7..08d0881223cf 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -362,8 +362,8 @@ extern void post_alloc_hook(struct page *page, unsigned int order,
>  					gfp_t gfp_flags);
>  extern int user_min_free_kbytes;
>
> -extern void free_unref_page(struct page *page, unsigned int order);
> -extern void free_unref_page_list(struct list_head *list);
> +void free_frozen_pages(struct page *, unsigned int order);
> +void free_unref_page_list(struct list_head *list);
>
>  extern void zone_pcp_update(struct zone *zone, int cpu_online);
>  extern void zone_pcp_reset(struct zone *zone);
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 04260b5a7699..30e7a5974d39 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -761,14 +761,6 @@ static inline bool pcp_allowed_order(unsigned int order)
>  	return false;
>  }
>
> -static inline void free_frozen_pages(struct page *page, unsigned int order)
> -{
> -	if (pcp_allowed_order(order))		/* Via pcp? */
> -		free_unref_page(page, order);
> -	else
> -		__free_pages_ok(page, order, FPI_NONE);
> -}
> -
>  /*
>   * Higher-order pages are called "compound pages".  They are structured thusly:
>   *
> @@ -3464,7 +3456,7 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
>  /*
>   * Free a pcp page
>   */
> -void free_unref_page(struct page *page, unsigned int order)
> +static void free_unref_page(struct page *page, unsigned int order)
>  {
>  	unsigned long flags;
>  	unsigned long __maybe_unused UP_flags;
> @@ -3504,6 +3496,14 @@ void free_unref_page(struct page *page, unsigned int order)
>  	pcp_trylock_finish(UP_flags);
>  }
>
> +void free_frozen_pages(struct page *page, unsigned int order)
> +{
> +	if (pcp_allowed_order(order))		/* Via pcp? */
> +		free_unref_page(page, order);
> +	else
> +		__free_pages_ok(page, order, FPI_NONE);
> +}
> +
>  /*
>   * Free a list of 0-order pages
>   */
> diff --git a/mm/swap.c b/mm/swap.c
> index 6525011b715e..647f6f77193f 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -102,7 +102,7 @@ static void __folio_put_small(struct folio *folio)
>  {
>  	__page_cache_release(folio);
>  	mem_cgroup_uncharge(folio);
> -	free_unref_page(&folio->page, 0);
> +	free_frozen_pages(&folio->page, 0);
>  }
>
>  static void __folio_put_large(struct folio *folio)
>