Subject: Re: [PATCH net-next 2/6] page_frag: unify gfp bits for order 3 page allocation
From: Yunsheng Lin <linyunsheng@huawei.com>
To: Alexander H Duyck
CC: "Michael S. Tsirkin", Jason Wang, Andrew Morton, Eric Dumazet
Date: Mon, 8 Jan 2024 16:25:41 +0800
Message-ID: <1d40427d-78e3-ef40-a63f-206c0697bda2@huawei.com>
References: <20240103095650.25769-1-linyunsheng@huawei.com>
 <20240103095650.25769-3-linyunsheng@huawei.com>

On 2024/1/5 23:35, Alexander H Duyck wrote:
> On Wed, 2024-01-03 at 17:56 +0800, Yunsheng Lin wrote:
>> Currently there are three page frag implementations which all try to
>> allocate an order 3 page; if that fails, they fall back to allocating
>> an order 0 page. Each of them allows the order 3 allocation to fail
>> under certain conditions by using specific gfp bits.
>>
>> The gfp bits used for the order 3 allocation differ between the
>> implementations: __GFP_NOMEMALLOC is or'ed in to forbid access to
>> emergency reserve memory in __page_frag_cache_refill(), but not in the
>> other implementations, and __GFP_DIRECT_RECLAIM is masked off to avoid
>> direct reclaim in skb_page_frag_refill(), but not in
>> __page_frag_cache_refill().
>>
>> This patch unifies the gfp bits used by the different implementations
>> by or'ing in __GFP_NOMEMALLOC and masking off __GFP_DIRECT_RECLAIM for
>> the order 3 page allocation, avoiding possible pressure on mm.
>>
>> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
>> CC: Alexander Duyck
>> ---
>>  drivers/vhost/net.c | 2 +-
>>  mm/page_alloc.c     | 4 ++--
>>  net/core/sock.c     | 2 +-
>>  3 files changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
>> index f2ed7167c848..e574e21cc0ca 100644
>> --- a/drivers/vhost/net.c
>> +++ b/drivers/vhost/net.c
>> @@ -670,7 +670,7 @@ static bool vhost_net_page_frag_refill(struct vhost_net *net, unsigned int sz,
>>  		/* Avoid direct reclaim but allow kswapd to wake */
>>  		pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
>>  					  __GFP_COMP | __GFP_NOWARN |
>> -					  __GFP_NORETRY,
>> +					  __GFP_NORETRY | __GFP_NOMEMALLOC,
>>  					  SKB_FRAG_PAGE_ORDER);
>>  		if (likely(pfrag->page)) {
>>  			pfrag->size = PAGE_SIZE << SKB_FRAG_PAGE_ORDER;
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 9a16305cf985..1f0b36dd81b5 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -4693,8 +4693,8 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
>>  	gfp_t gfp = gfp_mask;
>>
>>  #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
>> -	gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY |
>> -		    __GFP_NOMEMALLOC;
>> +	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
>> +		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
>>  	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
>>  				PAGE_FRAG_CACHE_MAX_ORDER);
>>  	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
>> diff --git a/net/core/sock.c b/net/core/sock.c
>> index 446e945f736b..d643332c3ee5 100644
>> --- a/net/core/sock.c
>> +++ b/net/core/sock.c
>> @@ -2900,7 +2900,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t gfp)
>>  		/* Avoid direct reclaim but allow kswapd to wake */
>>  		pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
>>  					  __GFP_COMP | __GFP_NOWARN |
>> -					  __GFP_NORETRY,
>> +					  __GFP_NORETRY | __GFP_NOMEMALLOC,
>>  					  SKB_FRAG_PAGE_ORDER);
>>  		if (likely(pfrag->page)) {
>>  			pfrag->size = PAGE_SIZE << SKB_FRAG_PAGE_ORDER;
>
> Looks fine to me.
>
> One thing you may want to consider would be to place this all in an
> inline function that could just consolidate all the code.

Do you think it is possible to further unify the implementations of
'struct page_frag_cache' and 'struct page_frag', so that adding an
inline function for the above becomes unnecessary?

> Reviewed-by: Alexander Duyck
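For reference, the consolidation suggested above could look roughly like the helper below. This is a standalone userspace sketch: the flag values are made-up stand-ins for the kernel's gfp bits (the real definitions live in include/linux/gfp_types.h), and page_frag_gfp() is a hypothetical name rather than an existing kernel API.

```c
#include <assert.h>

/* Stand-in values for illustration only; the kernel's actual gfp bit
 * values differ. */
typedef unsigned int gfp_t;
#define __GFP_DIRECT_RECLAIM 0x001u
#define __GFP_KSWAPD_RECLAIM 0x002u
#define __GFP_COMP           0x004u
#define __GFP_NOWARN         0x008u
#define __GFP_NORETRY        0x010u
#define __GFP_NOMEMALLOC     0x020u

/* Hypothetical helper consolidating the gfp-bit tweaks the patch applies
 * at all three call sites for the order 3 attempt: mask off direct
 * reclaim (kswapd may still be woken), forbid dipping into emergency
 * reserves, suppress warnings, and fail fast so the caller can fall
 * back to an order 0 allocation. */
static inline gfp_t page_frag_gfp(gfp_t gfp)
{
	return (gfp & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP | __GFP_NOWARN |
	       __GFP_NORETRY | __GFP_NOMEMALLOC;
}
```

Each call site would then reduce to something like `alloc_pages(page_frag_gfp(gfp), SKB_FRAG_PAGE_ORDER)`, keeping the three implementations in sync by construction.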