From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH net-next v9 07/13] mm: page_frag: some minor refactoring before adding new API
From: Yunsheng Lin
To: Alexander H Duyck
CC: Andrew Morton
References: <20240625135216.47007-1-linyunsheng@huawei.com> <20240625135216.47007-8-linyunsheng@huawei.com>
Message-ID: <3f816006-2949-e81a-be6f-b0b63322a1d5@huawei.com>
Date: Wed, 3 Jul 2024 20:36:03 +0800
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
On 2024/7/2 23:30, Alexander H Duyck wrote:
> On Tue, 2024-06-25 at 21:52 +0800, Yunsheng Lin wrote:
>> Refactor common code from __page_frag_alloc_va_align()
>> to __page_frag_cache_refill(), so that the new API can
>> make use of it.
>>
>> CC: Alexander Duyck
>> Signed-off-by: Yunsheng Lin
>
> I am generally not a fan of the concept behind this patch. I really
> think we should keep the page_frag_cache_refill function to just
> allocating the page, or in this case the encoded_va, and populating only
> that portion of the struct.

As I understand it, that mainly depends on how you look at it; at least
it seems odd to me that one part of a struct is populated in one
function while the rest of the same struct is populated in another.

>
>> ---
>>  mm/page_frag_cache.c | 61 ++++++++++++++++++++++----------------------
>>  1 file changed, 31 insertions(+), 30 deletions(-)
>>
>> diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
>> index a3316dd50eff..4fd421d4f22c 100644
>> --- a/mm/page_frag_cache.c
>> +++ b/mm/page_frag_cache.c
>> @@ -29,10 +29,36 @@ static void *page_frag_cache_current_va(struct page_frag_cache *nc)
>>  static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
>>  					     gfp_t gfp_mask)
>>  {
>> -	struct page *page = NULL;
>> +	struct encoded_va *encoded_va = nc->encoded_va;
>>  	gfp_t gfp = gfp_mask;
>>  	unsigned int order;
>> +	struct page *page;
>> +
>> +	if (unlikely(!encoded_va))
>> +		goto alloc;
>> +
>> +	page = virt_to_page(encoded_va);
>> +	if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
>> +		goto alloc;
>> +
>> +	if (unlikely(encoded_page_pfmemalloc(encoded_va))) {
>> +		VM_BUG_ON(compound_order(page) !=
>> +			  encoded_page_order(encoded_va));
>> +		free_unref_page(page, encoded_page_order(encoded_va));
>> +		goto alloc;
>> +	}
>> +
>> +	/* OK, page count is 0, we can safely set it */
>> +	set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
>
> Why not just make this block of code a function unto itself? You put an
> if statement at the top that essentially is just merging two functions
> into one. Perhaps this logic could be __page_frag_cache_recharge, which
> would return an error if the page is busy or the wrong type.
> Then
> acting on that you could switch to the refill attempt.
>
> Also, thinking about it more, the set_page_count in this function and
> page_ref_add in the other can probably be merged into the recharge and
> refill functions, since they act directly on the encoded page and do
> not interact with the other parts of the page_frag_cache.

So we are agreed that the below is merged into
__page_frag_cache_recharge():

	set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
	nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;

and the below is merged into __page_frag_cache_refill()?

	page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
	nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;

>
>> +
>> +	/* reset page count bias and remaining of new frag */
>> +	nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
>> +	nc->remaining = page_frag_cache_page_size(encoded_va);
>
> These two parts are more or less agnostic to the setup and could be
> applied to refill or recharge. Also, one thought occurs to me. You were
> encoding "order" into the encoded VA. Why use that when your choices
> are either PAGE_FRAG_CACHE_MAX_SIZE or PAGE_SIZE? It should be a single
> bit and doesn't need a full byte to store that. That would allow you to
> reduce this down to just 2 bits, one for pfmemalloc and one for max
> order vs order 0.

I thought about that and actually implemented it, but it turned out to
be no better than encoding "order" into the encoded VA, at least in
terms of generated asm code size. With a single bit we still need a
check to decide between PAGE_FRAG_CACHE_MAX_SIZE and PAGE_SIZE, whereas
the stored order can also replace compound_order(page) when calling
free_unref_page().

>
>> +
>> +	return page;
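For reference, the VA-encoding trick this thread turns on can be sketched
in userspace C. This is only an illustration of the idea, not the kernel
code: the accessor names mirror the patch's encoded_page_order() /
encoded_page_pfmemalloc(), but encode_va(), the exact bit layout (low byte
for order, one bit above it for pfmemalloc), and the fixed PAGE_SHIFT of
12 are assumptions made up for the example:

```c
/*
 * Sketch: because a page-aligned virtual address always has its low
 * PAGE_SHIFT bits clear, small metadata (compound order, pfmemalloc
 * flag) can be stashed in those bits and masked off on use.
 */
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define ORDER_MASK	0xffUL		/* low byte: compound order */
#define PFMEMALLOC_BIT	0x100UL		/* next bit: pfmemalloc flag */

static inline uintptr_t encode_va(void *va, unsigned int order, int pfmemalloc)
{
	/* va must be page aligned, so the low bits are guaranteed clear */
	assert(((uintptr_t)va & (PAGE_SIZE - 1)) == 0);
	return (uintptr_t)va | order | (pfmemalloc ? PFMEMALLOC_BIT : 0);
}

static inline void *encoded_page_address(uintptr_t encoded_va)
{
	/* mask off the metadata bits to recover the original pointer */
	return (void *)(encoded_va & ~(PAGE_SIZE - 1));
}

static inline unsigned int encoded_page_order(uintptr_t encoded_va)
{
	return (unsigned int)(encoded_va & ORDER_MASK);
}

static inline int encoded_page_pfmemalloc(uintptr_t encoded_va)
{
	return !!(encoded_va & PFMEMALLOC_BIT);
}
```

Storing the full order (rather than a single max-order-vs-0 bit, as
suggested above) is what lets the stored value stand in for
compound_order(page) on the free path without an extra branch.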