From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <6b56af9b-2670-4a15-84e8-314443ae590c@huawei.com>
Date: Fri, 3 Jan 2025 19:29:30 +0800
Subject: Re: [PATCH v2 1/2] mm: alloc_pages_bulk_noprof: drop page_list argument
From: Yunsheng Lin <linyunsheng@huawei.com>
To: Mel Gorman
CC: Luiz Capitulino
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
On 2025/1/3 4:00, Mel Gorman wrote:
> On Wed, Dec 25, 2024 at 08:36:04PM +0800, Yunsheng Lin wrote:
>> On 2024/12/24 6:00, Luiz Capitulino wrote:
>>
>>>  /*
>>> - * __alloc_pages_bulk - Allocate a number of order-0 pages to a list or array
>>> + * __alloc_pages_bulk - Allocate a number of order-0 pages to an array
>>>   * @gfp: GFP flags for the allocation
>>>   * @preferred_nid: The preferred NUMA node ID to allocate from
>>>   * @nodemask: Set of nodes to allocate from, may be NULL
>>> - * @nr_pages: The number of pages desired on the list or array
>>> - * @page_list: Optional list to store the allocated pages
>>> - * @page_array: Optional array to store the pages
>>> + * @nr_pages: The number of pages desired in the array
>>> + * @page_array: Array to store the pages
>>>   *
>>>   * This is a batched version of the page allocator that attempts to
>>> - * allocate nr_pages quickly. Pages are added to page_list if page_list
>>> - * is not NULL, otherwise it is assumed that the page_array is valid.
>>> + * allocate nr_pages quickly. Pages are added to the page_array.
>>>   *
>>> - * For lists, nr_pages is the number of pages that should be allocated.
>>> - *
>>> - * For arrays, only NULL elements are populated with pages and nr_pages
>>> + * Note that only NULL elements are populated with pages and nr_pages
>>
>> It is not really related to this patch, but while we are at this, the
>> above seems like an odd behavior. By roughly looking at all the callers
>> of that API, it seems like only the below callers rely on that?
>> fs/erofs/zutil.c: z_erofs_gbuf_growsize()
>> fs/xfs/xfs_buf.c: xfs_buf_alloc_pages()
>>
>> It seems quite straightforward to change the above callers to not rely
>> on the above behavior, and we might be able to avoid more checking by
>> removing the above behavior?
>>
>
> It was implemented that way for an early user, net/sunrpc/svc_xprt.c.
> The behaviour removes a burden from the caller to track the number of
> populated elements and then pass the exact number of pages that must be
> allocated. If the API does not handle that detail, each caller needs a
> similar state tracking implementation. As the overhead is going to be
> the same whether the API implements it once or each caller implements
> their own, it is simpler if there is just one implementation.

It seems quite straightforward to change the above use case to not rely
on that with something like the below?
diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
index 43c57124de52..52800bfddc86 100644
--- a/net/sunrpc/svc_xprt.c
+++ b/net/sunrpc/svc_xprt.c
@@ -670,19 +670,21 @@ static bool svc_alloc_arg(struct svc_rqst *rqstp)
 		pages = RPCSVC_MAXPAGES;
 	}
-	for (filled = 0; filled < pages; filled = ret) {
-		ret = alloc_pages_bulk_array(GFP_KERNEL, pages,
-					     rqstp->rq_pages);
-		if (ret > filled)
+	for (filled = 0; filled < pages;) {
+		ret = alloc_pages_bulk_array(GFP_KERNEL, pages - filled,
+					     rqstp->rq_pages + filled);
+		if (ret) {
+			filled += ret;
 			/* Made progress, don't sleep yet */
 			continue;
+		}
 
 		set_current_state(TASK_IDLE);
 		if (svc_thread_should_stop(rqstp)) {
 			set_current_state(TASK_RUNNING);
 			return false;
 		}
 
-		trace_svc_alloc_arg_err(pages, ret);
+		trace_svc_alloc_arg_err(pages, filled);
 		memalloc_retry_wait(GFP_KERNEL);
 	}
 	rqstp->rq_page_end = &rqstp->rq_pages[pages];