From: Yunsheng Lin <linyunsheng@huawei.com>
To: Luiz Capitulino, linux-mm@kvack.org
Subject: Re: [PATCH v2 1/2] mm: alloc_pages_bulk_noprof: drop page_list argument
Date: Wed, 25 Dec 2024 20:36:04 +0800
Content-Type: text/plain; charset="UTF-8"
On 2024/12/24 6:00, Luiz Capitulino wrote:
>  /*
> - * __alloc_pages_bulk - Allocate a number of order-0 pages to a list or array
> + * __alloc_pages_bulk - Allocate a number of order-0 pages to an array
>   * @gfp: GFP flags for the allocation
>   * @preferred_nid: The preferred NUMA node ID to allocate from
>   * @nodemask: Set of nodes to allocate from, may be NULL
> - * @nr_pages: The number of pages desired on the list or array
> - * @page_list: Optional list to store the allocated pages
> - * @page_array: Optional array to store the pages
> + * @nr_pages: The number of pages desired in the array
> + * @page_array: Array to store the pages
>   *
>   * This is a batched version of the page allocator that attempts to
> - * allocate nr_pages quickly. Pages are added to page_list if page_list
> - * is not NULL, otherwise it is assumed that the page_array is valid.
> + * allocate nr_pages quickly. Pages are added to the page_array.
>   *
> - * For lists, nr_pages is the number of pages that should be allocated.
> - *
> - * For arrays, only NULL elements are populated with pages and nr_pages
> + * Note that only NULL elements are populated with pages and nr_pages

This is not really related to this patch, but while we are at it: the
above seems like odd behavior. By roughly looking at all the callers of
this API, it seems like only the below callers rely on it:

fs/erofs/zutil.c: z_erofs_gbuf_growsize()
fs/xfs/xfs_buf.c: xfs_buf_alloc_pages()

It seems quite straightforward to change those callers so that they do
not rely on this behavior, and we might be able to avoid some checking
by removing it?

>   * is the maximum number of pages that will be stored in the array.
>   *
> - * Returns the number of pages on the list or array.
> + * Returns the number of pages in the array.
>   */
>  unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
>  			nodemask_t *nodemask, int nr_pages,
> -			struct list_head *page_list,
>  			struct page **page_array)
>  {
>  	struct page *page;
> @@ -4570,7 +4565,7 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
>  	 * Skip populated array elements to determine if any pages need
>  	 * to be allocated before disabling IRQs.
>  	 */
> -	while (page_array && nr_populated < nr_pages && page_array[nr_populated])
> +	while (nr_populated < nr_pages && page_array[nr_populated])
>  		nr_populated++;

The above check might be avoided as well, as mentioned above.

> 
>  	/* No pages requested? */
> @@ -4578,7 +4573,7 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
>  		goto out;
> 
>  	/* Already populated array? */
> -	if (unlikely(page_array && nr_pages - nr_populated == 0))
> +	if (unlikely(nr_pages - nr_populated == 0))
>  		goto out;
> 
>  	/* Bulk allocator does not support memcg accounting. */
> @@ -4660,7 +4655,7 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
>  	while (nr_populated < nr_pages) {
> 
>  		/* Skip existing pages */
> -		if (page_array && page_array[nr_populated]) {
> +		if (page_array[nr_populated]) {

Similarly here.

>  			nr_populated++;
>  			continue;
>  		}