From mboxrd@z Thu Jan  1 00:00:00 1970
From: Mel Gorman <mgorman@techsingularity.net>
To: Andrew Morton
Cc: Chuck Lever, Jesper Dangaard Brouer, Christoph Hellwig,
	Alexander Duyck, Vlastimil Babka, Matthew Wilcox,
	Ilias Apalodimas, LKML, Linux-Net, Linux-MM, Linux-NFS,
	Mel Gorman
Subject: [PATCH 3/9] mm/page_alloc: Add an array-based interface to the bulk page allocator
Date: Thu, 25 Mar 2021 11:42:22 +0000
Message-Id: <20210325114228.27719-4-mgorman@techsingularity.net>
In-Reply-To: <20210325114228.27719-1-mgorman@techsingularity.net>
References: <20210325114228.27719-1-mgorman@techsingularity.net>
The proposed callers of the bulk allocator store the returned pages in
an array. This patch adds an array-based interface to the API to avoid
multiple list iterations. The page list interface is preserved so that
users of the bulk API are not all required to allocate and manage
enough storage to hold the pages.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
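As an illustrative sketch only (editor's example, not part of the
patch): a caller might use the two interfaces as below. The helper name
bulk_alloc_example and the size NR_BULK_PAGES are invented for the
example; the allocator calls are the API added by this series.

#include <linux/gfp.h>
#include <linux/list.h>

#define NR_BULK_PAGES 16	/* invented size for this sketch */

static void bulk_alloc_example(void)
{
	LIST_HEAD(page_list);
	struct page *pages[NR_BULK_PAGES] = { NULL };
	unsigned long nr;

	/* List interface: allocated pages come back linked via page->lru. */
	nr = alloc_pages_bulk_list(GFP_KERNEL, NR_BULK_PAGES, &page_list);

	/*
	 * Array interface: only NULL elements are populated, so a
	 * partially-filled array can be topped up in place.
	 */
	nr = alloc_pages_bulk_array(GFP_KERNEL, NR_BULK_PAGES, pages);
	if (nr < NR_BULK_PAGES) {
		/* Partial allocation: some slots may still be NULL. */
	}
}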
---
 include/linux/gfp.h | 13 +++++++---
 mm/page_alloc.c     | 60 +++++++++++++++++++++++++++++++++------------
 2 files changed, 54 insertions(+), 19 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 4a304fd39916..fb6234e1fe59 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -520,13 +520,20 @@ struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
 
 int __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 				nodemask_t *nodemask, int nr_pages,
-				struct list_head *list);
+				struct list_head *page_list,
+				struct page **page_array);
 
 /* Bulk allocate order-0 pages */
 static inline unsigned long
-alloc_pages_bulk(gfp_t gfp, unsigned long nr_pages, struct list_head *list)
+alloc_pages_bulk_list(gfp_t gfp, unsigned long nr_pages, struct list_head *list)
 {
-	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, list);
+	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, list, NULL);
+}
+
+static inline unsigned long
+alloc_pages_bulk_array(gfp_t gfp, unsigned long nr_pages, struct page **page_array)
+{
+	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, NULL, page_array);
 }
 
 /*
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index eb547470a7e4..be1e33a4df39 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4966,21 +4966,29 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
 }
 
 /*
- * __alloc_pages_bulk - Allocate a number of order-0 pages to a list
+ * __alloc_pages_bulk - Allocate a number of order-0 pages to a list or array
  * @gfp: GFP flags for the allocation
  * @preferred_nid: The preferred NUMA node ID to allocate from
  * @nodemask: Set of nodes to allocate from, may be NULL
- * @nr_pages: The number of pages desired on the list
- * @page_list: List to store the allocated pages
+ * @nr_pages: The number of pages desired on the list or array
+ * @page_list: Optional list to store the allocated pages
+ * @page_array: Optional array to store the pages
  *
  * This is a batched version of the page allocator that attempts to
- * allocate nr_pages quickly and add them to a list.
+ * allocate nr_pages quickly. Pages are added to page_list if page_list
+ * is not NULL, otherwise it is assumed that the page_array is valid.
  *
- * Returns the number of pages on the list.
+ * For lists, nr_pages is the number of pages that should be allocated.
+ *
+ * For arrays, only NULL elements are populated with pages and nr_pages
+ * is the maximum number of pages that will be stored in the array.
+ *
+ * Returns the number of pages on the list or array.
  */
 int __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 			nodemask_t *nodemask, int nr_pages,
-			struct list_head *page_list)
+			struct list_head *page_list,
+			struct page **page_array)
 {
 	struct page *page;
 	unsigned long flags;
@@ -4991,13 +4999,20 @@ int __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 	struct alloc_context ac;
 	gfp_t alloc_gfp;
 	unsigned int alloc_flags;
-	int allocated = 0;
+	int nr_populated = 0;
 
 	if (WARN_ON_ONCE(nr_pages <= 0))
 		return 0;
 
+	/*
+	 * Skip populated array elements to determine if any pages need
+	 * to be allocated before disabling IRQs.
+	 */
+	while (page_array && nr_populated < nr_pages && page_array[nr_populated])
+		nr_populated++;
+
 	/* Use the single page allocator for one page. */
-	if (nr_pages == 1)
+	if (nr_pages - nr_populated == 1)
 		goto failed;
 
 	/* May set ALLOC_NOFRAGMENT, fragmentation will return 1 page. */
@@ -5041,12 +5056,19 @@ int __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 	pcp = &this_cpu_ptr(zone->pageset)->pcp;
 	pcp_list = &pcp->lists[ac.migratetype];
 
-	while (allocated < nr_pages) {
+	while (nr_populated < nr_pages) {
+
+		/* Skip existing pages */
+		if (page_array && page_array[nr_populated]) {
+			nr_populated++;
+			continue;
+		}
+
 		page = __rmqueue_pcplist(zone, ac.migratetype, alloc_flags,
 								pcp, pcp_list);
 		if (!page) {
 			/* Try and get at least one page */
-			if (!allocated)
+			if (!nr_populated)
 				goto failed_irq;
 			break;
 		}
@@ -5061,13 +5083,16 @@ int __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 		zone_statistics(ac.preferred_zoneref->zone, zone);
 
 		prep_new_page(page, 0, gfp, 0);
-		list_add(&page->lru, page_list);
-		allocated++;
+		if (page_list)
+			list_add(&page->lru, page_list);
+		else
+			page_array[nr_populated] = page;
+		nr_populated++;
 	}
 
 	local_irq_restore(flags);
 
-	return allocated;
+	return nr_populated;
 
 failed_irq:
 	local_irq_restore(flags);
@@ -5075,11 +5100,14 @@ int __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 failed:
 	page = __alloc_pages(gfp, 0, preferred_nid, nodemask);
 	if (page) {
-		list_add(&page->lru, page_list);
-		allocated = 1;
+		if (page_list)
+			list_add(&page->lru, page_list);
+		else
+			page_array[nr_populated] = page;
+		nr_populated++;
 	}
 
-	return allocated;
+	return nr_populated;
 }
 EXPORT_SYMBOL_GPL(__alloc_pages_bulk);
 
-- 
2.26.2
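A hypothetical sketch of the NULL-slot semantics documented in the
kernel-doc above (an editor's illustration, not part of the submitted
patch): consumers can clear array slots as pages are taken, and a later
bulk call refills only the cleared slots. The names refill_ring and
RING_SIZE are invented here.

#include <linux/gfp.h>
#include <linux/printk.h>

#define RING_SIZE 32			/* invented size for this sketch */

static struct page *ring[RING_SIZE];	/* consumers NULL out used slots */

static void refill_ring(void)
{
	unsigned long populated;

	/*
	 * Only the NULL elements of ring[] are allocated; slots that
	 * still hold a page are skipped. The return value counts
	 * populated slots, not just newly allocated pages.
	 */
	populated = alloc_pages_bulk_array(GFP_ATOMIC, RING_SIZE, ring);
	if (populated < RING_SIZE)
		pr_debug("ring refill incomplete: %lu/%d\n",
			 populated, RING_SIZE);
}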