From: Mel Gorman <mgorman@techsingularity.net>
To: Andrew Morton
Cc: Vlastimil Babka, Chuck Lever, Jesper Dangaard Brouer,
	Christoph Hellwig, Alexander Duyck, Matthew Wilcox, LKML,
	Linux-Net, Linux-MM, Linux-NFS, Mel Gorman
Subject: [PATCH 3/3] mm/page_alloc: Add an array-based interface to the bulk page allocator
Date: Mon, 22 Mar 2021 09:18:45 +0000
Message-Id: <20210322091845.16437-4-mgorman@techsingularity.net>
In-Reply-To: <20210322091845.16437-1-mgorman@techsingularity.net>
References: <20210322091845.16437-1-mgorman@techsingularity.net>
The proposed callers for the bulk allocator store pages from the bulk
allocator in an array. This patch adds an array-based interface to the API
to avoid multiple list iterations. The page list interface is preserved
to avoid requiring all users of the bulk API to allocate and manage enough
storage to store the pages.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 include/linux/gfp.h | 13 ++++++--
 mm/page_alloc.c     | 75 ++++++++++++++++++++++++++++++++++-----------
 2 files changed, 67 insertions(+), 21 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 4a304fd39916..fb6234e1fe59 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -520,13 +520,20 @@ struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
 
 int __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 				nodemask_t *nodemask, int nr_pages,
-				struct list_head *list);
+				struct list_head *page_list,
+				struct page **page_array);
 
 /* Bulk allocate order-0 pages */
 static inline unsigned long
-alloc_pages_bulk(gfp_t gfp, unsigned long nr_pages, struct list_head *list)
+alloc_pages_bulk_list(gfp_t gfp, unsigned long nr_pages, struct list_head *list)
 {
-	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, list);
+	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, list, NULL);
+}
+
+static inline unsigned long
+alloc_pages_bulk_array(gfp_t gfp, unsigned long nr_pages, struct page **page_array)
+{
+	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, NULL, page_array);
 }
 
 /*
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3f4d56854c74..c83d38dfe936 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4966,22 +4966,31 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
 }
 
 /*
- * __alloc_pages_bulk - Allocate a number of order-0 pages to a list
+ * __alloc_pages_bulk - Allocate a number of order-0 pages to a list or array
  * @gfp: GFP flags for the allocation
  * @preferred_nid: The preferred NUMA node ID to allocate from
  * @nodemask: Set of nodes to allocate from, may be NULL
  * @nr_pages: The number of pages requested
- * @page_list: List to store the allocated pages, must be empty
+ * @page_list: Optional list to store the allocated pages
+ * @page_array: Optional array to store the pages
  *
  * This is a batched version of the page allocator that attempts to
- * allocate nr_pages quickly and add them to a list. The list must be
- * empty to allow new pages to be prepped with IRQs enabled.
+ * allocate nr_pages quickly. Pages are added to page_list if page_list
+ * is not NULL, otherwise it is assumed that the page_array is valid.
  *
- * Returns the number of pages allocated.
+ * For lists, nr_pages is the number of pages that should be allocated.
+ *
+ * For arrays, only NULL elements are populated with pages and nr_pages
+ * is the maximum number of pages that will be stored in the array. Note
+ * that arrays with NULL holes in the middle may return prematurely.
+ *
+ * Returns the number of pages added to the page_list or the known
+ * number of populated elements in the page_array.
  */
 int __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 			nodemask_t *nodemask, int nr_pages,
-			struct list_head *page_list)
+			struct list_head *page_list,
+			struct page **page_array)
 {
 	struct page *page;
 	unsigned long flags;
@@ -4992,14 +5001,23 @@ int __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 	struct alloc_context ac;
 	gfp_t alloc_gfp;
 	unsigned int alloc_flags;
-	int allocated = 0;
+	int nr_populated = 0, prep_index = 0;
 
 	if (WARN_ON_ONCE(nr_pages <= 0))
 		return 0;
 
-	if (WARN_ON_ONCE(!list_empty(page_list)))
+	if (WARN_ON_ONCE(page_list && !list_empty(page_list)))
 		return 0;
 
+	/* Skip populated array elements. */
+	if (page_array) {
+		while (nr_populated < nr_pages && page_array[nr_populated])
+			nr_populated++;
+		if (nr_populated == nr_pages)
+			return nr_populated;
+		prep_index = nr_populated;
+	}
+
 	if (nr_pages == 1)
 		goto failed;
 
@@ -5044,12 +5062,22 @@ int __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 	pcp = &this_cpu_ptr(zone->pageset)->pcp;
 	pcp_list = &pcp->lists[ac.migratetype];
 
-	while (allocated < nr_pages) {
+	while (nr_populated < nr_pages) {
+		/*
+		 * Stop allocating if the next index has a populated
+		 * page or the page will be prepared a second time when
+		 * IRQs are enabled.
+		 */
+		if (page_array && page_array[nr_populated]) {
+			nr_populated++;
+			break;
+		}
+
 		page = __rmqueue_pcplist(zone, ac.migratetype, alloc_flags,
 								pcp, pcp_list);
 		if (!page) {
 			/* Try and get at least one page */
-			if (!allocated)
+			if (!nr_populated)
 				goto failed_irq;
 			break;
 		}
@@ -5063,17 +5091,25 @@ int __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 		__count_zid_vm_events(PGALLOC, zone_idx(zone), 1);
 		zone_statistics(ac.preferred_zoneref->zone, zone);
 
-		list_add(&page->lru, page_list);
-		allocated++;
+		if (page_list)
+			list_add(&page->lru, page_list);
+		else
+			page_array[nr_populated] = page;
+		nr_populated++;
 	}
 
 	local_irq_restore(flags);
 
 	/* Prep pages with IRQs enabled. */
-	list_for_each_entry(page, page_list, lru)
-		prep_new_page(page, 0, gfp, 0);
+	if (page_list) {
+		list_for_each_entry(page, page_list, lru)
+			prep_new_page(page, 0, gfp, 0);
+	} else {
+		while (prep_index < nr_populated)
+			prep_new_page(page_array[prep_index++], 0, gfp, 0);
+	}
 
-	return allocated;
+	return nr_populated;
 
 failed_irq:
 	local_irq_restore(flags);
@@ -5081,11 +5117,14 @@ int __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 failed:
 	page = __alloc_pages(gfp, 0, preferred_nid, nodemask);
 	if (page) {
-		list_add(&page->lru, page_list);
-		allocated = 1;
+		if (page_list)
+			list_add(&page->lru, page_list);
+		else
+			page_array[nr_populated] = page;
+		nr_populated++;
 	}
 
-	return allocated;
+	return nr_populated;
 }
 EXPORT_SYMBOL_GPL(__alloc_pages_bulk);
 
-- 
2.26.2
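
For reviewers, a minimal caller sketch (not part of the patch) of how the
new array interface might be used to refill a partially-consumed page
pool. Only alloc_pages_bulk_array() comes from this series; the names
pool, POOL_SIZE, pool_refill() and pool_consume_one() are hypothetical.

/*
 * Illustrative sketch only: the caller owns a fixed-size array, clears
 * slots as pages are consumed, and refills the NULL holes in bulk.
 * Only NULL elements are populated, and the allocator may stop early
 * at the first already-populated slot it meets after a hole, so the
 * return value is the known number of populated elements rather than
 * a promise of a full array.
 */
#include <linux/gfp.h>
#include <linux/mm.h>

#define POOL_SIZE 32	/* hypothetical pool depth */

static struct page *pool[POOL_SIZE];	/* NULL slots need refilling */

static unsigned long pool_refill(void)
{
	return alloc_pages_bulk_array(GFP_KERNEL, POOL_SIZE, pool);
}

static void pool_consume_one(void)
{
	struct page *page = pool[0];

	if (page) {
		pool[0] = NULL;		/* mark the slot for the next refill */
		__free_page(page);	/* stand-in for real use of the page */
	}
}

Compared to draining a list filled by alloc_pages_bulk_list() into the
caller's own storage, this avoids a second iteration over the pages,
which is the motivation stated in the changelog above.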