From: Shay Agroskin
To: Mel Gorman
CC: Andrew Morton, Chuck Lever, Jesper Dangaard Brouer, LKML, Linux-Net, Linux-MM, Linux-NFS
Subject: Re: [PATCH 2/5] mm/page_alloc: Add a bulk page allocator
Date: Wed, 10 Mar 2021 13:04:17 +0200
In-Reply-To: <20210301161200.18852-3-mgorman@techsingularity.net>
References: <20210301161200.18852-1-mgorman@techsingularity.net>
 <20210301161200.18852-3-mgorman@techsingularity.net>
User-agent: mu4e 1.4.15; emacs 27.1

Mel Gorman writes:

> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index 8572a1474e16..4903d1cc48dc 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -515,6 +515,10 @@ static inline int arch_make_page_accessible(struct page *page)
>  }
>  #endif
>  
> +int __alloc_pages_bulk_nodemask(gfp_t gfp_mask, int preferred_nid,
> +				nodemask_t *nodemask, int nr_pages,
> +				struct list_head *list);
> +
>  struct page *
>  __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
> 							nodemask_t *nodemask);
> @@ -525,6 +529,14 @@ __alloc_pages(gfp_t gfp_mask, unsigned int order, int preferred_nid)
>  	return __alloc_pages_nodemask(gfp_mask, order, preferred_nid, NULL);
>  }
>  
> +/* Bulk allocate order-0 pages */
> +static inline unsigned long
> +alloc_pages_bulk(gfp_t gfp_mask, unsigned long nr_pages, struct list_head *list)
> +{
> +	return __alloc_pages_bulk_nodemask(gfp_mask, numa_mem_id(), NULL,
> +						nr_pages, list);

Is the indentation of the second line intentional? Why not align it with the first argument (gfp_mask)?
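
Just to illustrate the nit, this is the alignment I had in mind for the continuation line (the same call, whitespace only, aligned with the opening parenthesis):

	return __alloc_pages_bulk_nodemask(gfp_mask, numa_mem_id(), NULL,
					   nr_pages, list);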
> +}
> +
>  /*
>   * Allocate pages, preferring the node given as nid. The node must be valid and
>   * online. For more general interface, see alloc_pages_node().
> @@ -594,6 +606,7 @@ void * __meminit alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask);
>  
>  extern void __free_pages(struct page *page, unsigned int order);
>  extern void free_pages(unsigned long addr, unsigned int order);
> +extern void free_pages_bulk(struct list_head *list);
>  
>  struct page_frag_cache;
>  extern void __page_frag_cache_drain(struct page *page, unsigned int count);
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3e4b29ee2b1e..ff1e55793786 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4436,6 +4436,21 @@ static void wake_all_kswapds(unsigned int order, gfp_t gfp_mask,
>  	}
>  }

...

>  
> +/*
> + * This is a batched version of the page allocator that attempts to
> + * allocate nr_pages quickly from the preferred zone and add them to list.
> + */
> +int __alloc_pages_bulk_nodemask(gfp_t gfp_mask, int preferred_nid,
> +			nodemask_t *nodemask, int nr_pages,
> +			struct list_head *alloc_list)
> +{
> +	struct page *page;
> +	unsigned long flags;
> +	struct zone *zone;
> +	struct zoneref *z;
> +	struct per_cpu_pages *pcp;
> +	struct list_head *pcp_list;
> +	struct alloc_context ac;
> +	gfp_t alloc_mask;
> +	unsigned int alloc_flags;
> +	int alloced = 0;

Does 'alloced' count the number of allocated pages? Would you mind renaming it to 'allocated'?

> +
> +	if (nr_pages == 1)
> +		goto failed;
> +
> +	/* May set ALLOC_NOFRAGMENT, fragmentation will return 1 page. */
> +	if (!prepare_alloc_pages(gfp_mask, 0, preferred_nid, nodemask, &ac, &alloc_mask, &alloc_flags))
> +		return 0;
> +	gfp_mask = alloc_mask;
> +
> +	/* Find an allowed local zone that meets the high watermark. */
> +	for_each_zone_zonelist_nodemask(zone, z, ac.zonelist, ac.highest_zoneidx, ac.nodemask) {
> +		unsigned long mark;
> +
> +		if (cpusets_enabled() && (alloc_flags & ALLOC_CPUSET) &&
> +		    !__cpuset_zone_allowed(zone, gfp_mask)) {
> +			continue;
> +		}
> +
> +		if (nr_online_nodes > 1 && zone != ac.preferred_zoneref->zone &&
> +		    zone_to_nid(zone) != zone_to_nid(ac.preferred_zoneref->zone)) {
> +			goto failed;
> +		}
> +
> +		mark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK) + nr_pages;
> +		if (zone_watermark_fast(zone, 0, mark,
> +				zonelist_zone_idx(ac.preferred_zoneref),
> +				alloc_flags, gfp_mask)) {
> +			break;
> +		}
> +	}
> +	if (!zone)
> +		return 0;
> +
> +	/* Attempt the batch allocation */
> +	local_irq_save(flags);
> +	pcp = &this_cpu_ptr(zone->pageset)->pcp;
> +	pcp_list = &pcp->lists[ac.migratetype];
> +
> +	while (alloced < nr_pages) {
> +		page = __rmqueue_pcplist(zone, ac.migratetype, alloc_flags,
> +							pcp, pcp_list);

Same indentation comment as before.

> +		if (!page)
> +			break;
> +
> +		prep_new_page(page, 0, gfp_mask, 0);
> +		list_add(&page->lru, alloc_list);
> +		alloced++;
> +	}
> +
> +	if (!alloced)
> +		goto failed_irq;
> +
> +	if (alloced) {
> +		__count_zid_vm_events(PGALLOC, zone_idx(zone), alloced);
> +		zone_statistics(zone, zone);
> +	}
> +
> +	local_irq_restore(flags);
> +
> +	return alloced;
> +
> +failed_irq:
> +	local_irq_restore(flags);
> +
> +failed:
> +	page = __alloc_pages_nodemask(gfp_mask, 0, preferred_nid, nodemask);
> +	if (page) {
> +		alloced++;
> +		list_add(&page->lru, alloc_list);
> +	}
> +
> +	return alloced;
> +}
> +EXPORT_SYMBOL_GPL(__alloc_pages_bulk_nodemask);
> +
>  /*
>   * This is the 'heart' of the zoned buddy allocator.
>   */
> @@ -4981,8 +5092,6 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
>  		return NULL;
>  	}
>  
> -	gfp_mask &= gfp_allowed_mask;
> -	alloc_mask = gfp_mask;
>  	if (!prepare_alloc_pages(gfp_mask, order, preferred_nid, nodemask, &ac, &alloc_mask, &alloc_flags))
>  		return NULL;
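
For completeness, this is roughly how I picture a consumer of the new API once the series lands. It is only a sketch against this patch, not code from the series: rx_ring_refill and nr_needed are made-up names, and handing the page to the hardware is elided.

	#include <linux/gfp.h>	/* alloc_pages_bulk(), per this patch */
	#include <linux/list.h>
	#include <linux/mm.h>

	/* Hypothetical caller of the proposed bulk allocator (illustration only) */
	static int rx_ring_refill(unsigned long nr_needed)
	{
		LIST_HEAD(page_list);	/* pages come back linked via page->lru */
		struct page *page, *next;
		unsigned long allocated;

		/* May return fewer pages than requested; 0 means total failure. */
		allocated = alloc_pages_bulk(GFP_KERNEL, nr_needed, &page_list);
		if (!allocated)
			return -ENOMEM;

		list_for_each_entry_safe(page, next, &page_list, lru) {
			list_del(&page->lru);
			/* ... post the page to the device RX ring ... */
		}

		return 0;
	}

If a caller needs to back out after a partial allocation instead, the new free_pages_bulk(&page_list) looks like the natural single call to return whatever is still on the list.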