From: Eric Dumazet <edumazet@google.com>
Date: Sun, 13 Mar 2022 14:36:07 -0700
Subject: Re: [mm/page_alloc] 8212a964ee: vm-scalability.throughput 30.5% improvement
To: Matthew Wilcox
Cc: Vlastimil Babka, kernel test robot, Mel Gorman, 0day robot, Michal Hocko,
 Shakeel Butt, Wei Xu, Greg Thelen, Hugh Dickins, David Rientjes, LKML,
 lkp@lists.01.org, "Huang, Ying", "Tang, Feng",
 zhengjun.xing@linux.intel.com, fengwei.yin@intel.com, Eric Dumazet,
 Andrew Morton, linux-mm

On Sun, Mar 13, 2022 at 2:27 PM Eric Dumazet wrote:
>
> On Sun, Mar 13, 2022 at 2:18 PM Matthew Wilcox wrote:
> >
> > On Sun, Mar 13, 2022 at 02:10:12PM -0700, Eric Dumazet wrote:
> > > @@ -3065,6 +3062,12 @@ static int rmqueue_bulk(struct zone *zone,
> > > unsigned int order,
> > >          */
> > >         __mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));
> > >         spin_unlock(&zone->lock);
> > > +       list_for_each_entry_safe(page, tmp, list, lru) {
> > > +               if (unlikely(check_pcp_refill(page))) {
> > > +                       list_del(&page->lru);
> > > +                       allocated--;
> > > +               }
> > > +       }
> >
> > ... you'd need to adjust __mod_zone_page_state() too, right?
>
> Probably !
> This was only to show the basic idea, as I said, not even compiled or tested :)

I can test the following:

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1804287c1b792b8aa0e964b17eb002b6b1115258..30a1abf40ea7e9104bfd24a42d9e0c8ebb152fc4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3024,7 +3024,9 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
                         unsigned long count, struct list_head *list,
                         int migratetype, unsigned int alloc_flags)
 {
+        struct page *page, *tmp;
         int i, allocated = 0;
+        int free_cma_pages = 0;
 
         /*
          * local_lock_irq held so equivalent to spin_lock_irqsave for
@@ -3032,14 +3034,10 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
          */
         spin_lock(&zone->lock);
         for (i = 0; i < count; ++i) {
-                struct page *page = __rmqueue(zone, order, migratetype,
-                                                                alloc_flags);
+                page = __rmqueue(zone, order, migratetype, alloc_flags);
                 if (unlikely(page == NULL))
                         break;
 
-                if (unlikely(check_pcp_refill(page)))
-                        continue;
-
                 /*
                  * Split buddy pages returned by expand() are received here in
                  * physical page order. The page is added to the tail of
@@ -3052,9 +3050,6 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
                  */
                 list_add_tail(&page->lru, list);
                 allocated++;
-                if (is_migrate_cma(get_pcppage_migratetype(page)))
-                        __mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
-                                              -(1 << order));
         }
 
         /*
@@ -3065,6 +3060,16 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
          */
         __mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));
         spin_unlock(&zone->lock);
+        list_for_each_entry_safe(page, tmp, list, lru) {
+                if (unlikely(check_pcp_refill(page))) {
+                        list_del(&page->lru);
+                        allocated--;
+                } else if (is_migrate_cma(get_pcppage_migratetype(page))) {
+                        free_cma_pages++;
+                }
+        }
+        if (free_cma_pages)
+                __mod_zone_page_state(zone, NR_FREE_CMA_PAGES, -(free_cma_pages << order));
         return allocated;
 }
 
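For what it's worth, the shape of the change can be sketched outside the kernel: do only the cheap
list surgery under the lock, then run the per-item validation and a single batched counter update
after unlocking. The toy program below is purely illustrative and is not MM code; struct pool,
struct item, grab_bulk() and the bad/cma flags are invented stand-ins for the free list,
check_pcp_refill() and the CMA accounting. The CMA-like counter is modelled as a C11 atomic
because, like a vmstat counter, it does not need the list lock.

/* Toy userspace model of the pattern in the patch above. All names invented. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct item {
        struct item *next;
        bool bad;       /* stand-in for a page failing check_pcp_refill() */
        bool cma;       /* stand-in for is_migrate_cma(get_pcppage_migratetype(page)) */
};

struct pool {
        pthread_mutex_t lock;
        struct item *free_list;         /* protected by lock */
        long nr_free;                   /* protected by lock */
        atomic_long nr_free_cma;        /* vmstat-like counter, updated without the lock */
};

/*
 * Move up to count items from p->free_list onto *out. Only the list
 * manipulation runs under the lock; validation and the batched CMA
 * counter update are deferred until after pthread_mutex_unlock().
 */
static int grab_bulk(struct pool *p, struct item **out, int count)
{
        struct item *head = NULL, *it, *next, **keep;
        int i, allocated = 0;
        long cma_batch = 0;

        pthread_mutex_lock(&p->lock);
        for (i = 0; i < count && p->free_list; i++) {
                it = p->free_list;              /* pop from the pool... */
                p->free_list = it->next;
                it->next = head;                /* ...push onto the batch */
                head = it;
        }
        p->nr_free -= i;                        /* cheap bookkeeping stays locked */
        pthread_mutex_unlock(&p->lock);

        /* Deferred pass: drop items failing validation, count CMA items once. */
        keep = &head;
        for (it = head; it; it = next) {
                next = it->next;
                if (it->bad) {
                        *keep = next;           /* unlink the bad item */
                        continue;
                }
                if (it->cma)
                        cma_batch++;
                keep = &it->next;
                allocated++;
        }

        if (cma_batch)
                atomic_fetch_sub(&p->nr_free_cma, cma_batch);

        *out = head;
        return allocated;
}

int main(void)
{
        struct item items[4] = {
                { .bad = false, .cma = true  },
                { .bad = true,  .cma = false },         /* filtered out after unlock */
                { .bad = false, .cma = false },
                { .bad = false, .cma = true  },
        };
        struct pool p = { .lock = PTHREAD_MUTEX_INITIALIZER, .nr_free = 4 };
        struct item *batch = NULL;
        int i, n;

        atomic_init(&p.nr_free_cma, 2);
        for (i = 0; i < 4; i++) {               /* build the free list */
                items[i].next = p.free_list;
                p.free_list = &items[i];
        }

        n = grab_bulk(&p, &batch, 4);
        printf("got %d items, nr_free_cma now %ld\n", n, atomic_load(&p.nr_free_cma));
        return 0;
}

Builds with gcc -std=c11 -pthread and prints "got 3 items, nr_free_cma now 0": one item is
dropped in the deferred pass and the two CMA-like items cost a single counter update.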