From: Uladzislau Rezki <urezki@gmail.com>
Date: Sat, 16 Oct 2021 18:27:00 +0200
To: Chen Wandun <chenwandun@huawei.com>
Cc: akpm@linux-foundation.org, shakeelb@google.com, npiggin@gmail.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, edumazet@google.com,
	wangkefeng.wang@huawei.com, guohanjun@huawei.com
Subject: Re: [PATCH] mm/vmalloc: introduce alloc_pages_bulk_array_mempolicy to accelerate memory allocation
Message-ID: <20211016162700.GA1914@pc638.lan>
References: <20210928121040.2547407-1-chenwandun@huawei.com>
 <20211014092952.1500982-1-chenwandun@huawei.com>
In-Reply-To: <20211014092952.1500982-1-chenwandun@huawei.com>

On Thu, Oct 14, 2021 at 05:29:52PM +0800, Chen Wandun wrote:
> It will cause significant performance regressions in some situations,
> as Andrew mentioned in [1]. The main situation is vmalloc: vmalloc
> allocates pages with NUMA_NO_NODE by default, which results in pages
> being allocated one by one.
>
> In order to solve this, __alloc_pages_bulk and the mempolicy should
> be considered at the same time:
>
> 1) If a node is specified in the memory allocation request, allocate
> all pages with __alloc_pages_bulk.
>
> 2) If memory is allocated by interleaving, calculate how many pages
> should be allocated on each node, and use __alloc_pages_bulk to
> allocate the pages on each node.
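
Just to spell out the arithmetic in 2) with a made-up example: for
nr_pages = 10 over a 4-node interleave set, each node is first given
10 / 4 = 2 pages, and the remainder of 10 - 4 * 2 = 2 pages is handed
out one page at a time to the first nodes in the rotation, i.e.
3 + 3 + 2 + 2 = 10 pages in total.
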
>
> [1]: https://lore.kernel.org/lkml/CALvZod4G3SzP3kWxQYn0fj+VgG-G3yWXz=gz17+3N57ru1iajw@mail.gmail.com/t/#m750c8e3231206134293b089feaa090590afa0f60
>
> Signed-off-by: Chen Wandun <chenwandun@huawei.com>
> ----------------
> based on "[PATCH] mm/vmalloc: fix numa spreading for large hash tables"
> ---
>  include/linux/gfp.h |  4 +++
>  mm/mempolicy.c      | 76 +++++++++++++++++++++++++++++++++++++++++++++
>  mm/vmalloc.c        | 19 +++---------
>  3 files changed, 85 insertions(+), 14 deletions(-)
>
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index 558299cb2970..b976c4177299 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -531,6 +531,10 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
>  				struct list_head *page_list,
>  				struct page **page_array);
>
> +unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp,
> +				unsigned long nr_pages,
> +				struct page **page_array);
> +
>  /* Bulk allocate order-0 pages */
>  static inline unsigned long
>  alloc_pages_bulk_list(gfp_t gfp, unsigned long nr_pages, struct list_head *list)
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 9f8cd1457829..f456c5eb8d10 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -2196,6 +2196,82 @@ struct page *alloc_pages(gfp_t gfp, unsigned order)
>  }
>  EXPORT_SYMBOL(alloc_pages);
>
> +unsigned long alloc_pages_bulk_array_interleave(gfp_t gfp,
> +		struct mempolicy *pol, unsigned long nr_pages,
> +		struct page **page_array)
> +{
> +	int nodes;
> +	unsigned long nr_pages_per_node;
> +	int delta;
> +	int i;
> +	unsigned long nr_allocated;
> +	unsigned long total_allocated = 0;
> +
> +	nodes = nodes_weight(pol->nodes);
> +	nr_pages_per_node = nr_pages / nodes;
> +	delta = nr_pages - nodes * nr_pages_per_node;
> +
> +	for (i = 0; i < nodes; i++) {
> +		if (delta) {
> +			nr_allocated = __alloc_pages_bulk(gfp,
> +					interleave_nodes(pol), NULL,
> +					nr_pages_per_node + 1, NULL,
> +					page_array);
> +			delta--;
> +		} else {
> +			nr_allocated = __alloc_pages_bulk(gfp,
> +					interleave_nodes(pol), NULL,
> +					nr_pages_per_node, NULL, page_array);
> +		}
> +
> +		page_array += nr_allocated;
> +		total_allocated += nr_allocated;
> +	}
> +
> +	return total_allocated;
> +}
> +
> +unsigned long alloc_pages_bulk_array_preferred_many(gfp_t gfp, int nid,
> +		struct mempolicy *pol, unsigned long nr_pages,
> +		struct page **page_array)
> +{
> +	gfp_t preferred_gfp;
> +	unsigned long nr_allocated = 0;
> +
> +	preferred_gfp = gfp | __GFP_NOWARN;
> +	preferred_gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
> +
> +	nr_allocated = __alloc_pages_bulk(preferred_gfp, nid, &pol->nodes,
> +					  nr_pages, NULL, page_array);
> +
> +	if (nr_allocated < nr_pages)
> +		nr_allocated += __alloc_pages_bulk(gfp, numa_node_id(), NULL,
> +				nr_pages - nr_allocated, NULL,
> +				page_array + nr_allocated);
> +	return nr_allocated;
> +}
> +
> +unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp,
> +		unsigned long nr_pages, struct page **page_array)
> +{
> +	struct mempolicy *pol = &default_policy;
> +
> +	if (!in_interrupt() && !(gfp & __GFP_THISNODE))
> +		pol = get_task_policy(current);
> +
> +	if (pol->mode == MPOL_INTERLEAVE)
> +		return alloc_pages_bulk_array_interleave(gfp, pol,
> +				nr_pages, page_array);
> +
> +	if (pol->mode == MPOL_PREFERRED_MANY)
> +		return alloc_pages_bulk_array_preferred_many(gfp,
> +				numa_node_id(), pol, nr_pages, page_array);
> +
> +	return __alloc_pages_bulk(gfp, policy_node(gfp, pol, numa_node_id()),
> +				  policy_nodemask(gfp, pol), nr_pages, NULL,
> +				  page_array);
> +}
> +
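
The MPOL_INTERLEAVE/MPOL_PREFERRED_MANY fall-through chain above looks
sane to me. For anyone following along, here is a minimal, hypothetical
caller sketch (fill_page_array() is my own name, not part of this
patch) showing how the new helper is meant to be driven for order-0
pages, much like vm_area_alloc_pages() does further below:

    /*
     * Hypothetical example only: bulk-allocate order-0 pages while
     * honouring the calling task's mempolicy. Stops when the bulk
     * allocator makes no progress so the caller can fall back.
     */
    static unsigned long fill_page_array(gfp_t gfp, unsigned long nr_pages,
    		struct page **pages)
    {
    	unsigned long nr_allocated = 0;

    	while (nr_allocated < nr_pages) {
    		unsigned long nr;

    		nr = alloc_pages_bulk_array_mempolicy(gfp,
    				nr_pages - nr_allocated,
    				pages + nr_allocated);
    		if (!nr)
    			break;	/* no progress, bail out */

    		nr_allocated += nr;
    		cond_resched();
    	}

    	return nr_allocated;
    }
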
>  struct folio *folio_alloc(gfp_t gfp, unsigned order)
>  {
>  	struct page *page = alloc_pages(gfp | __GFP_COMP, order);
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index b7ac4a8fe2b3..49adba793f3c 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2856,23 +2856,14 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
>  		 */
>  		nr_pages_request = min(100U, nr_pages - nr_allocated);
>
> -		if (nid == NUMA_NO_NODE) {
> -			for (i = 0; i < nr_pages_request; i++) {
> -				page = alloc_page(gfp);
> -				if (page)
> -					pages[nr_allocated + i] = page;
> -				else {
> -					nr = i;
> -					break;
> -				}
> -			}
> -			if (i >= nr_pages_request)
> -				nr = nr_pages_request;
> -		} else {
> +		if (nid == NUMA_NO_NODE)
> +			nr = alloc_pages_bulk_array_mempolicy(gfp,
> +						nr_pages_request,
> +						pages + nr_allocated);
> +		else
>  			nr = alloc_pages_bulk_array_node(gfp, nid,
>  						nr_pages_request,
>  						pages + nr_allocated);
> -		}
>  		nr_allocated += nr;
>  		cond_resched();
>
> --
> 2.25.1
>

Now this looks correct to me.

Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>

--
Vlad Rezki