Date: Fri, 15 Oct 2021 17:11:25 +1000
From: Nicholas Piggin
Subject: Re: [PATCH] mm/vmalloc: fix numa spreading for large hash tables
To: Chen Wandun , Shakeel Butt
Cc: Andrew Morton , Eric Dumazet , guohanjun@huawei.com, LKML , Linux MM , Kefeng Wang
References: <20210928121040.2547407-1-chenwandun@huawei.com> <8fc5e1ae-a356-6225-2e50-cf0e5ee26208@huawei.com> <1634261360.fed2opbgxw.astroid@bobo.none>
Message-Id: <1634281763.ecsq6l88ia.astroid@bobo.none>

Excerpts from Chen Wandun's message of October 15, 2021 12:31 pm:
> 
> 
> On 2021/10/15 9:34, Nicholas Piggin wrote:
>> Excerpts from Chen Wandun's message of October 14, 2021 6:59 pm:
>>>
>>>
>>> On 2021/10/14 5:46, Shakeel Butt wrote:
>>>> On Tue, Sep 28, 2021 at 5:03 AM Chen Wandun wrote:
>>>>>
>>>>> Eric Dumazet reported strange NUMA spreading info in [1], and found
>>>>> that commit 121e6f3258fe ("mm/vmalloc: hugepage vmalloc mappings")
>>>>> introduced this issue [2].
>>>>>
>>>>> Digging into the difference before and after this patch, the page
>>>>> allocation path differs:
>>>>>
>>>>> before:
>>>>> alloc_large_system_hash
>>>>>     __vmalloc
>>>>>       __vmalloc_node(..., NUMA_NO_NODE, ...)
>>>>>         __vmalloc_node_range
>>>>>           __vmalloc_area_node
>>>>>             alloc_page              /* because NUMA_NO_NODE, so choose alloc_page branch */
>>>>>               alloc_pages_current
>>>>>                 alloc_page_interleave /* can be proved by printing the policy mode */
>>>>>
>>>>> after:
>>>>> alloc_large_system_hash
>>>>>     __vmalloc
>>>>>       __vmalloc_node(..., NUMA_NO_NODE, ...)
>>>>>         __vmalloc_node_range
>>>>>           __vmalloc_area_node
>>>>>             alloc_pages_node        /* choose nid by numa_mem_id() */
>>>>>               __alloc_pages_node(nid, ....)
>>>>>
>>>>> So after commit 121e6f3258fe ("mm/vmalloc: hugepage vmalloc mappings"),
>>>>> it will allocate memory on the current node instead of interleaving
>>>>> allocations across nodes.
>>>>>
>>>>> [1]
>>>>> https://lore.kernel.org/linux-mm/CANn89iL6AAyWhfxdHO+jaT075iOa3XcYn9k6JJc7JR2XYn6k_Q@mail.gmail.com/
>>>>>
>>>>> [2]
>>>>> https://lore.kernel.org/linux-mm/CANn89iLofTR=AK-QOZY87RdUZENCZUT4O6a0hvhu3_EwRMerOg@mail.gmail.com/
>>>>>
>>>>> Fixes: 121e6f3258fe ("mm/vmalloc: hugepage vmalloc mappings")
>>>>> Reported-by: Eric Dumazet
>>>>> Signed-off-by: Chen Wandun
>>>>> ---
>>>>>  mm/vmalloc.c | 33 ++++++++++++++++++++++++++-------
>>>>>  1 file changed, 26 insertions(+), 7 deletions(-)
>>>>>
>>>>> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>>>>> index f884706c5280..48e717626e94 100644
>>>>> --- a/mm/vmalloc.c
>>>>> +++ b/mm/vmalloc.c
>>>>> @@ -2823,6 +2823,8 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
>>>>>                 unsigned int order, unsigned int nr_pages, struct page **pages)
>>>>>  {
>>>>>         unsigned int nr_allocated = 0;
>>>>> +       struct page *page;
>>>>> +       int i;
>>>>>
>>>>>         /*
>>>>>          * For order-0 pages we make use of bulk allocator, if
>>>>> @@ -2833,6 +2835,7 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
>>>>>         if (!order) {
>>>>
>>>> Can you please replace the above with if (!order && nid != NUMA_NO_NODE)?
>>>>
>>>>>                 while (nr_allocated < nr_pages) {
>>>>>                         unsigned int nr, nr_pages_request;
>>>>> +                       page = NULL;
>>>>>
>>>>>                         /*
>>>>>                          * A maximum allowed request is hard-coded and is 100
>>>>> @@ -2842,9 +2845,23 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
>>>>>                          */
>>>>>                         nr_pages_request = min(100U, nr_pages - nr_allocated);
>>>>>
>>>>
>>>> Undo the following change in this if block.
>>>
>>> Yes, it seems simpler as you suggest, but it still has a performance
>>> regression. I plan to change the following to consider both mempolicy
>>> and alloc_pages_bulk.
>>
>> Thanks for finding and debugging this. These APIs are a maze of twisty
>> little passages, all alike, so I could be as confused as I was when I
>> wrote that patch, but doesn't a minimal fix look something like this?
>
> Yes, I sent a patch; it looks like what you show, and it also
> contains some performance optimization:
>
> [PATCH] mm/vmalloc: introduce alloc_pages_bulk_array_mempolicy to
> accelerate memory allocation

Okay. It would be better to do it as two patches: first the minimal fix,
so it can be backported easily and have the Fixes: tag pointed at my
commit, then the performance optimization.

Thanks,
Nick