From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, chenwandun@huawei.com,
edumazet@google.com, guohanjun@huawei.com, linux-mm@kvack.org,
mm-commits@vger.kernel.org, npiggin@gmail.com,
shakeelb@google.com, torvalds@linux-foundation.org,
urezki@gmail.com, wangkefeng.wang@huawei.com
Subject: [patch 07/11] mm/vmalloc: fix numa spreading for large hash tables
Date: Thu, 28 Oct 2021 14:36:24 -0700
Message-ID: <20211028213624.ioyXk3qpi%akpm@linux-foundation.org>
In-Reply-To: <20211028143506.5f5d5e2cd1f768a1da864844@linux-foundation.org>
From: Chen Wandun <chenwandun@huawei.com>
Subject: mm/vmalloc: fix numa spreading for large hash tables
Eric Dumazet reported strange NUMA spreading behaviour in [1] and found that
commit 121e6f3258fe ("mm/vmalloc: hugepage vmalloc mappings") introduced the
issue [2].
Digging into the difference before and after that commit, the page allocation
paths differ:

before:
    alloc_large_system_hash
        __vmalloc
            __vmalloc_node(..., NUMA_NO_NODE, ...)
                __vmalloc_node_range
                    __vmalloc_area_node
                        alloc_page /* because NUMA_NO_NODE, so choose alloc_page branch */
                            alloc_pages_current
                                alloc_page_interleave /* can be proved by print policy mode */

after:
    alloc_large_system_hash
        __vmalloc
            __vmalloc_node(..., NUMA_NO_NODE, ...)
                __vmalloc_node_range
                    __vmalloc_area_node
                        alloc_pages_node /* choose nid by numa_mem_id() */
                            __alloc_pages_node(nid, ....)

So after commit 121e6f3258fe ("mm/vmalloc: hugepage vmalloc mappings"), memory
is allocated on the current node instead of being interleaved across nodes.

[1]
https://lore.kernel.org/linux-mm/CANn89iL6AAyWhfxdHO+jaT075iOa3XcYn9k6JJc7JR2XYn6k_Q@mail.gmail.com/
[2]
https://lore.kernel.org/linux-mm/CANn89iLofTR=AK-QOZY87RdUZENCZUT4O6a0hvhu3_EwRMerOg@mail.gmail.com/
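
To make the behavioural difference concrete, here is a minimal, illustrative
sketch (not part of the patch) of the node-selection logic the fix introduces.
alloc_pages(), alloc_pages_node() and NUMA_NO_NODE are the real kernel symbols;
the wrapper name pick_vmalloc_page() is hypothetical and used only for this
example.

/*
 * Illustration only: with NUMA_NO_NODE the allocation should go through
 * alloc_pages(), which honours the task's mempolicy (e.g. interleave),
 * whereas alloc_pages_node() always targets the node it is given.
 */
static struct page *pick_vmalloc_page(gfp_t gfp, unsigned int order, int nid)
{
	if (nid == NUMA_NO_NODE)
		return alloc_pages(gfp, order);		/* mempolicy applies */

	return alloc_pages_node(nid, gfp, order);	/* pinned to nid */
}
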
Link: https://lkml.kernel.org/r/20211021080744.874701-2-chenwandun@huawei.com
Fixes: 121e6f3258fe ("mm/vmalloc: hugepage vmalloc mappings")
Signed-off-by: Chen Wandun <chenwandun@huawei.com>
Reported-by: Eric Dumazet <edumazet@google.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Hanjun Guo <guohanjun@huawei.com>
Cc: Uladzislau Rezki <urezki@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/vmalloc.c | 15 +++++++++------
1 file changed, 9 insertions(+), 6 deletions(-)
--- a/mm/vmalloc.c~mm-vmalloc-fix-numa-spreading-for-large-hash-tables
+++ a/mm/vmalloc.c
@@ -2816,6 +2816,8 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 		unsigned int order, unsigned int nr_pages, struct page **pages)
 {
 	unsigned int nr_allocated = 0;
+	struct page *page;
+	int i;
 	/*
 	 * For order-0 pages we make use of bulk allocator, if
@@ -2823,7 +2825,7 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 	 * to fails, fallback to a single page allocator that is
 	 * more permissive.
 	 */
-	if (!order) {
+	if (!order && nid != NUMA_NO_NODE) {
 		while (nr_allocated < nr_pages) {
 			unsigned int nr, nr_pages_request;
@@ -2848,7 +2850,7 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 			if (nr != nr_pages_request)
 				break;
 		}
-	} else
+	} else if (order)
 		/*
 		 * Compound pages required for remap_vmalloc_page if
 		 * high-order pages.
@@ -2856,11 +2858,12 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 		gfp |= __GFP_COMP;
 	/* High-order pages or fallback path if "bulk" fails. */
-	while (nr_allocated < nr_pages) {
-		struct page *page;
-		int i;
-		page = alloc_pages_node(nid, gfp, order);
+	while (nr_allocated < nr_pages) {
+		if (nid == NUMA_NO_NODE)
+			page = alloc_pages(gfp, order);
+		else
+			page = alloc_pages_node(nid, gfp, order);
 		if (unlikely(!page))
 			break;
_
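
For readers who want the net effect of the hunks above in one place, the
following is an untested, simplified reconstruction of vm_area_alloc_pages()
after this patch. The bulk-request sizing (the nr_pages_request cap),
cond_resched() calls and the original comments are simplified or elided;
mm/vmalloc.c in the tree the patch applies to is the authoritative version.

static unsigned int
vm_area_alloc_pages(gfp_t gfp, int nid, unsigned int order,
		    unsigned int nr_pages, struct page **pages)
{
	unsigned int nr_allocated = 0;
	struct page *page;
	int i;

	/*
	 * The bulk allocator is used only for order-0 pages and only when an
	 * explicit node was requested; NUMA_NO_NODE now falls through to the
	 * mempolicy-aware path below.
	 */
	if (!order && nid != NUMA_NO_NODE) {
		while (nr_allocated < nr_pages) {
			unsigned int nr;

			nr = alloc_pages_bulk_array_node(gfp, nid,
					nr_pages - nr_allocated,
					pages + nr_allocated);
			nr_allocated += nr;
			if (!nr)
				break;	/* fall back to single pages */
		}
	} else if (order) {
		/* Compound pages are required for remap_vmalloc_page(). */
		gfp |= __GFP_COMP;
	}

	/* High-order pages, NUMA_NO_NODE, or fallback if "bulk" fails. */
	while (nr_allocated < nr_pages) {
		if (nid == NUMA_NO_NODE)
			page = alloc_pages(gfp, order);	/* honours mempolicy */
		else
			page = alloc_pages_node(nid, gfp, order);
		if (unlikely(!page))
			break;

		/* Track the (possibly compound) page as 1 << order small pages. */
		for (i = 0; i < (1U << order); i++)
			pages[nr_allocated + i] = page + i;

		nr_allocated += 1U << order;
	}

	return nr_allocated;
}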