From: Tim Chen <tim.c.chen@linux.intel.com>
To: Gang Li <gang.li@linux.dev>, David Hildenbrand <david@redhat.com>,
David Rientjes <rientjes@google.com>,
Mike Kravetz <mike.kravetz@oracle.com>,
Muchun Song <muchun.song@linux.dev>,
Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
ligang.bdlg@bytedance.com
Subject: Re: [PATCH v4 4/7] hugetlb: pass *next_nid_to_alloc directly to for_each_node_mask_to_alloc
Date: Thu, 18 Jan 2024 15:01:01 -0800
Message-ID: <fbb6448321a94c32ac60bcf3a6858c045863c44b.camel@linux.intel.com>
In-Reply-To: <20240118123911.88833-5-gang.li@linux.dev>

On Thu, 2024-01-18 at 20:39 +0800, Gang Li wrote:
> With parallelization of hugetlb allocation across different threads, each
> thread works on a different node to allocate pages from, instead of all
> allocating from a common node h->next_nid_to_alloc. To address this, it's
> necessary to assign a separate next_nid_to_alloc for each thread.
>
> Consequently, hstate_next_node_to_alloc() and for_each_node_mask_to_alloc()
> have been modified to accept a *next_nid_to_alloc parameter directly,
> ensuring thread-specific allocation and avoiding concurrent access issues.
>
> Signed-off-by: Gang Li <gang.li@linux.dev>
> Tested-by: David Rientjes <rientjes@google.com>
Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com>
> ---
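
For readers skimming the thread, the shape of the change is roughly the
following sketch (based on the commit message above; the helpers
get_valid_node_allowed(), next_node_allowed() and nodes_weight() are the
pre-existing hugetlb/nodemask helpers and are shown here only to illustrate
the new parameter, not as part of this patch):

static int hstate_next_node_to_alloc(int *next_node,
				     nodemask_t *nodes_allowed)
{
	int nid;

	VM_BUG_ON(!nodes_allowed);

	/*
	 * Advance the caller-supplied cursor rather than the shared
	 * h->next_nid_to_alloc, so each thread round-robins over its
	 * own private node cursor.
	 */
	nid = get_valid_node_allowed(*next_node, nodes_allowed);
	*next_node = next_node_allowed(nid, nodes_allowed);

	return nid;
}

/* The iterator likewise takes the cursor instead of the hstate. */
#define for_each_node_mask_to_alloc(next_node, nr_nodes, node, mask)	\
	for (nr_nodes = nodes_weight(*mask);				\
		nr_nodes > 0 &&						\
		((node = hstate_next_node_to_alloc(next_node, mask)) || 1); \
		nr_nodes--)

Callers that previously passed the hstate h now pass &h->next_nid_to_alloc,
or, once allocation is parallelized, a per-thread cursor, as the first
argument, so no two threads race on the shared field.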
Thread overview: 30+ messages
2024-01-18 12:39 [RESEND PATCH v4 0/7] hugetlb: parallelize hugetlb page init on boot Gang Li
2024-01-18 12:39 ` [PATCH v4 1/7] hugetlb: code clean for hugetlb_hstate_alloc_pages Gang Li
2024-01-18 12:39 ` [PATCH v4 2/7] hugetlb: split hugetlb_hstate_alloc_pages Gang Li
2024-01-22 3:43 ` Muchun Song
2024-01-18 12:39 ` [PATCH v4 3/7] padata: dispatch works on different nodes Gang Li
2024-01-18 23:04 ` Tim Chen
2024-01-19 15:05 ` Gang Li
2024-01-19 2:59 ` Muchun Song
2024-01-19 15:04 ` Gang Li
2024-01-18 12:39 ` [PATCH v4 4/7] hugetlb: pass *next_nid_to_alloc directly to for_each_node_mask_to_alloc Gang Li
2024-01-18 23:01 ` Tim Chen [this message]
2024-01-19 2:54 ` Muchun Song
2024-01-22 6:16 ` Muchun Song
2024-01-22 9:14 ` Gang Li
2024-01-22 9:50 ` Muchun Song
2024-01-18 12:39 ` [PATCH v4 5/7] hugetlb: have CONFIG_HUGETLBFS select CONFIG_PADATA Gang Li
2024-01-18 12:39 ` [PATCH v4 6/7] hugetlb: parallelize 2M hugetlb allocation and initialization Gang Li
2024-01-22 7:10 ` Muchun Song
2024-01-22 10:12 ` Gang Li
2024-01-22 11:30 ` Muchun Song
2024-01-23 2:12 ` Gang Li
2024-01-23 3:32 ` Muchun Song
2024-01-18 12:39 ` [PATCH v4 7/7] hugetlb: parallelize 1G hugetlb initialization Gang Li
2024-01-18 14:22 ` Kefeng Wang
2024-01-19 14:45 ` Gang Li
2024-01-24 9:23 ` Muchun Song
2024-01-24 10:52 ` Gang Li
2024-01-25 2:48 ` Muchun Song
2024-01-25 3:47 ` Gang Li
2024-01-25 3:56 ` Gang Li