From: Daniel Jordan <daniel.m.jordan@oracle.com>
To: Gang Li <gang.li@linux.dev>
Cc: Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@redhat.com>,
David Rientjes <rientjes@google.com>,
Muchun Song <muchun.song@linux.dev>,
Tim Chen <tim.c.chen@linux.intel.com>,
Steffen Klassert <steffen.klassert@secunet.com>,
Jane Chu <jane.chu@oracle.com>,
"Paul E . McKenney" <paulmck@kernel.org>,
Randy Dunlap <rdunlap@infradead.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
ligang.bdlg@bytedance.com
Subject: Re: [PATCH v6 7/8] hugetlb: parallelize 2M hugetlb allocation and initialization
Date: Fri, 8 Mar 2024 12:11:41 -0500
Message-ID: <icrdkacpdksofftv5jwrwcgojsa7qnby4iuvxsdktuxazivhks@ajcy2shag4nz>
In-Reply-To: <20240222140422.393911-8-gang.li@linux.dev>
Hi,
On Thu, Feb 22, 2024 at 10:04:20PM +0800, Gang Li wrote:
> By distributing both the allocation and the initialization tasks across
> multiple threads, the initialization of 2M hugetlb will be faster,
> thereby improving the boot speed.
>
> Here are some test results:
> test case           no patch(ms)   patched(ms)   saved
> ------------------- -------------- ------------- --------
> 256c2T(4 node) 2M   3336           1051          68.52%
> 128c1T(2 node) 2M   1943           716           63.15%
Great improvement, and glad to see the multithreading is useful here.
> static unsigned long __init hugetlb_pages_alloc_boot(struct hstate *h)
> {
> - unsigned long i;
> - struct folio *folio;
> - LIST_HEAD(folio_list);
> - nodemask_t node_alloc_noretry;
> -
> - /* Bit mask controlling how hard we retry per-node allocations.*/
> - nodes_clear(node_alloc_noretry);
> + struct padata_mt_job job = {
> + .fn_arg = h,
> + .align = 1,
> + .numa_aware = true
> + };
>
> - for (i = 0; i < h->max_huge_pages; ++i) {
> - folio = alloc_pool_huge_folio(h, &node_states[N_MEMORY],
> - &node_alloc_noretry);
> - if (!folio)
> - break;
> - list_add(&folio->lru, &folio_list);
> - cond_resched();
> - }
> + job.thread_fn = hugetlb_pages_alloc_boot_node;
> + job.start = 0;
> + job.size = h->max_huge_pages;
>
> - prep_and_add_allocated_folios(h, &folio_list);
> + /*
> + * job.max_threads is twice the num_node_state(N_MEMORY),
> + *
> + * Tests below indicate that a multiplier of 2 significantly improves
> + * performance, and although larger values also provide improvements,
> + * the gains are marginal.
> + *
> + * Therefore, choosing 2 as the multiplier strikes a good balance between
> + * enhancing parallel processing capabilities and maintaining efficient
> + * resource management.
> + *
> + * +------------+-------+-------+-------+-------+-------+
> + * | multiplier | 1 | 2 | 3 | 4 | 5 |
> + * +------------+-------+-------+-------+-------+-------+
> + * | 256G 2node | 358ms | 215ms | 157ms | 134ms | 126ms |
> + * | 2T 4node | 979ms | 679ms | 543ms | 489ms | 481ms |
> + * | 50G 2node | 71ms | 44ms | 37ms | 30ms | 31ms |
> + * +------------+-------+-------+-------+-------+-------+
> + */
> + job.max_threads = num_node_state(N_MEMORY) * 2;
> + job.min_chunk = h->max_huge_pages / num_node_state(N_MEMORY) / 2;
For a single huge page, we get min_chunk of 0. padata doesn't
explicitly handle that, but 'align' happens to save us from div by 0
later on. It's an odd case, something to fix if there were another
version.
Not sure what efficient resource management means here. Avoiding lock
contention? The system is waiting on this initialization to start pid
1. On big systems, most CPUs will be idle, so why not use available
resources to optimize it more? max_threads could scale with CPU count
rather than a magic multiplier.
With that said, the major gain is already there, so either way,
Acked-by: Daniel Jordan <daniel.m.jordan@oracle.com> # padata
Thread overview: 18+ messages
2024-02-22 14:04 [PATCH v6 0/8] hugetlb: parallelize hugetlb page init on boot Gang Li
2024-02-22 14:04 ` [PATCH v6 1/8] hugetlb: code clean for hugetlb_hstate_alloc_pages Gang Li
2024-02-22 14:04 ` [PATCH v6 2/8] hugetlb: split hugetlb_hstate_alloc_pages Gang Li
2024-02-22 14:04 ` [PATCH v6 3/8] hugetlb: pass *next_nid_to_alloc directly to for_each_node_mask_to_alloc Gang Li
2024-02-22 14:04 ` [PATCH v6 4/8] padata: dispatch works on different nodes Gang Li
2024-02-27 21:24 ` Daniel Jordan
2024-03-05 2:49 ` Gang Li
2024-03-08 15:42 ` Daniel Jordan
2024-02-22 14:04 ` [PATCH v6 5/8] padata: downgrade padata_do_multithreaded to serial execution for non-SMP Gang Li
2024-02-27 21:26 ` Daniel Jordan
2024-03-05 3:24 ` Gang Li
2024-02-22 14:04 ` [PATCH v6 6/8] hugetlb: have CONFIG_HUGETLBFS select CONFIG_PADATA Gang Li
2024-02-27 21:26 ` Daniel Jordan
2024-02-22 14:04 ` [PATCH v6 7/8] hugetlb: parallelize 2M hugetlb allocation and initialization Gang Li
2024-03-08 17:11 ` Daniel Jordan [this message]
2024-02-22 14:04 ` [PATCH v6 8/8] hugetlb: parallelize 1G hugetlb initialization Gang Li
2024-03-08 17:35 ` Daniel Jordan
2024-03-12 2:26 ` Gang Li