Date: Mon, 22 Jan 2024 15:10:07 +0800
Subject: Re: [PATCH v4 6/7] hugetlb: parallelize 2M hugetlb allocation and initialization
To: Gang Li, David Hildenbrand, David Rientjes, Mike Kravetz, Andrew Morton, Tim Chen
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, ligang.bdlg@bytedance.com
References: <20240118123911.88833-1-gang.li@linux.dev> <20240118123911.88833-7-gang.li@linux.dev>
From: Muchun Song <muchun.song@linux.dev>
In-Reply-To: <20240118123911.88833-7-gang.li@linux.dev>

On 2024/1/18 20:39, Gang Li wrote:
> By distributing both the allocation and the initialization tasks across
> multiple threads, the initialization of 2M hugetlb will be faster,
> thereby improving the boot speed.
>
> Here are some test results:
>
>        test           no patch(ms)   patched(ms)   saved
>   ------------------- -------------- ------------- --------
>   256c2t(4 node) 2M       3336           1051      68.52%
>   128c1t(2 node) 2M       1943            716      63.15%
>
> Signed-off-by: Gang Li
> Tested-by: David Rientjes
> ---
>  mm/hugetlb.c | 70 ++++++++++++++++++++++++++++++++++++++--------------
>  1 file changed, 52 insertions(+), 18 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index effe5539e545..9b348ba418f5 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -35,6 +35,7 @@
>  #include
>  #include
>  #include
> +#include
>
>  #include
>  #include
> @@ -3510,43 +3511,76 @@ static void __init hugetlb_hstate_alloc_pages_errcheck(unsigned long allocated,
>  	}
>  }
>
> -static unsigned long __init hugetlb_gigantic_pages_alloc_boot(struct hstate *h)
> +static void __init hugetlb_alloc_node(unsigned long start, unsigned long end, void *arg)
>  {
> -	unsigned long i;
> +	struct hstate *h = (struct hstate *)arg;
> +	int i, num = end - start;
> +	nodemask_t node_alloc_noretry;
> +	unsigned long flags;
> +	int next_node = 0;

This should be first_online_node, which may not be zero.

>
> -	for (i = 0; i < h->max_huge_pages; ++i) {
> -		if (!alloc_bootmem_huge_page(h, NUMA_NO_NODE))
> +	/* Bit mask controlling how hard we retry per-node allocations.*/
> +	nodes_clear(node_alloc_noretry);
> +
> +	for (i = 0; i < num; ++i) {
> +		struct folio *folio = alloc_pool_huge_folio(h, &node_states[N_MEMORY],
> +						&node_alloc_noretry, &next_node);
> +		if (!folio)
>  			break;
> +		spin_lock_irqsave(&hugetlb_lock, flags);

I suspect there will be more contention on this lock when parallelizing.
I would like to know why you chose to drop the prep_and_add_allocated_folios()
call used in the original hugetlb_pages_alloc_boot().
> +		__prep_account_new_huge_page(h, folio_nid(folio));
> +		enqueue_hugetlb_folio(h, folio);
> +		spin_unlock_irqrestore(&hugetlb_lock, flags);
>  		cond_resched();
>  	}
> +}
>
> -	return i;
> +static void __init hugetlb_vmemmap_optimize_node(unsigned long start, unsigned long end, void *arg)
> +{
> +	struct hstate *h = (struct hstate *)arg;
> +	int nid = start;
> +
> +	hugetlb_vmemmap_optimize_folios(h, &h->hugepage_freelists[nid]);
>  }
>
> -static unsigned long __init hugetlb_pages_alloc_boot(struct hstate *h)
> +static unsigned long __init hugetlb_gigantic_pages_alloc_boot(struct hstate *h)
>  {
>  	unsigned long i;
> -	struct folio *folio;
> -	LIST_HEAD(folio_list);
> -	nodemask_t node_alloc_noretry;
> -
> -	/* Bit mask controlling how hard we retry per-node allocations.*/
> -	nodes_clear(node_alloc_noretry);
>
>  	for (i = 0; i < h->max_huge_pages; ++i) {
> -		folio = alloc_pool_huge_folio(h, &node_states[N_MEMORY],
> -					&node_alloc_noretry);
> -		if (!folio)
> +		if (!alloc_bootmem_huge_page(h, NUMA_NO_NODE))
>  			break;
> -		list_add(&folio->lru, &folio_list);
>  		cond_resched();
>  	}
>
> -	prep_and_add_allocated_folios(h, &folio_list);
> -
>  	return i;
>  }
>
> +static unsigned long __init hugetlb_pages_alloc_boot(struct hstate *h)
> +{
> +	struct padata_mt_job job = {
> +		.fn_arg		= h,
> +		.align		= 1,
> +		.numa_aware	= true
> +	};
> +
> +	job.thread_fn	= hugetlb_alloc_node;
> +	job.start	= 0;
> +	job.size	= h->max_huge_pages;
> +	job.min_chunk	= h->max_huge_pages / num_node_state(N_MEMORY) / 2;
> +	job.max_threads	= num_node_state(N_MEMORY) * 2;

I am curious about the magic number 2 used in the assignments of ->min_chunk
and ->max_threads; does it come from your experiments? I think it deserves a
comment here.

I am also sceptical about this optimization for a small number of hugepage
allocations. Given 4 hugepages to be allocated on a UMA system, job.min_chunk
will be 2 and job.max_threads will be 2.
Then 2 workers will be scheduled, but each worker will only allocate 2 pages.
How much does the scheduling cost? What if we allocate all 4 pages in a single
worker? Do you have any numbers comparing parallelism and non-parallelism in
the small-allocation case? If we cannot gain anything in this case, I think we
should assign a reasonable value to ->min_chunk based on experiments.

Thanks.

> +	padata_do_multithreaded(&job);
> +
> +	job.thread_fn	= hugetlb_vmemmap_optimize_node;
> +	job.start	= 0;
> +	job.size	= num_node_state(N_MEMORY);
> +	job.min_chunk	= 1;
> +	job.max_threads	= num_node_state(N_MEMORY);
> +	padata_do_multithreaded(&job);
> +
> +	return h->nr_huge_pages;
> +}
> +
>  /*
>   * NOTE: this routine is called in different contexts for gigantic and
>   * non-gigantic pages.