From: Gang Li <gang.li@linux.dev>
Date: Tue, 5 Mar 2024 10:49:47 +0800
Subject: Re: [PATCH v6 4/8] padata: dispatch works on different nodes
To: Daniel Jordan
Cc: Andrew Morton, David Hildenbrand, David Rientjes, Muchun Song, Tim Chen,
 Steffen Klassert, Jane Chu, "Paul E. McKenney", Randy Dunlap,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, ligang.bdlg@bytedance.com
Message-ID: <9b044d9b-d3b1-4fb3-8b05-2a54c2b45716@linux.dev>
In-Reply-To: <7g53p42favkoibzg4w3ly3ypdjdy6oymhj74ekwk62bwx4rlaj@seoavjygfadq>
References: <20240222140422.393911-1-gang.li@linux.dev>
 <20240222140422.393911-5-gang.li@linux.dev>
 <7g53p42favkoibzg4w3ly3ypdjdy6oymhj74ekwk62bwx4rlaj@seoavjygfadq>

On 2024/2/28 05:24, Daniel Jordan wrote:
> Hi,
>
> On Thu, Feb 22, 2024 at 10:04:17PM +0800, Gang Li wrote:
>> When a group of tasks that access different nodes are scheduled on the
>> same node, they may encounter bandwidth bottlenecks and access latency.
>>
>> Thus, numa_aware flag is introduced here, allowing tasks to be
>> distributed across different nodes to fully utilize the advantage of
>> multi-node systems.
>>
>> Signed-off-by: Gang Li
>> Tested-by: David Rientjes
>> Reviewed-by: Muchun Song
>> Reviewed-by: Tim Chen
>> ---
>>  include/linux/padata.h |  2 ++
>>  kernel/padata.c        | 14 ++++++++++++--
>>  mm/mm_init.c           |  1 +
>>  3 files changed, 15 insertions(+), 2 deletions(-)
>>
>> diff --git a/include/linux/padata.h b/include/linux/padata.h
>> index 495b16b6b4d72..8f418711351bc 100644
>> --- a/include/linux/padata.h
>> +++ b/include/linux/padata.h
>> @@ -137,6 +137,7 @@ struct padata_shell {
>>   *              appropriate for one worker thread to do at once.
>>   * @max_threads: Max threads to use for the job, actual number may be less
>>   *               depending on task size and minimum chunk size.
>> + * @numa_aware: Distribute jobs to different nodes with CPU in a round robin fashion.
>
> numa_interleave seems more descriptive.
>
>>   */
>>  struct padata_mt_job {
>>  	void (*thread_fn)(unsigned long start, unsigned long end, void *arg);
>> @@ -146,6 +147,7 @@ struct padata_mt_job {
>>  	unsigned long		align;
>>  	unsigned long		min_chunk;
>>  	int			max_threads;
>> +	bool			numa_aware;
>>  };
>>
>>  /**
>> diff --git a/kernel/padata.c b/kernel/padata.c
>> index 179fb1518070c..e3f639ff16707 100644
>> --- a/kernel/padata.c
>> +++ b/kernel/padata.c
>> @@ -485,7 +485,8 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
>>  	struct padata_work my_work, *pw;
>>  	struct padata_mt_job_state ps;
>>  	LIST_HEAD(works);
>> -	int nworks;
>> +	int nworks, nid;
>> +	static atomic_t last_used_nid __initdata;
>
> nit, move last_used_nid up so it's below load_balance_factor to keep
> that nice tree shape
>
>>
>>  	if (job->size == 0)
>>  		return;
>> @@ -517,7 +518,16 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
>>  	ps.chunk_size = roundup(ps.chunk_size, job->align);
>>
>>  	list_for_each_entry(pw, &works, pw_list)
>> -		queue_work(system_unbound_wq, &pw->pw_work);
>> +		if (job->numa_aware) {
>> +			int old_node = atomic_read(&last_used_nid);
>> +
>> +			do {
>> +				nid = next_node_in(old_node, node_states[N_CPU]);
>> +			} while (!atomic_try_cmpxchg(&last_used_nid, &old_node, nid));
>
> There aren't concurrent NUMA-aware _do_multithreaded calls now, so an
> atomic per work seems like an unnecessary expense for guarding against

Hi Daniel,

Yes, this is not necessary. But I think this operation is infrequent,
so the burden shouldn't be too great?

> possible uneven thread distribution in the future.  Non-atomic access
> instead?
>
>> +			queue_work_node(nid, system_unbound_wq, &pw->pw_work);
>> +		} else {
>> +			queue_work(system_unbound_wq, &pw->pw_work);
>> +		}
>>
>>  	/* Use the current thread, which saves starting a workqueue worker. */
>>  	padata_work_init(&my_work, padata_mt_helper, &ps, PADATA_WORK_ONSTACK);
>> diff --git a/mm/mm_init.c b/mm/mm_init.c
>> index 2c19f5515e36c..549e76af8f82a 100644
>> --- a/mm/mm_init.c
>> +++ b/mm/mm_init.c
>> @@ -2231,6 +2231,7 @@ static int __init deferred_init_memmap(void *data)
>>  		.align = PAGES_PER_SECTION,
>>  		.min_chunk = PAGES_PER_SECTION,
>>  		.max_threads = max_threads,
>> +		.numa_aware = false,
>>  	};
>>
>>  	padata_do_multithreaded(&job);
>> --
>> 2.20.1
>>
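
For reference, a minimal sketch of the non-atomic alternative Daniel suggests
could look like the following, placed inside padata_do_multithreaded(). It
assumes, as noted above, that there are no concurrent NUMA-aware callers
today; this is illustrative only, not the code posted in this series:

	/*
	 * Illustrative sketch, not the posted patch: round-robin node
	 * selection without the atomic, relying on there being no
	 * concurrent NUMA-aware padata_do_multithreaded() callers.
	 */
	static int last_used_nid __initdata;

	list_for_each_entry(pw, &works, pw_list)
		if (job->numa_aware) {
			last_used_nid = next_node_in(last_used_nid,
						     node_states[N_CPU]);
			queue_work_node(last_used_nid, system_unbound_wq,
					&pw->pw_work);
		} else {
			queue_work(system_unbound_wq, &pw->pw_work);
		}

Whether the simpler form is worth it mostly comes down to how likely future
callers are to run concurrently; the atomic version keeps the distribution
even in that case at the cost of a cmpxchg per queued work.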