From: "Huang, Ying" <ying.huang@intel.com>
To: Gregory Price
Cc: Gregory Price, linux-mm@kvack.org, Aneesh Kumar K.V, Wei Xu,
 Alistair Popple, Dan Williams, Dave Hansen, Johannes Weiner,
 Jonathan Cameron, Michal Hocko, Tim Chen, Yang Shi
Subject: Re: [RFC PATCH v2 0/3] mm: mempolicy: Multi-tier weighted interleaving
Date: Mon, 23 Oct 2023 10:09:56 +0800
Message-ID: <87ttqidr7v.fsf@yhuang6-desk2.ccr.corp.intel.com>
In-Reply-To: (Gregory Price's message of "Thu, 19 Oct 2023 09:26:15 -0400")
References: <20231009204259.875232-1-gregory.price@memverge.com>
 <87o7gzm22n.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87pm1cwcz5.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87edhrunvp.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87fs25g6w3.fsf@yhuang6-desk2.ccr.corp.intel.com>

Gregory Price writes:

> On Fri, Oct 20, 2023 at 02:11:40PM +0800, Huang, Ying wrote:
>> Gregory Price writes:
>>
>> > [...snip...]
>> >
>> > Example 2: A dual-socket system with 1 CXL device per socket
>> > ===
>> > CPU Nodes: node0, node1
>> > CXL Nodes: node2, node3 (on sockets 0 and 1 respectively)
>> >
>> > [...snip...]
>> >
>> > This is similar to example #1, but with one difference: A task
>> > running on node 0 should not treat nodes 0 and 1 the same, nor
>> > nodes 2 and 3.
>
> [...snip...]
>> > This leaves us with weights of:
>> >
>> > node0 - 57%
>> > node1 - 26%
>> > node2 - 12%
>> > node3 - 5%
>>
>> Does the workload run on the CPUs of node 0 only?  This appears
>> unreasonable.
>
> Depends.  If a user explicitly launches with `numactl --cpunodebind=0`
> then yes, you can force a task (and all its children) to run on node0.

IIUC, in your example, the `numactl` command line will be

  numactl --cpunodebind=0 --weighted-interleave=0,1,2,3

That is, the CPU is restricted to node 0, while memory is distributed
to all nodes.  This doesn't sound reasonable to me.

> If a workload is multi-threaded enough to run on both sockets, then
> you are right that you'd want to basically limit cross-socket traffic
> by binding individual threads to nodes that don't cross sockets - if
> at all feasible (this may not be feasible).
>
> But at that point, we're getting into the area of NUMA-aware
> software.  That's a bit beyond the scope of this - which is to enable
> a coarse-grained interleaving solution that can easily be accessed
> with something like `numactl --interleave` or
> `numactl --weighted-interleave`.

>> If the memory bandwidth requirement of the workload is so large that
>> CXL is used to expand bandwidth, why not run the workload on the
>> CPUs of node 1 and use the full memory bandwidth of node 1?
>
> Settings are NOT one size fits all.  You can certainly come up with
> another scenario in which these weights are not optimal.
>
> If we're running enough threads that we need multiple sockets to run
> them concurrently, then the memory distribution weights become much
> more complex.  Without more precise control over task placement, and
> without preventing task migration, you can't really get an "optimal"
> placement.
>
> What I'm really saying is: "Task placement is a more powerful function
> for predicting performance than memory placement".  However, user
> software would need to implement a pseudo-scheduler and explicit data
> placement to be the most optimized.  Beyond this, there is only so
> much we can do from a `numactl` perspective.
>
> tl;dr: We can't get a perfect system here, because getting a best
> case for all possible scenarios is probably an undecidable problem.
> You will always be able to generate an example wherein the system is
> not optimal.

>> If the workload runs on the CPUs of node 0 and node 1, then the
>> cross-socket traffic should be minimized if possible.  That is,
>> threads/processes on node 0 should interleave memory of node 0 and
>> node 2, while those on node 1 should interleave memory of node 1 and
>> node 3.
>
> This can be done with set_mempolicy() with MPOL_INTERLEAVE, setting
> the nodemask to what you describe.  Those tasks also need to prevent
> themselves from being migrated.  But this can absolutely be done.
>
> In this scenario, the weights need to be re-calculated based on the
> bandwidth of the nodes in the mempolicy nodemask, which is what I
> described in the last email.

IMHO, we should keep things as simple as possible, and only add
complexity when necessary.

--
Best Regards,
Huang, Ying
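
[For concreteness, the set_mempolicy() plus CPU-binding combination
discussed above can be sketched in userspace as follows.  This is an
illustrative sketch only, not part of the patch set: it assumes the
example's topology (CPU 0 lives on node 0, and node 2 is the CXL node
attached to socket 0) and a Linux system with libnuma's <numaif.h>
installed.  Note that plain MPOL_INTERLEAVE places pages 1:1
round-robin across the nodemask, so it cannot express the
bandwidth-proportional weights from the example; that gap is what the
proposed weighted interleaving targets.]

/*
 * Sketch: pin the task to socket 0 and interleave its memory across
 * node 0 (local DRAM) and node 2 (local CXL), per the per-socket
 * scheme above.  "CPU 0 is on node 0" is an assumption made for the
 * example topology.
 */
#define _GNU_SOURCE
#include <sched.h>              /* cpu_set_t, sched_setaffinity() */
#include <numaif.h>             /* set_mempolicy(), MPOL_INTERLEAVE */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        cpu_set_t cpus;
        unsigned long nodemask = (1UL << 0) | (1UL << 2);
        size_t sz = 64UL << 20, i;
        char *buf;

        /* Keep the task on CPU 0 so the scheduler cannot migrate it
         * to the other socket. */
        CPU_ZERO(&cpus);
        CPU_SET(0, &cpus);
        if (sched_setaffinity(0, sizeof(cpus), &cpus))
                perror("sched_setaffinity");

        /* Interleave all future anonymous allocations across nodes 0
         * and 2.  Pages are distributed 1:1, not by weight. */
        if (set_mempolicy(MPOL_INTERLEAVE, &nodemask,
                          sizeof(nodemask) * 8))
                perror("set_mempolicy");

        /* Pages are placed round-robin on first touch. */
        buf = malloc(sz);
        if (!buf)
                return 1;
        for (i = 0; i < sz; i += 4096)
                buf[i] = 0;

        free(buf);
        return 0;
}

Compile with something like `gcc demo.c -lnuma` (the file name is
arbitrary) and inspect placement with `numastat -p <pid>`: roughly
half the resident pages land on each of nodes 0 and 2, whereas the
57/12 DRAM-to-CXL split from the example would require the weighted
policy proposed in this series.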