From: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
hannes@cmpxchg.org, yosryahmed@google.com, nphamcs@gmail.com,
chengming.zhou@linux.dev, usamaarif642@gmail.com,
ryan.roberts@arm.com, ying.huang@intel.com, 21cnbao@gmail.com,
akpm@linux-foundation.org, linux-crypto@vger.kernel.org,
herbert@gondor.apana.org.au, davem@davemloft.net,
clabbe@baylibre.com, ardb@kernel.org, ebiggers@google.com,
surenb@google.com, kristen.c.accardi@intel.com,
zanussi@kernel.org
Cc: wajdi.k.feghali@intel.com, vinodh.gopal@intel.com,
kanchana.p.sridhar@intel.com
Subject: [PATCH v2 06/13] crypto: iaa - Change cpu-to-iaa mappings to evenly balance cores to IAAs.
Date: Sat, 2 Nov 2024 20:21:04 -0700
Message-ID: <20241103032111.333282-7-kanchana.p.sridhar@intel.com>
In-Reply-To: <20241103032111.333282-1-kanchana.p.sridhar@intel.com>
This change distributes the cpus more evenly among the IAAs in each socket.
Old algorithm to assign cpus to IAA:
------------------------------------
If "nr_cpus" = nr_logical_cpus (includes hyper-threading), the current
algorithm determines "nr_cpus_per_node" = nr_cpus / nr_nodes.
Hence, on a 2-socket Sapphire Rapids server where each socket has 56 cores
and 4 IAA devices, nr_cpus_per_node = 112.
Further, cpus_per_iaa = (nr_nodes * nr_cpus_per_node) / nr_iaa
Hence, cpus_per_iaa = 224/8 = 28.
The iaa_crypto driver then assigns 28 "logical" node cpus per IAA device
on that node, which results in this cpu-to-iaa mapping:
lscpu|grep NUMA
NUMA node(s):        2
NUMA node0 CPU(s):   0-55,112-167
NUMA node1 CPU(s):   56-111,168-223

NUMA node 0:
cpu    0-27     28-55    112-139   140-167
iaa    iax1     iax3     iax5      iax7

NUMA node 1:
cpu    56-83    84-111   168-195   196-223
iaa    iax9     iax11    iax13     iax15
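
To make the old mapping concrete, the following is a hypothetical
userspace model of the old assignment (illustration only, not driver
code; it hard-codes the 224-cpu topology above and numbers the IAA
devices 0-7, where index 0 corresponds to iax1, index 1 to iax3, and
so on):

  #include <stdio.h>

  int main(void)
  {
          int cpus_per_iaa = 224 / 8;     /* 28, per the old math above */
          int nr_iaa_per_node = 4;
          int node, half, i;

          /*
           * Each node's cpumask (e.g. 0-55,112-167) is walked in
           * ascending order; the target IAA advances every 28 cpus.
           */
          for (node = 0; node < 2; node++) {
                  int idx = 0;

                  for (half = 0; half < 2; half++) {
                          for (i = 0; i < 56; i++, idx++) {
                                  int cpu = node * 56 + half * 112 + i;
                                  int iaa = node * nr_iaa_per_node +
                                            idx / cpus_per_iaa;

                                  if (cpu == 0 || cpu == 112)
                                          printf("cpu %d -> IAA %d\n",
                                                 cpu, iaa);
                          }
                  }
          }
          return 0;       /* prints: cpu 0 -> IAA 0, cpu 112 -> IAA 2 */
  }

Hyperthread siblings cpu 0 and cpu 112 thus land on IAA 0 (iax1) and
IAA 2 (iax5) respectively, which is exactly issue (1) below.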
This appears non-optimal for a few reasons:
1) The two logical threads on a core get assigned to different IAA
   devices. For example:
      cpu 0:   iax1
      cpu 112: iax5
2) One of the logical threads on a core is assigned to an IAA that is
   not the closest one to that core, for example cpu 112.
3) If numactl is used to start processes sequentially on the logical
cores, some of the IAA devices on the socket could be over-subscribed,
while some could be under-utilized.
This patch introduces a scheme that balances the logical cpus more
evenly across the IAA devices on a socket.
New algorithm to assign cpus to IAA:
------------------------------------
We introduce a function "cpu_to_iaa()" that takes a logical cpu and
returns the IAA device closest to it.
If "nr_cpus" = nr_logical_cpus (includes hyper-threading), the new
algorithm determines "nr_cpus_per_node" = topology_num_cores_per_package().
Hence, on a 2-socket Sapphire Rapids server where each socket has 56 cores
and 4 IAA devices, nr_cpus_per_node = 56.
Further, cpus_per_iaa = (nr_nodes * nr_cpus_per_node) / nr_iaa
Hence, cpus_per_iaa = 112/8 = 14.
The iaa_crypto driver then assigns 14 "logical" node cpus per IAA device
on that node, which results in this cpu-to-iaa mapping:
NUMA node 0:
cpu    0-13,112-125    14-27,126-139   28-41,140-153   42-55,154-167
iaa    iax1            iax3            iax5            iax7

NUMA node 1:
cpu    56-69,168-181   70-83,182-195   84-97,196-209   98-111,210-223
iaa    iax9            iax11           iax13           iax15
This resolves the three cpu-to-iaa mapping issues pointed out earlier
for the existing approach.
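
As a sanity check of the mapping above, the following is a hypothetical
userspace model of the new cpu_to_iaa() (illustration only; it
hard-codes the same 2-socket, 224-cpu topology and substitutes plain
loops for the driver's cpumask iteration):

  #include <stdio.h>

  static int nr_cpus_per_node = 56;      /* topology_num_cores_per_package() */
  static int cpus_per_iaa = 112 / 8;     /* 14 */
  static int nr_iaa_per_node = 4;

  static int cpu_to_iaa(int cpu)
  {
          int node = (cpu % 112) >= 56;  /* node0: 0-55,112-167 */
          int iaa = 0, n_cpus = 0, half, i;

          /*
           * Walk the node's cpus in ascending order, resetting the
           * target IAA at each hyperthread "half" of the node and
           * advancing it every cpus_per_iaa cpus.
           */
          for (half = 0; half < 2; half++) {
                  for (i = 0; i < 56; i++) {
                          int test_cpu = node * 56 + half * 112 + i;

                          if ((n_cpus % nr_cpus_per_node) == 0)
                                  iaa = node * nr_iaa_per_node;
                          if (test_cpu == cpu)
                                  return iaa;
                          n_cpus++;
                          if ((n_cpus % cpus_per_iaa) == 0)
                                  iaa++;
                  }
          }
          return -1;
  }

  int main(void)
  {
          /* siblings now share an IAA: both print 0, i.e. iax1 */
          printf("cpu 0 -> IAA %d\n", cpu_to_iaa(0));
          printf("cpu 112 -> IAA %d\n", cpu_to_iaa(112));
          /* last core of node 0 maps to IAA 3, i.e. iax7 */
          printf("cpu 55 -> IAA %d\n", cpu_to_iaa(55));
          return 0;
  }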
Originally-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
---
drivers/crypto/intel/iaa/iaa_crypto_main.c | 84 ++++++++++++++--------
1 file changed, 54 insertions(+), 30 deletions(-)
diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index c4b143dd1ddd..a12a8f9caa84 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -55,6 +55,46 @@ static struct idxd_wq *wq_table_next_wq(int cpu)
 	return entry->wqs[entry->cur_wq];
 }
 
+/*
+ * Given a cpu, find the closest IAA instance. The idea is to try to
+ * choose the most appropriate IAA instance for a caller and spread
+ * available workqueues around to clients.
+ */
+static inline int cpu_to_iaa(int cpu)
+{
+	int node, n_cpus = 0, test_cpu, iaa = 0;
+	int nr_iaa_per_node;
+	const struct cpumask *node_cpus;
+
+	if (!nr_nodes)
+		return 0;
+
+	nr_iaa_per_node = nr_iaa / nr_nodes;
+	if (!nr_iaa_per_node)
+		return 0;
+
+	for_each_online_node(node) {
+		node_cpus = cpumask_of_node(node);
+		if (!cpumask_test_cpu(cpu, node_cpus))
+			continue;
+
+		for_each_cpu(test_cpu, node_cpus) {
+			if ((n_cpus % nr_cpus_per_node) == 0)
+				iaa = node * nr_iaa_per_node;
+
+			if (test_cpu == cpu)
+				return iaa;
+
+			n_cpus++;
+
+			if ((n_cpus % cpus_per_iaa) == 0)
+				iaa++;
+		}
+	}
+
+	return -1;
+}
+
 static void wq_table_add(int cpu, struct idxd_wq *wq)
 {
 	struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu);
@@ -895,8 +935,7 @@ static int wq_table_add_wqs(int iaa, int cpu)
  */
 static void rebalance_wq_table(void)
 {
-	const struct cpumask *node_cpus;
-	int node, cpu, iaa = -1;
+	int cpu, iaa;
 
 	if (nr_iaa == 0)
 		return;
@@ -906,37 +945,22 @@ static void rebalance_wq_table(void)
 
 	clear_wq_table();
 
-	if (nr_iaa == 1) {
-		for (cpu = 0; cpu < nr_cpus; cpu++) {
-			if (WARN_ON(wq_table_add_wqs(0, cpu))) {
-				pr_debug("could not add any wqs for iaa 0 to cpu %d!\n", cpu);
-				return;
-			}
-		}
-
-		return;
-	}
-
-	for_each_node_with_cpus(node) {
-		node_cpus = cpumask_of_node(node);
-
-		for (cpu = 0; cpu < cpumask_weight(node_cpus); cpu++) {
-			int node_cpu = cpumask_nth(cpu, node_cpus);
-
-			if (WARN_ON(node_cpu >= nr_cpu_ids)) {
-				pr_debug("node_cpu %d doesn't exist!\n", node_cpu);
-				return;
-			}
+	for (cpu = 0; cpu < nr_cpus; cpu++) {
+		iaa = cpu_to_iaa(cpu);
+		pr_debug("rebalance: cpu=%d iaa=%d\n", cpu, iaa);
 
-			if ((cpu % cpus_per_iaa) == 0)
-				iaa++;
+		if (WARN_ON(iaa == -1)) {
+			pr_debug("rebalance (cpu_to_iaa(%d)) failed!\n", cpu);
+			return;
+		}
 
-			if (WARN_ON(wq_table_add_wqs(iaa, node_cpu))) {
-				pr_debug("could not add any wqs for iaa %d to cpu %d!\n", iaa, cpu);
-				return;
-			}
+		if (WARN_ON(wq_table_add_wqs(iaa, cpu))) {
+			pr_debug("could not add any wqs for iaa %d to cpu %d!\n", iaa, cpu);
+			return;
 		}
 	}
+
+	pr_debug("Finished rebalance local wqs.\n");
 }
 
 static inline int check_completion(struct device *dev,
@@ -2332,7 +2356,7 @@ static int __init iaa_crypto_init_module(void)
 		pr_err("IAA couldn't find any nodes with cpus\n");
 		return -ENODEV;
 	}
-	nr_cpus_per_node = nr_cpus / nr_nodes;
+	nr_cpus_per_node = topology_num_cores_per_package();
 
 	if (crypto_has_comp("deflate-generic", 0, 0))
 		deflate_generic_tfm = crypto_alloc_comp("deflate-generic", 0, 0);
--
2.27.0