From: Huang Ying <ying.huang@intel.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org,
Arjan Van De Ven <arjan@linux.intel.com>,
Huang Ying <ying.huang@intel.com>,
Sudeep Holla <sudeep.holla@arm.com>,
Andrew Morton <akpm@linux-foundation.org>,
Mel Gorman <mgorman@techsingularity.net>,
Vlastimil Babka <vbabka@suse.cz>,
David Hildenbrand <david@redhat.com>,
Johannes Weiner <jweiner@redhat.com>,
Dave Hansen <dave.hansen@linux.intel.com>,
Michal Hocko <mhocko@suse.com>,
Pavel Tatashin <pasha.tatashin@soleen.com>,
Matthew Wilcox <willy@infradead.org>,
Christoph Lameter <cl@linux.com>
Subject: [PATCH 02/10] cacheinfo: calculate per-CPU data cache size
Date: Wed, 20 Sep 2023 14:18:48 +0800 [thread overview]
Message-ID: <20230920061856.257597-3-ying.huang@intel.com> (raw)
In-Reply-To: <20230920061856.257597-1-ying.huang@intel.com>
Per-CPU data cache size is useful information. For example, it can be
used to estimate how much data each CPU can keep cache-hot, which the
PCP (per-CPU pageset) high auto-tuning later in this series relies on.
So, in this patch, the data cache size for each CPU is calculated as
the sum of cache_size / shared_cpu_weight over the CPU's data and
unified cache leaves.
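
As an illustration (a made-up topology, not measured numbers): for a
CPU with a 48 KiB private L1d, a 1 MiB L2 shared by 2 SMT siblings and
a 32 MiB L3 shared by 16 CPUs, this gives

  48 KiB / 1 + 1024 KiB / 2 + 32768 KiB / 16
    = 48 KiB + 512 KiB + 2048 KiB
    = 2608 KiB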

When a CPU goes online or offline, the shared_cpu_map weight of the
caches it shares with other CPUs changes, so the per-CPU data cache
size of all online CPUs is recalculated. A brute-force iteration over
all online CPUs is used for this to avoid allocating an extra cpumask,
especially in the CPU offline callback. In the offline callback the
departing CPU is still set in cpu_online_mask, so it has to be skipped
explicitly.
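
For reference, a similar number can be derived from userspace through
the cacheinfo sysfs interface. The sketch below is purely illustrative
and not part of the patch; it assumes the usual
/sys/devices/system/cpu/cpuN/cache/indexM/{type,size,shared_cpu_list}
attributes, with "size" reported in KiB, and does only minimal error
handling.

/*
 * Illustrative userspace cross-check of the calculation above; not part
 * of the patch.  Sums size / weight(shared_cpu_list) over the data and
 * unified cache leaves of one CPU, mirroring update_data_cache_size_cpu().
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Read one line from a sysfs file and strip the trailing newline. */
static int read_str(const char *path, char *buf, size_t len)
{
        FILE *f = fopen(path, "r");

        if (!f || !fgets(buf, len, f)) {
                if (f)
                        fclose(f);
                return -1;
        }
        fclose(f);
        buf[strcspn(buf, "\n")] = '\0';
        return 0;
}

/* Count the CPUs in a list such as "0-3,8,10-11". */
static unsigned int list_weight(const char *list)
{
        unsigned int weight = 0, a, b;
        char *dup = strdup(list), *tok, *save;

        if (!dup)
                return 0;
        for (tok = strtok_r(dup, ",", &save); tok;
             tok = strtok_r(NULL, ",", &save)) {
                if (sscanf(tok, "%u-%u", &a, &b) == 2)
                        weight += b - a + 1;
                else if (sscanf(tok, "%u", &a) == 1)
                        weight++;
        }
        free(dup);
        return weight;
}

int main(int argc, char **argv)
{
        unsigned int cpu = argc > 1 ? atoi(argv[1]) : 0;
        unsigned long size_data = 0;            /* in KiB */

        for (unsigned int idx = 0; ; idx++) {
                char path[128], type[32], size[32], shared[256];
                unsigned int nr_shared;

                snprintf(path, sizeof(path),
                         "/sys/devices/system/cpu/cpu%u/cache/index%u/type",
                         cpu, idx);
                if (read_str(path, type, sizeof(type)))
                        break;                  /* no more cache leaves */
                if (strcmp(type, "Data") && strcmp(type, "Unified"))
                        continue;

                snprintf(path, sizeof(path),
                         "/sys/devices/system/cpu/cpu%u/cache/index%u/shared_cpu_list",
                         cpu, idx);
                if (read_str(path, shared, sizeof(shared)))
                        continue;
                nr_shared = list_weight(shared);
                if (!nr_shared)
                        continue;

                snprintf(path, sizeof(path),
                         "/sys/devices/system/cpu/cpu%u/cache/index%u/size",
                         cpu, idx);
                if (read_str(path, size, sizeof(size)))
                        continue;
                /* sysfs reports the size as "<n>K" */
                size_data += strtoul(size, NULL, 10) / nr_shared;
        }
        printf("cpu%u data cache share: %lu KiB\n", cpu, size_data);
        return 0;
}

Compiled with a plain cc and run as "./a.out <cpu>", it should print
roughly the value that ends up in cpu_cacheinfo::size_data for that
CPU, bar rounding and any leaves sysfs does not expose.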
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <jweiner@redhat.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Christoph Lameter <cl@linux.com>
---
drivers/base/cacheinfo.c | 42 ++++++++++++++++++++++++++++++++++++++-
include/linux/cacheinfo.h | 1 +
2 files changed, 42 insertions(+), 1 deletion(-)
diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
index cbae8be1fe52..3e8951a3fbab 100644
--- a/drivers/base/cacheinfo.c
+++ b/drivers/base/cacheinfo.c
@@ -898,6 +898,41 @@ static int cache_add_dev(unsigned int cpu)
return rc;
}

+static void update_data_cache_size_cpu(unsigned int cpu)
+{
+ struct cpu_cacheinfo *ci;
+ struct cacheinfo *leaf;
+ unsigned int i, nr_shared;
+ unsigned int size_data = 0;
+
+ if (!per_cpu_cacheinfo(cpu))
+ return;
+
+ ci = ci_cacheinfo(cpu);
+ for (i = 0; i < cache_leaves(cpu); i++) {
+ leaf = per_cpu_cacheinfo_idx(cpu, i);
+ if (leaf->type != CACHE_TYPE_DATA &&
+ leaf->type != CACHE_TYPE_UNIFIED)
+ continue;
+ nr_shared = cpumask_weight(&leaf->shared_cpu_map);
+ if (!nr_shared)
+ continue;
+ size_data += leaf->size / nr_shared;
+ }
+ ci->size_data = size_data;
+}
+
+static void update_data_cache_size(bool cpu_online, unsigned int cpu)
+{
+ unsigned int icpu;
+
+ for_each_online_cpu(icpu) {
+ if (!cpu_online && icpu == cpu)
+ continue;
+ update_data_cache_size_cpu(icpu);
+ }
+}
+
static int cacheinfo_cpu_online(unsigned int cpu)
{
int rc = detect_cache_attributes(cpu);
@@ -906,7 +941,11 @@ static int cacheinfo_cpu_online(unsigned int cpu)
return rc;
rc = cache_add_dev(cpu);
if (rc)
- free_cache_attributes(cpu);
+ goto err;
+ update_data_cache_size(true, cpu);
+ return 0;
+err:
+ free_cache_attributes(cpu);
return rc;
}

@@ -916,6 +955,7 @@ static int cacheinfo_cpu_pre_down(unsigned int cpu)
cpu_cache_sysfs_exit(cpu);

free_cache_attributes(cpu);
+ update_data_cache_size(false, cpu);
return 0;
}

diff --git a/include/linux/cacheinfo.h b/include/linux/cacheinfo.h
index a5cfd44fab45..4e7ccfa0c36d 100644
--- a/include/linux/cacheinfo.h
+++ b/include/linux/cacheinfo.h
@@ -73,6 +73,7 @@ struct cacheinfo {

struct cpu_cacheinfo {
struct cacheinfo *info_list;
+ unsigned int size_data;
unsigned int num_levels;
unsigned int num_leaves;
bool cpu_map_populated;
--
2.39.2