From: Yafang Shao <laoar.shao@gmail.com>
To: 42.hyeyoo@gmail.com, vbabka@suse.cz, ast@kernel.org,
daniel@iogearbox.net, andrii@kernel.org, kafai@fb.com,
songliubraving@fb.com, yhs@fb.com, john.fastabend@gmail.com,
kpsingh@kernel.org, sdf@google.com, haoluo@google.com,
jolsa@kernel.org, tj@kernel.org, dennis@kernel.org, cl@linux.com,
akpm@linux-foundation.org, penberg@kernel.org,
rientjes@google.com, iamjoonsoo.kim@lge.com,
roman.gushchin@linux.dev
Cc: linux-mm@kvack.org, bpf@vger.kernel.org,
Yafang Shao <laoar.shao@gmail.com>
Subject: [RFC PATCH bpf-next v2 03/11] mm: slab: rename obj_full_size()
Date: Thu, 12 Jan 2023 15:53:18 +0000
Message-ID: <20230112155326.26902-4-laoar.shao@gmail.com>
In-Reply-To: <20230112155326.26902-1-laoar.shao@gmail.com>

The name of the helper obj_full_size() is a little misleading: the size
it returns is only meaningful when kmemcg is enabled, and the helper is
currently only called from kmemcg-specific code. Rename it to
obj_kmemcg_size() to make that explicit.
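
For reference, here is the helper as it reads with this patch applied
(the body is unchanged by the patch; quoted as a sketch from mm/slab.h
in the base tree, since the hunk below cuts off before the return
statement). It charges the per-object obj_cgroup pointer on top of the
object size:

	/*
	 * This helper is only valid when kmemcg isn't disabled.
	 */
	static inline size_t obj_kmemcg_size(struct kmem_cache *s)
	{
		/*
		 * For each accounted object there is an extra space
		 * which is used to store obj_cgroup membership.
		 * Charge it too.
		 */
		return s->size + sizeof(struct obj_cgroup *);
	}

For a kmalloc-64 object on a 64-bit kernel, for example, the accounted
size is 64 + sizeof(struct obj_cgroup *) = 72 bytes.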
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
---
mm/slab.h | 17 ++++++++++-------
1 file changed, 10 insertions(+), 7 deletions(-)
diff --git a/mm/slab.h b/mm/slab.h
index 7cc4329..35e0b3b 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -467,7 +467,10 @@ static inline void memcg_free_slab_cgroups(struct slab *slab)
 	slab->memcg_data = 0;
 }
 
-static inline size_t obj_full_size(struct kmem_cache *s)
+/*
+ * This helper is only valid when kmemcg isn't disabled.
+ */
+static inline size_t obj_kmemcg_size(struct kmem_cache *s)
 {
 	/*
 	 * For each accounted object there is an extra space which is used
@@ -508,7 +511,7 @@ static inline bool memcg_slab_pre_alloc_hook(struct kmem_cache *s,
 			goto out;
 	}
 
-	if (obj_cgroup_charge(objcg, flags, objects * obj_full_size(s)))
+	if (obj_cgroup_charge(objcg, flags, objects * obj_kmemcg_size(s)))
 		goto out;
 
 	*objcgp = objcg;
@@ -537,7 +540,7 @@ static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
 			if (!slab_objcgs(slab) &&
 			    memcg_alloc_slab_cgroups(slab, s, flags,
 						     false)) {
-				obj_cgroup_uncharge(objcg, obj_full_size(s));
+				obj_cgroup_uncharge(objcg, obj_kmemcg_size(s));
 				continue;
 			}
 
@@ -545,9 +548,9 @@ static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
 			obj_cgroup_get(objcg);
 			slab_objcgs(slab)[off] = objcg;
 			mod_objcg_state(objcg, slab_pgdat(slab),
-					cache_vmstat_idx(s), obj_full_size(s));
+					cache_vmstat_idx(s), obj_kmemcg_size(s));
 		} else {
-			obj_cgroup_uncharge(objcg, obj_full_size(s));
+			obj_cgroup_uncharge(objcg, obj_kmemcg_size(s));
 		}
 	}
 	obj_cgroup_put(objcg);
@@ -576,9 +579,9 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
 			continue;
 
 		objcgs[off] = NULL;
-		obj_cgroup_uncharge(objcg, obj_full_size(s));
+		obj_cgroup_uncharge(objcg, obj_kmemcg_size(s));
 		mod_objcg_state(objcg, slab_pgdat(slab), cache_vmstat_idx(s),
-				-obj_full_size(s));
+				-obj_kmemcg_size(s));
 		obj_cgroup_put(objcg);
 	}
 }
--
1.8.3.1