From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: Rongwei Wang <rongwei.wang@linux.alibaba.com>,
Christoph Lameter <cl@linux.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
David Rientjes <rientjes@google.com>,
Pekka Enberg <penberg@kernel.org>,
Roman Gushchin <roman.gushchin@linux.dev>,
linux-mm@kvack.org, Feng Tang <feng.tang@intel.com>
Subject: Re: [PATCH] mm, slub: restrict sysfs validation to debug caches and make it safe
Date: Thu, 11 Aug 2022 06:53:18 +0000
Message-ID: <YvSnXqVIxSD1+xIL@ip-172-31-24-42.ap-northeast-1.compute.internal>
In-Reply-To: <20220809140043.9903-1-vbabka@suse.cz>
On Tue, Aug 09, 2022 at 04:00:43PM +0200, Vlastimil Babka wrote:
> Rongwei Wang reports [1] that cache validation triggered by writing to
> /sys/kernel/slab/<cache>/validate is racy against normal cache
> operations (e.g. freeing) in a way that can cause false positive
> inconsistency reports for caches with debugging enabled. The problem is
> that the debugging actions that mark an object free or active and the actual
> freelist operations are not atomic, and the validation can see an
> inconsistent state.
>
> For caches that do or don't have debugging enabled, additional races
> involving n->nr_slabs are possible that result in false reports of wrong
> slab counts.
>
> This patch attempts to solve these issues while not adding overhead to
> normal (especially fastpath) operations for caches that do not have
> debugging enabled. Such overhead would not be justified to make possible
> userspace-triggered validation safe. Instead, disable the validation for
> caches that don't have debugging enabled and make their sysfs validate
> handler return -EINVAL.
>
> For caches that do have debugging enabled, we can instead extend the
> existing approach of not using percpu freelists to force all alloc/free
> operations to the slow paths where debugging flags are checked and acted
> upon. There we can adjust the debug-specific paths to increase lock
> coverage against concurrent validation as necessary.
>
> The processing on free in free_debug_processing() already happens under
> n->list_lock and slab_lock() so we can extend it to actually do the
> freeing as well and thus make it atomic against concurrent validation.
>
> The processing on alloc in alloc_debug_processing() currently doesn't
> take any locks, but we have to first allocate the object from a slab on
> the partial list (as debugging caches have no percpu slabs) and thus
> take the n->list_lock anyway. Add a function alloc_single_from_partial()
> that additionally takes slab_lock() for the debug processing and then
> grabs just the allocated object instead of the whole freelist. This
> again makes it atomic against validation and it is also ultimately more
> efficient than the current grabbing of the whole freelist immediately followed by
> slab deactivation.
>
> To prevent races on n->nr_slabs updates, make sure that for caches with
> debugging enabled, inc_slabs_node() or dec_slabs_node() is called under
> n->list_lock. When allocating a new slab for a debug cache, handle the
> allocation by a new function alloc_single_from_new_slab() instead of the
> current forced deactivation path.
>
> Neither of these changes affect the fast paths at all. The changes in
> slow paths are negligible for non-debug caches.
>
> The function free_debug_processing() is moved so that it is placed
> later than the definitions of add_partial(), remove_partial() and
> discard_slab(), to avoid a need for forward declarations.
>
> [1] https://lore.kernel.org/all/20220529081535.69275-1-rongwei.wang@linux.alibaba.com/
>
I started to wonder...
Do we still need slab_lock() for debugging caches after this patch?
If SLUB never takes a slab off the partial list (except to move it to
another list or to free it), and if n->list_lock must be acquired before
touching the per-node partial list (to alloc/free objects),
why do we need slab_lock()?
Of course, it is still needed for architectures that do not support
cmpxchg_double().
But as the code for debugging caches is separated out after this patch,
maybe we can simply drop slab_lock() in:
- alloc_debug_processing()
- free_debug_processing()
- alloc_single_from_{new_slab,partial}()
- validate_slab()
And also the relevant comments.
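
For example, validate_slab() might then shrink to something like this
(just an untested sketch to illustrate the idea, assuming that holding
n->list_lock in validate_slab_node() is enough to keep the freelist
stable; the body is the current one with only the locking dropped):

static void validate_slab(struct kmem_cache *s, struct slab *slab,
			  unsigned long *obj_map)
{
	void *p;
	void *addr = slab_address(slab);

	/*
	 * No slab_lock() here: the caller holds n->list_lock, which
	 * after this patch serializes all alloc/free of a debug cache.
	 */
	if (!check_slab(s, slab) || !on_freelist(s, slab, NULL))
		return;

	/* Now we know that a valid freelist exists */
	__fill_map(obj_map, s, slab);
	for_each_object(p, s, slab, addr) {
		u8 val = test_bit(__obj_to_index(s, addr, p), obj_map) ?
			 SLUB_RED_INACTIVE : SLUB_RED_ACTIVE;

		if (!check_object(s, slab, p, val))
			break;
	}
}

If that holds, the same reasoning would apply to the slab_lock()/
slab_unlock() pairs in free_debug_processing() and the two
alloc_single_from_*() helpers, since they all already run with
n->list_lock held.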
> Reported-by: Rongwei Wang <rongwei.wang@linux.alibaba.com>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> ---
> Changes from RFC:
> - addressed feedback from Hyeonggon Yoo
> - rebased on current mainline
> The plan is to add to slab tree/linux-next after rc1. Please test and
> review.
> mm/slub.c | 334 ++++++++++++++++++++++++++++++++++++++----------------
> 1 file changed, 238 insertions(+), 96 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 862dbd9af4f5..6de667bcfe91 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1324,17 +1324,14 @@ static inline int alloc_consistency_checks(struct kmem_cache *s,
> }
>
> static noinline int alloc_debug_processing(struct kmem_cache *s,
> - struct slab *slab,
> - void *object, unsigned long addr)
> + struct slab *slab, void *object)
> {
> if (s->flags & SLAB_CONSISTENCY_CHECKS) {
> if (!alloc_consistency_checks(s, slab, object))
> goto bad;
> }
>
> - /* Success perform special debug activities for allocs */
> - if (s->flags & SLAB_STORE_USER)
> - set_track(s, object, TRACK_ALLOC, addr);
> + /* Success. Perform special debug activities for allocs */
> trace(s, slab, object, 1);
> init_object(s, object, SLUB_RED_ACTIVE);
> return 1;
> @@ -1385,63 +1382,6 @@ static inline int free_consistency_checks(struct kmem_cache *s,
> return 1;
> }
>
> -/* Supports checking bulk free of a constructed freelist */
> -static noinline int free_debug_processing(
> - struct kmem_cache *s, struct slab *slab,
> - void *head, void *tail, int bulk_cnt,
> - unsigned long addr)
> -{
> - struct kmem_cache_node *n = get_node(s, slab_nid(slab));
> - void *object = head;
> - int cnt = 0;
> - unsigned long flags, flags2;
> - int ret = 0;
> - depot_stack_handle_t handle = 0;
> -
> - if (s->flags & SLAB_STORE_USER)
> - handle = set_track_prepare();
> -
> - spin_lock_irqsave(&n->list_lock, flags);
> - slab_lock(slab, &flags2);
> -
> - if (s->flags & SLAB_CONSISTENCY_CHECKS) {
> - if (!check_slab(s, slab))
> - goto out;
> - }
> -
> -next_object:
> - cnt++;
> -
> - if (s->flags & SLAB_CONSISTENCY_CHECKS) {
> - if (!free_consistency_checks(s, slab, object, addr))
> - goto out;
> - }
> -
> - if (s->flags & SLAB_STORE_USER)
> - set_track_update(s, object, TRACK_FREE, addr, handle);
> - trace(s, slab, object, 0);
> - /* Freepointer not overwritten by init_object(), SLAB_POISON moved it */
> - init_object(s, object, SLUB_RED_INACTIVE);
> -
> - /* Reached end of constructed freelist yet? */
> - if (object != tail) {
> - object = get_freepointer(s, object);
> - goto next_object;
> - }
> - ret = 1;
> -
> -out:
> - if (cnt != bulk_cnt)
> - slab_err(s, slab, "Bulk freelist count(%d) invalid(%d)\n",
> - bulk_cnt, cnt);
> -
> - slab_unlock(slab, &flags2);
> - spin_unlock_irqrestore(&n->list_lock, flags);
> - if (!ret)
> - slab_fix(s, "Object at 0x%p not freed", object);
> - return ret;
> -}
> -
> /*
> * Parse a block of slub_debug options. Blocks are delimited by ';'
> *
> @@ -1661,9 +1601,9 @@ static inline
> void setup_slab_debug(struct kmem_cache *s, struct slab *slab, void *addr) {}
>
> static inline int alloc_debug_processing(struct kmem_cache *s,
> - struct slab *slab, void *object, unsigned long addr) { return 0; }
> + struct slab *slab, void *object) { return 0; }
>
> -static inline int free_debug_processing(
> +static inline void free_debug_processing(
> struct kmem_cache *s, struct slab *slab,
> void *head, void *tail, int bulk_cnt,
> 	unsigned long addr) {}
> @@ -1671,6 +1611,8 @@ static inline int free_debug_processing(
> static inline void slab_pad_check(struct kmem_cache *s, struct slab *slab) {}
> static inline int check_object(struct kmem_cache *s, struct slab *slab,
> void *object, u8 val) { return 1; }
> +static inline void set_track(struct kmem_cache *s, void *object,
> + enum track_item alloc, unsigned long addr) {}
> static inline void add_full(struct kmem_cache *s, struct kmem_cache_node *n,
> struct slab *slab) {}
> static inline void remove_full(struct kmem_cache *s, struct kmem_cache_node *n,
> @@ -1976,11 +1918,13 @@ static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
> */
> slab = alloc_slab_page(alloc_gfp, node, oo);
> if (unlikely(!slab))
> - goto out;
> + return NULL;
> stat(s, ORDER_FALLBACK);
> }
>
> slab->objects = oo_objects(oo);
> + slab->inuse = 0;
> + slab->frozen = 0;
>
> account_slab(slab, oo_order(oo), s, flags);
>
> @@ -2007,15 +1951,6 @@ static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
> set_freepointer(s, p, NULL);
> }
>
> - slab->inuse = slab->objects;
> - slab->frozen = 1;
> -
> -out:
> - if (!slab)
> - return NULL;
> -
> - inc_slabs_node(s, slab_nid(slab), slab->objects);
> -
> return slab;
> }
>
> @@ -2102,6 +2037,86 @@ static inline void remove_partial(struct kmem_cache_node *n,
> n->nr_partial--;
> }
>
> +/*
> + * Called only for kmem_cache_debug() caches instead of acquire_slab(), with a
> + * slab from the n->partial list. Removes only a single object from the slab
> + * under slab_lock(), does the alloc_debug_processing() checks and leaves the
> + * slab on the list, or moves it to full list if it was the last object.
> + */
> +static void *alloc_single_from_partial(struct kmem_cache *s,
> + struct kmem_cache_node *n, struct slab *slab)
> +{
> + void *object;
> + unsigned long flags;
> +
> + lockdep_assert_held(&n->list_lock);
> +
> + slab_lock(slab, &flags);
> +
> + object = slab->freelist;
> + slab->freelist = get_freepointer(s, object);
> + slab->inuse++;
> +
> + if (!alloc_debug_processing(s, slab, object)) {
> + slab_unlock(slab, &flags);
> + remove_partial(n, slab);
> + return NULL;
> + }
> +
> + slab_unlock(slab, &flags);
> +
> + if (slab->inuse == slab->objects) {
> + remove_partial(n, slab);
> + add_full(s, n, slab);
> + }
> +
> + return object;
> +}
> +
> +/*
> + * Called only for kmem_cache_debug() caches to allocate from a freshly
> + * allocated slab. Allocates a single object instead of whole freelist
> + * and puts the slab to the partial (or full) list.
> + */
> +static void *alloc_single_from_new_slab(struct kmem_cache *s,
> + struct slab *slab)
> +{
> + int nid = slab_nid(slab);
> + struct kmem_cache_node *n = get_node(s, nid);
> + unsigned long flags, flags2;
> + void *object;
> +
> + spin_lock_irqsave(&n->list_lock, flags);
> + slab_lock(slab, &flags2);
> +
> + object = slab->freelist;
> + slab->freelist = get_freepointer(s, object);
> + slab->inuse = 1;
> +
> + if (!alloc_debug_processing(s, slab, object)) {
> + /*
> + * It's not really expected that this would fail on a
> + * freshly allocated slab, but a concurrent memory
> + * corruption in theory could cause that.
> + */
> + slab_unlock(slab, &flags2);
> + spin_unlock_irqrestore(&n->list_lock, flags);
> + return NULL;
> + }
> +
> + slab_unlock(slab, &flags2);
> +
> + if (slab->inuse == slab->objects)
> + add_full(s, n, slab);
> + else
> + add_partial(n, slab, DEACTIVATE_TO_HEAD);
> +
> + inc_slabs_node(s, nid, slab->objects);
> + spin_unlock_irqrestore(&n->list_lock, flags);
> +
> + return object;
> +}
> +
> /*
> * Remove slab from the partial list, freeze it and
> * return the pointer to the freelist.
> @@ -2182,6 +2197,13 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
> if (!pfmemalloc_match(slab, gfpflags))
> continue;
>
> + if (kmem_cache_debug(s)) {
> + object = alloc_single_from_partial(s, n, slab);
> + if (object)
> + break;
> + continue;
> + }
> +
> t = acquire_slab(s, n, slab, object == NULL);
> if (!t)
> break;
> @@ -2788,6 +2810,114 @@ static inline unsigned long node_nr_objs(struct kmem_cache_node *n)
> {
> return atomic_long_read(&n->total_objects);
> }
> +
> +/* Supports checking bulk free of a constructed freelist */
> +static noinline void free_debug_processing(
> + struct kmem_cache *s, struct slab *slab,
> + void *head, void *tail, int bulk_cnt,
> + unsigned long addr)
> +{
> + struct kmem_cache_node *n = get_node(s, slab_nid(slab));
> + struct slab *slab_free = NULL;
> + void *object = head;
> + int cnt = 0;
> + unsigned long flags, flags2;
> + bool checks_ok = false;
> + depot_stack_handle_t handle = 0;
> +
> + if (s->flags & SLAB_STORE_USER)
> + handle = set_track_prepare();
> +
> + spin_lock_irqsave(&n->list_lock, flags);
> + slab_lock(slab, &flags2);
> +
> + if (s->flags & SLAB_CONSISTENCY_CHECKS) {
> + if (!check_slab(s, slab))
> + goto out;
> + }
> +
> + if (slab->inuse < bulk_cnt) {
> + slab_err(s, slab, "Slab has %d allocated objects but %d are to be freed\n",
> + slab->inuse, bulk_cnt);
> + goto out;
> + }
> +
> +next_object:
> +
> + if (++cnt > bulk_cnt)
> + goto out_cnt;
> +
> + if (s->flags & SLAB_CONSISTENCY_CHECKS) {
> + if (!free_consistency_checks(s, slab, object, addr))
> + goto out;
> + }
> +
> + if (s->flags & SLAB_STORE_USER)
> + set_track_update(s, object, TRACK_FREE, addr, handle);
> + trace(s, slab, object, 0);
> + /* Freepointer not overwritten by init_object(), SLAB_POISON moved it */
> + init_object(s, object, SLUB_RED_INACTIVE);
> +
> + /* Reached end of constructed freelist yet? */
> + if (object != tail) {
> + object = get_freepointer(s, object);
> + goto next_object;
> + }
> + checks_ok = true;
> +
> +out_cnt:
> + if (cnt != bulk_cnt)
> + slab_err(s, slab, "Bulk free expected %d objects but found %d\n",
> + bulk_cnt, cnt);
> +
> +out:
> + if (checks_ok) {
> + void *prior = slab->freelist;
> +
> + /* Perform the actual freeing while we still hold the locks */
> + slab->inuse -= cnt;
> + set_freepointer(s, tail, prior);
> + slab->freelist = head;
> +
> + slab_unlock(slab, &flags2);
> +
> + /* Do we need to remove the slab from full or partial list? */
> + if (!prior) {
> + remove_full(s, n, slab);
> + } else if (slab->inuse == 0) {
> + remove_partial(n, slab);
> + stat(s, FREE_REMOVE_PARTIAL);
> + }
> +
> + /* Do we need to discard the slab or add to partial list? */
> + if (slab->inuse == 0) {
> + slab_free = slab;
> + } else if (!prior) {
> + add_partial(n, slab, DEACTIVATE_TO_TAIL);
> + stat(s, FREE_ADD_PARTIAL);
> + }
> + } else {
> + slab_unlock(slab, &flags2);
> + }
> +
> + if (slab_free) {
> + /*
> + * Update the counters while still holding n->list_lock to
> + * prevent spurious validation warnings
> + */
> + dec_slabs_node(s, slab_nid(slab_free), slab_free->objects);
> + }
> +
> + spin_unlock_irqrestore(&n->list_lock, flags);
> +
> + if (!checks_ok)
> + slab_fix(s, "Object at 0x%p not freed", object);
> +
> + if (slab_free) {
> + stat(s, FREE_SLAB);
> + free_slab(s, slab_free);
> + }
> +}
> #endif /* CONFIG_SLUB_DEBUG */
>
> #if defined(CONFIG_SLUB_DEBUG) || defined(CONFIG_SYSFS)
> @@ -3036,36 +3166,52 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> return NULL;
> }
>
> + stat(s, ALLOC_SLAB);
> +
> + if (kmem_cache_debug(s)) {
> + freelist = alloc_single_from_new_slab(s, slab);
> +
> + if (unlikely(!freelist))
> + goto new_objects;
> +
> + if (s->flags & SLAB_STORE_USER)
> + set_track(s, freelist, TRACK_ALLOC, addr);
> +
> + return freelist;
> + }
> +
> /*
> * No other reference to the slab yet so we can
> * muck around with it freely without cmpxchg
> */
> freelist = slab->freelist;
> slab->freelist = NULL;
> + slab->inuse = slab->objects;
> + slab->frozen = 1;
>
> - stat(s, ALLOC_SLAB);
> + inc_slabs_node(s, slab_nid(slab), slab->objects);
>
> check_new_slab:
>
> if (kmem_cache_debug(s)) {
> - if (!alloc_debug_processing(s, slab, freelist, addr)) {
> - /* Slab failed checks. Next slab needed */
> - goto new_slab;
> - } else {
> - /*
> - * For debug case, we don't load freelist so that all
> - * allocations go through alloc_debug_processing()
> - */
> - goto return_single;
> - }
> + /*
> + * For debug caches here we had to go through
> + * alloc_single_from_partial() so just store the tracking info
> + * and return the object
> + */
> + if (s->flags & SLAB_STORE_USER)
> + set_track(s, freelist, TRACK_ALLOC, addr);
> + return freelist;
> }
>
> - if (unlikely(!pfmemalloc_match(slab, gfpflags)))
> + if (unlikely(!pfmemalloc_match(slab, gfpflags))) {
> /*
> * For !pfmemalloc_match() case we don't load freelist so that
> * we don't make further mismatched allocations easier.
> */
> - goto return_single;
> + deactivate_slab(s, slab, get_freepointer(s, freelist));
> + return freelist;
> + }
>
> retry_load_slab:
>
> @@ -3089,11 +3235,6 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> c->slab = slab;
>
> goto load_freelist;
> -
> -return_single:
> -
> - deactivate_slab(s, slab, get_freepointer(s, freelist));
> - return freelist;
> }
>
> /*
> @@ -3341,9 +3482,10 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
> if (kfence_free(head))
> return;
>
> - if (kmem_cache_debug(s) &&
> - !free_debug_processing(s, slab, head, tail, cnt, addr))
> + if (kmem_cache_debug(s)) {
> + free_debug_processing(s, slab, head, tail, cnt, addr);
> return;
> + }
>
> do {
> if (unlikely(n)) {
> @@ -3936,6 +4078,7 @@ static void early_kmem_cache_node_alloc(int node)
> slab = new_slab(kmem_cache_node, GFP_NOWAIT, node);
>
> BUG_ON(!slab);
> + inc_slabs_node(kmem_cache_node, slab_nid(slab), slab->objects);
> if (slab_nid(slab) != node) {
> pr_err("SLUB: Unable to allocate memory from node %d\n", node);
> pr_err("SLUB: Allocating a useless per node structure in order to be able to continue\n");
> @@ -3950,7 +4093,6 @@ static void early_kmem_cache_node_alloc(int node)
> n = kasan_slab_alloc(kmem_cache_node, n, GFP_KERNEL, false);
> slab->freelist = get_freepointer(kmem_cache_node, n);
> slab->inuse = 1;
> - slab->frozen = 0;
> kmem_cache_node->node[node] = n;
> init_kmem_cache_node(n);
> inc_slabs_node(kmem_cache_node, node, slab->objects);
> @@ -5601,7 +5743,7 @@ static ssize_t validate_store(struct kmem_cache *s,
> {
> int ret = -EINVAL;
>
> - if (buf[0] == '1') {
> + if (buf[0] == '1' && kmem_cache_debug(s)) {
> ret = validate_slab_cache(s);
> if (ret >= 0)
> ret = length;
> --
> 2.37.1
>