Date: Thu, 18 Mar 2021 12:47:53 +0100
From: Marco Elver
To: glittao@gmail.com
Cc: cl@linux.com, penberg@kernel.org, rientjes@google.com,
	iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, vbabka@suse.cz,
	shuah@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org
Subject: Re: [PATCH 1/2] selftests: add a kselftest for SLUB debugging functionality
References: <20210316124118.6874-1-glittao@gmail.com>
In-Reply-To: <20210316124118.6874-1-glittao@gmail.com>
User-Agent: Mutt/2.0.5 (2021-01-21)

On Tue, Mar 16, 2021 at 01:41PM +0100, glittao@gmail.com wrote:
> From: Oliver Glitta
>
> SLUB has a resiliency_test() function, which is hidden behind an
> #ifdef SLUB_RESILIENCY_TEST that is not part of Kconfig, so nobody
> runs it. A kselftest should be a proper replacement for it.
>
> Try changing a byte in the redzone after allocation, and changing the
> pointer to the next free node, the first byte, the 50th byte, and a
> redzone byte. Check that validation finds the errors.
>
> There are several differences from the original resiliency test: the
> tests create their own caches with a known state instead of corrupting
> the shared kmalloc caches.
>
> The corruption of the freepointer uses the correct offset; the
> original resiliency test was broken by the freepointer changes.
>
> The test that changed a random byte is dropped, because it is not
> meaningful in a form where we need deterministic results.
>
> Add new option CONFIG_TEST_SLUB in Kconfig.
>
> Add a parameter to validate_slab_cache() to return the number of
> errors in the cache.
>
> Signed-off-by: Oliver Glitta

No objection per se, but have you considered a KUnit-based test instead?
There is no user space portion required to run this test, and a pure
in-kernel KUnit test would be cleaner. Various bits of boilerplate
below, including the pr_err()s, the kselftest script, etc., would
simply not be necessary.

This is only a suggestion, but I just want to make sure you've
considered the option and weighed its pros and cons.

Thanks,
-- Marco

> ---
>  lib/Kconfig.debug                    |   4 +
>  lib/Makefile                         |   1 +
>  lib/test_slub.c                      | 125 +++++++++++++++++++++++++++
>  mm/slab.h                            |   1 +
>  mm/slub.c                            |  34 +++++---
>  tools/testing/selftests/lib/Makefile |   2 +-
>  tools/testing/selftests/lib/config   |   1 +
>  tools/testing/selftests/lib/slub.sh  |   3 +
>  8 files changed, 159 insertions(+), 12 deletions(-)
>  create mode 100644 lib/test_slub.c
>  create mode 100755 tools/testing/selftests/lib/slub.sh
>
> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> index 2779c29d9981..2d56092abbc4 100644
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -2123,6 +2123,10 @@ config TEST_KSTRTOX
>  config TEST_PRINTF
>  	tristate "Test printf() family of functions at runtime"
>
> +config TEST_SLUB
> +	tristate "Test SLUB cache errors at runtime"
> +	depends on SLUB_DEBUG
> +
>  config TEST_BITMAP
>  	tristate "Test bitmap_*() family of functions at runtime"
>  	help
> diff --git a/lib/Makefile b/lib/Makefile
> index b5307d3eec1a..b6603803b1c4 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -83,6 +83,7 @@ obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
>  obj-$(CONFIG_TEST_STATIC_KEYS) += test_static_keys.o
>  obj-$(CONFIG_TEST_STATIC_KEYS) += test_static_key_base.o
>  obj-$(CONFIG_TEST_PRINTF) += test_printf.o
> +obj-$(CONFIG_TEST_SLUB) += test_slub.o
>  obj-$(CONFIG_TEST_BITMAP) += test_bitmap.o
>  obj-$(CONFIG_TEST_STRSCPY) += test_strscpy.o
>  obj-$(CONFIG_TEST_UUID) += test_uuid.o
> diff --git a/lib/test_slub.c b/lib/test_slub.c
> new file mode 100644
> index 000000000000..0075d9b44251
> --- /dev/null
> +++ b/lib/test_slub.c
> @@ -0,0 +1,125 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Test cases for slub facility.
> + */
> +
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +
> +#include
> +#include
> +#include
> +#include
> +#include "../mm/slab.h"
> +
> +#include "../tools/testing/selftests/kselftest_module.h"
> +
> +KSTM_MODULE_GLOBALS();
> +
> +static void __init validate_result(struct kmem_cache *s, int expected_errors)
> +{
> +	int errors = 0;
> +
> +	validate_slab_cache(s, &errors);
> +	KSTM_CHECK_ZERO(errors - expected_errors);
> +}
> +
> +static void __init test_clobber_zone(void)
> +{
> +	struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_alloc", 64, 0,
> +				SLAB_RED_ZONE, NULL);
> +	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
> +
> +	p[64] = 0x12;
> +	pr_err("1. kmem_cache: Clobber Redzone 0x12->0x%p\n", p + 64);
> +
> +	validate_result(s, 1);
> +	kmem_cache_free(s, p);
> +	kmem_cache_destroy(s);
> +}
> +
> +static void __init test_next_pointer(void)
> +{
> +	struct kmem_cache *s = kmem_cache_create("TestSlub_next_ptr_free", 64, 0,
> +				SLAB_RED_ZONE, NULL);
> +	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
> +
> +	kmem_cache_free(s, p);
> +	p[s->offset] = 0x12;
> +	pr_err("1. kmem_cache: Clobber next pointer 0x34 -> -0x%p\n", p);
> +
> +	validate_result(s, 1);
> +	kmem_cache_destroy(s);
> +}
> +
> +static void __init test_first_word(void)
> +{
> +	struct kmem_cache *s = kmem_cache_create("TestSlub_1th_word_free", 64, 0,
> +				SLAB_POISON, NULL);
> +	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
> +
> +	kmem_cache_free(s, p);
> +	*p = 0x78;
> +	pr_err("2. kmem_cache: Clobber first word 0x78->0x%p\n", p);
> +
> +	validate_result(s, 1);
> +	kmem_cache_destroy(s);
> +}
> +
> +static void __init test_clobber_50th_byte(void)
> +{
> +	struct kmem_cache *s = kmem_cache_create("TestSlub_50th_word_free", 64, 0,
> +				SLAB_POISON, NULL);
> +	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
> +
> +	kmem_cache_free(s, p);
> +	p[50] = 0x9a;
> +	pr_err("3. kmem_cache: Clobber 50th byte 0x9a->0x%p\n", p);
> +
> +	validate_result(s, 1);
> +	kmem_cache_destroy(s);
> +}
> +
> +static void __init test_clobber_redzone_free(void)
> +{
> +	struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_free", 64, 0,
> +				SLAB_RED_ZONE, NULL);
> +	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
> +
> +	kmem_cache_free(s, p);
> +	p[64] = 0xab;
> +	pr_err("4. kmem_cache: Clobber redzone 0xab->0x%p\n", p);
> +
> +	validate_result(s, 1);
> +	kmem_cache_destroy(s);
> +}
> +
> +static void __init resiliency_test(void)
> +{
> +	BUILD_BUG_ON(KMALLOC_MIN_SIZE > 16 || KMALLOC_SHIFT_HIGH < 10);
> +
> +	pr_err("SLUB resiliency testing\n");
> +	pr_err("-----------------------\n");
> +	pr_err("A. Corruption after allocation\n");
> +
> +	test_clobber_zone();
> +
> +	pr_err("\nB. Corruption after free\n");
> +
> +	test_next_pointer();
> +	test_first_word();
> +	test_clobber_50th_byte();
> +	test_clobber_redzone_free();
> +}
> +
> +static void __init selftest(void)
> +{
> +	resiliency_test();
> +}
> +
> +KSTM_MODULE_LOADERS(test_slub);
> +MODULE_LICENSE("GPL");
> diff --git a/mm/slab.h b/mm/slab.h
> index 076582f58f68..5fc18d506b3b 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -215,6 +215,7 @@ DECLARE_STATIC_KEY_TRUE(slub_debug_enabled);
>  DECLARE_STATIC_KEY_FALSE(slub_debug_enabled);
>  #endif
>  extern void print_tracking(struct kmem_cache *s, void *object);
> +long validate_slab_cache(struct kmem_cache *s, int *errors);
>  #else
>  static inline void print_tracking(struct kmem_cache *s, void *object)
>  {
> diff --git a/mm/slub.c b/mm/slub.c
> index e26c274b4657..c00e2b263e03 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4612,7 +4612,8 @@ static int count_total(struct page *page)
>  #endif
>
>  #ifdef CONFIG_SLUB_DEBUG
> -static void validate_slab(struct kmem_cache *s, struct page *page)
> +static void validate_slab(struct kmem_cache *s, struct page *page,
> +			  int *errors)
>  {
>  	void *p;
>  	void *addr = page_address(page);
> @@ -4620,8 +4621,10 @@ static void validate_slab(struct kmem_cache *s, struct page *page)
>
>  	slab_lock(page);
>
> -	if (!check_slab(s, page) || !on_freelist(s, page, NULL))
> +	if (!check_slab(s, page) || !on_freelist(s, page, NULL)) {
> +		*errors += 1;
>  		goto unlock;
> +	}
>
>  	/* Now we know that a valid freelist exists */
>  	map = get_map(s, page);
> @@ -4629,8 +4632,10 @@ static void validate_slab(struct kmem_cache *s, struct page *page)
>  		u8 val = test_bit(__obj_to_index(s, addr, p), map) ?
>  			 SLUB_RED_INACTIVE : SLUB_RED_ACTIVE;
>
> -		if (!check_object(s, page, p, val))
> +		if (!check_object(s, page, p, val)) {
> +			*errors += 1;
>  			break;
> +		}
>  	}
>  	put_map(map);
>  unlock:
> @@ -4638,7 +4643,7 @@ static void validate_slab(struct kmem_cache *s, struct page *page)
>  }
>
>  static int validate_slab_node(struct kmem_cache *s,
> -			      struct kmem_cache_node *n)
> +			      struct kmem_cache_node *n, int *errors)
>  {
>  	unsigned long count = 0;
>  	struct page *page;
> @@ -4647,30 +4652,34 @@ static int validate_slab_node(struct kmem_cache *s,
>  	spin_lock_irqsave(&n->list_lock, flags);
>
>  	list_for_each_entry(page, &n->partial, slab_list) {
> -		validate_slab(s, page);
> +		validate_slab(s, page, errors);
>  		count++;
>  	}
> -	if (count != n->nr_partial)
> +	if (count != n->nr_partial) {
>  		pr_err("SLUB %s: %ld partial slabs counted but counter=%ld\n",
>  		       s->name, count, n->nr_partial);
> +		*errors += 1;
> +	}
>
>  	if (!(s->flags & SLAB_STORE_USER))
>  		goto out;
>
>  	list_for_each_entry(page, &n->full, slab_list) {
> -		validate_slab(s, page);
> +		validate_slab(s, page, errors);
>  		count++;
>  	}
> -	if (count != atomic_long_read(&n->nr_slabs))
> +	if (count != atomic_long_read(&n->nr_slabs)) {
>  		pr_err("SLUB: %s %ld slabs counted but counter=%ld\n",
>  		       s->name, count, atomic_long_read(&n->nr_slabs));
> +		*errors += 1;
> +	}
>
>  out:
>  	spin_unlock_irqrestore(&n->list_lock, flags);
>  	return count;
>  }
>
> -static long validate_slab_cache(struct kmem_cache *s)
> +long validate_slab_cache(struct kmem_cache *s, int *errors)
>  {
>  	int node;
>  	unsigned long count = 0;
> @@ -4678,10 +4687,12 @@ static long validate_slab_cache(struct kmem_cache *s)
>
>  	flush_all(s);
>  	for_each_kmem_cache_node(s, node, n)
> -		count += validate_slab_node(s, n);
> +		count += validate_slab_node(s, n, errors);
>
>  	return count;
>  }
> +EXPORT_SYMBOL(validate_slab_cache);
> +
>  /*
>   * Generate lists of code addresses where slabcache objects are allocated
>   * and freed.
> @@ -5336,9 +5347,10 @@ static ssize_t validate_store(struct kmem_cache *s,
>  			const char *buf, size_t length)
>  {
>  	int ret = -EINVAL;
> +	int errors = 0;
>
>  	if (buf[0] == '1') {
> -		ret = validate_slab_cache(s);
> +		ret = validate_slab_cache(s, &errors);
>  		if (ret >= 0)
>  			ret = length;
>  	}
> diff --git a/tools/testing/selftests/lib/Makefile b/tools/testing/selftests/lib/Makefile
> index a105f094676e..f168313b7949 100644
> --- a/tools/testing/selftests/lib/Makefile
> +++ b/tools/testing/selftests/lib/Makefile
> @@ -4,6 +4,6 @@
>  # No binaries, but make sure arg-less "make" doesn't trigger "run_tests"
>  all:
>
> -TEST_PROGS := printf.sh bitmap.sh prime_numbers.sh strscpy.sh
> +TEST_PROGS := printf.sh bitmap.sh prime_numbers.sh strscpy.sh slub.sh
>
>  include ../lib.mk
> diff --git a/tools/testing/selftests/lib/config b/tools/testing/selftests/lib/config
> index b80ee3f6e265..4190863032e7 100644
> --- a/tools/testing/selftests/lib/config
> +++ b/tools/testing/selftests/lib/config
> @@ -3,3 +3,4 @@ CONFIG_TEST_BITMAP=m
>  CONFIG_PRIME_NUMBERS=m
>  CONFIG_TEST_STRSCPY=m
>  CONFIG_TEST_BITOPS=m
> +CONFIG_TEST_SLUB=m
> \ No newline at end of file
> diff --git a/tools/testing/selftests/lib/slub.sh b/tools/testing/selftests/lib/slub.sh
> new file mode 100755
> index 000000000000..8b5757702910
> --- /dev/null
> +++ b/tools/testing/selftests/lib/slub.sh
> @@ -0,0 +1,3 @@
> +#!/bin/sh
> +# SPDX-License-Identifier: GPL-2.0+
> +$(dirname $0)/../kselftest/module.sh "slub" test_slub