From: Marco Elver <elver@google.com>
Date: Wed, 30 Nov 2022 10:06:45 +0100
Subject: Re: [PATCH v3 1/2] mm/slub, kunit: add SLAB_SKIP_KFENCE flag for cache creation
To: Feng Tang
Cc: Vlastimil Babka, Andrew Morton, Oliver Glitta, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org, linux-kernel@vger.kernel.org
In-Reply-To: <20221130085451.3390992-1-feng.tang@intel.com>
References: <20221130085451.3390992-1-feng.tang@intel.com>
On Wed, 30 Nov 2022 at 09:57, Feng Tang wrote:
>
> When kfence is enabled, the buffer allocated in the test case
> could come from a kfence pool, and the operation could also be
> caught and reported by kfence first, causing the case to fail.
>
> With the default kfence settings, this is very difficult to trigger.
> By changing CONFIG_KFENCE_NUM_OBJECTS from 255 to 16383, and
> CONFIG_KFENCE_SAMPLE_INTERVAL from 100 to 5, the allocation from
> kfence hit 7 times across different slub_kunit cases in 900 boot
> tests.
>
> To avoid this, we initially tried checking with is_kfence_address()
> and retrying the allocation until a non-kfence address was returned.
> Vlastimil Babka suggested that the SLAB_SKIP_KFENCE flag could be
> used instead, and that it would be better to add a wrapper function
> to simplify cache creation.
>
> Signed-off-by: Feng Tang

Reviewed-by: Marco Elver

> ---
> Changelog:
>
> since v2:
>  * Don't make SKIP_KFENCE an allowed flag for cache creation, and
>    fix a bug where cache creation could fail (Marco Elver)
>  * Add a wrapper cache-creation function to simplify the code,
>    including the SKIP_KFENCE handling (Vlastimil Babka)
>
>  lib/slub_kunit.c | 35 +++++++++++++++++++++++++----------
>  1 file changed, 25 insertions(+), 10 deletions(-)
>
> diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
> index 7a0564d7cb7a..5b0c8e7eb6dc 100644
> --- a/lib/slub_kunit.c
> +++ b/lib/slub_kunit.c
> @@ -9,10 +9,25 @@
>  static struct kunit_resource resource;
>  static int slab_errors;
>
> +/*
> + * Wrapper function for kmem_cache_create(), which reduces 2 parameters:
> + * 'align' and 'ctor', and sets SLAB_SKIP_KFENCE flag to avoid getting an
> + * object from kfence pool, where the operation could be caught by both
> + * our test and kfence sanity check.
> + */
> +static struct kmem_cache *test_kmem_cache_create(const char *name,
> +				unsigned int size, slab_flags_t flags)
> +{
> +	struct kmem_cache *s = kmem_cache_create(name, size, 0,
> +					(flags | SLAB_NO_USER_FLAGS), NULL);
> +	s->flags |= SLAB_SKIP_KFENCE;
> +	return s;
> +}
> +
>  static void test_clobber_zone(struct kunit *test)
>  {
> -	struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_alloc", 64, 0,
> -				SLAB_RED_ZONE|SLAB_NO_USER_FLAGS, NULL);
> +	struct kmem_cache *s = test_kmem_cache_create("TestSlub_RZ_alloc", 64,
> +							SLAB_RED_ZONE);
>  	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
>
>  	kasan_disable_current();
> @@ -29,8 +44,8 @@ static void test_clobber_zone(struct kunit *test)
>  #ifndef CONFIG_KASAN
>  static void test_next_pointer(struct kunit *test)
>  {
> -	struct kmem_cache *s = kmem_cache_create("TestSlub_next_ptr_free", 64, 0,
> -				SLAB_POISON|SLAB_NO_USER_FLAGS, NULL);
> +	struct kmem_cache *s = test_kmem_cache_create("TestSlub_next_ptr_free",
> +							64, SLAB_POISON);
>  	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
>  	unsigned long tmp;
>  	unsigned long *ptr_addr;
> @@ -74,8 +89,8 @@ static void test_next_pointer(struct kunit *test)
>
>  static void test_first_word(struct kunit *test)
>  {
> -	struct kmem_cache *s = kmem_cache_create("TestSlub_1th_word_free", 64, 0,
> -				SLAB_POISON|SLAB_NO_USER_FLAGS, NULL);
> +	struct kmem_cache *s = test_kmem_cache_create("TestSlub_1th_word_free",
> +							64, SLAB_POISON);
>  	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
>
>  	kmem_cache_free(s, p);
> @@ -89,8 +104,8 @@ static void test_first_word(struct kunit *test)
>
>  static void test_clobber_50th_byte(struct kunit *test)
>  {
> -	struct kmem_cache *s = kmem_cache_create("TestSlub_50th_word_free", 64, 0,
> -				SLAB_POISON|SLAB_NO_USER_FLAGS, NULL);
> +	struct kmem_cache *s = test_kmem_cache_create("TestSlub_50th_word_free",
> +							64, SLAB_POISON);
>  	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
>
>  	kmem_cache_free(s, p);
> @@ -105,8 +120,8 @@ static void test_clobber_50th_byte(struct kunit *test)
>
>  static void test_clobber_redzone_free(struct kunit *test)
>  {
> -	struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_free", 64, 0,
> -				SLAB_RED_ZONE|SLAB_NO_USER_FLAGS, NULL);
> +	struct kmem_cache *s = test_kmem_cache_create("TestSlub_RZ_free", 64,
> +							SLAB_RED_ZONE);
>  	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
>
>  	kasan_disable_current();
> --
> 2.34.1
>