From: Feng Tang
To: Vlastimil Babka, Marco Elver, Andrew Morton, Oliver Glitta,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Feng Tang
Subject: [PATCH v3 1/2] mm/slub, kunit: add SLAB_SKIP_KFENCE flag for cache creation
Date: Wed, 30 Nov 2022 16:54:50 +0800
Message-Id: <20221130085451.3390992-1-feng.tang@intel.com>

When kfence is enabled, the buffer allocated in a test case may come from
a kfence pool, and the operation can then be caught and reported by kfence
first, causing the test case to fail.

With the default kfence settings this is very hard to trigger. After
changing CONFIG_KFENCE_NUM_OBJECTS from 255 to 16383 and
CONFIG_KFENCE_SAMPLE_INTERVAL from 100 to 5, allocations from kfence were
hit 7 times in different slub_kunit cases across 900 boot tests.

To avoid this, we initially tried checking with is_kfence_address() and
repeating the allocation until a non-kfence address was returned. Vlastimil
Babka suggested using the SLAB_SKIP_KFENCE flag instead, and adding a
wrapper function to simplify cache creation.

Signed-off-by: Feng Tang
---
Changelog:

  since v2:
  * Don't make SKIP_KFENCE an allowed flag for cache creation, which also
    fixes a cache-creation failure (Marco Elver)
  * Add a wrapper cache-creation function to simplify the code, including
    the SKIP_KFENCE handling (Vlastimil Babka)

 lib/slub_kunit.c | 35 +++++++++++++++++++++++++----------
 1 file changed, 25 insertions(+), 10 deletions(-)
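
For reference, here is a rough sketch (not part of this patch) of the
is_kfence_address() retry approach that was tried first and then dropped
in favor of SLAB_SKIP_KFENCE; the helper name test_alloc_non_kfence() is
made up purely for illustration:

#include <linux/kfence.h>
#include <linux/slab.h>

/* Hypothetical helper: allocate until the object is not kfence-backed. */
static void *test_alloc_non_kfence(struct kmem_cache *s, gfp_t gfp)
{
        void *p;

        do {
                p = kmem_cache_alloc(s, gfp);
                /*
                 * Give a kfence-backed object back and retry; kfence
                 * allocations are sampled, so the loop should end quickly.
                 */
                if (p && is_kfence_address(p))
                        kmem_cache_free(s, p);
        } while (p && is_kfence_address(p));

        return p;
}

The SLAB_SKIP_KFENCE flag used below avoids this retry loop entirely by
keeping allocations from the test caches out of the kfence pool.
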
diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
index 7a0564d7cb7a..5b0c8e7eb6dc 100644
--- a/lib/slub_kunit.c
+++ b/lib/slub_kunit.c
@@ -9,10 +9,25 @@
 static struct kunit_resource resource;
 static int slab_errors;
 
+/*
+ * Wrapper function for kmem_cache_create(), which reduces 2 parameters:
+ * 'align' and 'ctor', and sets SLAB_SKIP_KFENCE flag to avoid getting an
+ * object from kfence pool, where the operation could be caught by both
+ * our test and kfence sanity check.
+ */
+static struct kmem_cache *test_kmem_cache_create(const char *name,
+		unsigned int size, slab_flags_t flags)
+{
+	struct kmem_cache *s = kmem_cache_create(name, size, 0,
+			(flags | SLAB_NO_USER_FLAGS), NULL);
+	s->flags |= SLAB_SKIP_KFENCE;
+	return s;
+}
+
 static void test_clobber_zone(struct kunit *test)
 {
-	struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_alloc", 64, 0,
-				SLAB_RED_ZONE|SLAB_NO_USER_FLAGS, NULL);
+	struct kmem_cache *s = test_kmem_cache_create("TestSlub_RZ_alloc", 64,
+				SLAB_RED_ZONE);
 	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
 
 	kasan_disable_current();
@@ -29,8 +44,8 @@ static void test_clobber_zone(struct kunit *test)
 #ifndef CONFIG_KASAN
 static void test_next_pointer(struct kunit *test)
 {
-	struct kmem_cache *s = kmem_cache_create("TestSlub_next_ptr_free", 64, 0,
-				SLAB_POISON|SLAB_NO_USER_FLAGS, NULL);
+	struct kmem_cache *s = test_kmem_cache_create("TestSlub_next_ptr_free",
+				64, SLAB_POISON);
 	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
 	unsigned long tmp;
 	unsigned long *ptr_addr;
@@ -74,8 +89,8 @@ static void test_next_pointer(struct kunit *test)
 
 static void test_first_word(struct kunit *test)
 {
-	struct kmem_cache *s = kmem_cache_create("TestSlub_1th_word_free", 64, 0,
-				SLAB_POISON|SLAB_NO_USER_FLAGS, NULL);
+	struct kmem_cache *s = test_kmem_cache_create("TestSlub_1th_word_free",
+				64, SLAB_POISON);
 	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
 
 	kmem_cache_free(s, p);
@@ -89,8 +104,8 @@ static void test_first_word(struct kunit *test)
 
 static void test_clobber_50th_byte(struct kunit *test)
 {
-	struct kmem_cache *s = kmem_cache_create("TestSlub_50th_word_free", 64, 0,
-				SLAB_POISON|SLAB_NO_USER_FLAGS, NULL);
+	struct kmem_cache *s = test_kmem_cache_create("TestSlub_50th_word_free",
+				64, SLAB_POISON);
 	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
 
 	kmem_cache_free(s, p);
@@ -105,8 +120,8 @@ static void test_clobber_50th_byte(struct kunit *test)
 
 static void test_clobber_redzone_free(struct kunit *test)
 {
-	struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_free", 64, 0,
-				SLAB_RED_ZONE|SLAB_NO_USER_FLAGS, NULL);
+	struct kmem_cache *s = test_kmem_cache_create("TestSlub_RZ_free", 64,
+				SLAB_RED_ZONE);
 	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
 
 	kasan_disable_current();
-- 
2.34.1