From: Marco Elver <elver@google.com>
Date: Mon, 19 Dec 2022 12:30:28 +0100
Subject: Re: [PATCH v3] kasan: allow sampling page_alloc allocations for HW_TAGS
To: andrey.konovalov@linux.dev
Cc: Andrew Morton, Andrey Konovalov, Alexander Potapenko, Dmitry Vyukov,
    Andrey Ryabinin, kasan-dev@googlegroups.com, Peter Collingbourne,
    Evgenii Stepanov, Florian Mayer, Jann Horn, Mark Brand,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
In-Reply-To: <323d51d422d497b3783dacb130af245f67d77671.1671228324.git.andreyknvl@google.com>

On Fri, 16 Dec 2022 at 23:17, <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov
>
> As Hardware Tag-Based KASAN is intended to be used in production, its
> performance impact is crucial. As page_alloc allocations tend to be big,
> tagging and checking all such allocations can introduce a significant
> slowdown.
>
> Add two new boot parameters that allow to alleviate that slowdown:
>
> - kasan.page_alloc.sample, which makes Hardware Tag-Based KASAN tag only
>   every Nth page_alloc allocation with the order configured by the second
>   added parameter (default: tag every such allocation).
>
> - kasan.page_alloc.sample.order, which makes sampling enabled by the first
>   parameter only affect page_alloc allocations with the order equal or
>   greater than the specified value (default: 3, see below).
>
> The exact performance improvement caused by using the new parameters
> depends on their values and the applied workload.
>
> The chosen default value for kasan.page_alloc.sample.order is 3, which
> matches both PAGE_ALLOC_COSTLY_ORDER and SKB_FRAG_PAGE_ORDER. This is done
> for two reasons:
>
> 1. PAGE_ALLOC_COSTLY_ORDER is "the order at which allocations are deemed
>    costly to service", which corresponds to the idea that only large and
>    thus costly allocations are supposed to sampled.
>
> 2. One of the workloads targeted by this patch is a benchmark that sends
>    a large amount of data over a local loopback connection. Most multi-page
>    data allocations in the networking subsystem have the order of
>    SKB_FRAG_PAGE_ORDER (or PAGE_ALLOC_COSTLY_ORDER).
>
> When running a local loopback test on a testing MTE-enabled device in sync
> mode, enabling Hardware Tag-Based KASAN introduces a ~50% slowdown.
> Applying this patch and setting kasan.page_alloc.sampling to a value higher
> than 1 allows to lower the slowdown. The performance improvement saturates
> around the sampling interval value of 10 with the default sampling page
> order of 3. This lowers the slowdown to ~20%. The slowdown in real
> scenarios involving the network will likely be better.
>
> Enabling page_alloc sampling has a downside: KASAN misses bad accesses to
> a page_alloc allocation that has not been tagged. This lowers the value of
> KASAN as a security mitigation.
>
> However, based on measuring the number of page_alloc allocations of
> different orders during boot in a test build, sampling with the default
> kasan.page_alloc.sample.order value affects only ~7% of allocations.
> The rest ~93% of allocations are still checked deterministically.
>
> Signed-off-by: Andrey Konovalov

On the whole:

Reviewed-by: Marco Elver <elver@google.com>

This looks much better, given it'll automatically do the right thing
without marking costly allocation sites.

Minor comments below.

> ---
>
> Changes v2-v3:
> - Drop __GFP_KASAN_SAMPLE flag.
> - Add kasan.page_alloc.sample.order.
> - Add fast-path for disabled sampling to kasan_sample_page_alloc.
>
> Changes v1->v2:
> - Only sample allocations when __GFP_KASAN_SAMPLE is provided to
>   alloc_pages().
> - Fix build when KASAN is disabled.
> - Add more information about the flag to documentation.
> - Use optimized preemption-safe approach for sampling suggested by Marco.
> ---
>  Documentation/dev-tools/kasan.rst | 16 +++++++++
>  include/linux/kasan.h             | 14 +++++---
>  mm/kasan/common.c                 |  9 +++--
>  mm/kasan/hw_tags.c                | 60 +++++++++++++++++++++++++++++++
>  mm/kasan/kasan.h                  | 27 ++++++++++++++
>  mm/page_alloc.c                   | 43 ++++++++++++++--------
>  6 files changed, 148 insertions(+), 21 deletions(-)
>
> diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
> index 5c93ab915049..d983e4fcee7c 100644
> --- a/Documentation/dev-tools/kasan.rst
> +++ b/Documentation/dev-tools/kasan.rst
> @@ -140,6 +140,22 @@ disabling KASAN altogether or controlling its features:
>  - ``kasan.vmalloc=off`` or ``=on`` disables or enables tagging of vmalloc
>    allocations (default: ``on``).
>
> +- ``kasan.page_alloc.sample=<sampling interval>`` makes KASAN tag only every
> +  Nth page_alloc allocation with the order equal or greater than
> +  ``kasan.page_alloc.sample.order``, where N is the value of the ``sample``
> +  parameter (default: ``1``, or tag every such allocation).
> +  This parameter is intended to mitigate the performance overhead introduced
> +  by KASAN.
> +  Note that enabling this parameter makes Hardware Tag-Based KASAN skip checks
> +  of allocations chosen by sampling and thus miss bad accesses to these
> +  allocations. Use the default value for accurate bug detection.
> +
> +- ``kasan.page_alloc.sample.order=<minimum page order>`` specifies the minimum
> +  order of allocations that are affected by sampling (default: ``3``).
> +  Only applies when ``kasan.page_alloc.sample`` is set to a non-default value.

"set to a value greater than 1"? The additional indirection through
"non-default" seems unnecessary.

> +  This parameter is intended to allow sampling only large page_alloc
> +  allocations, which is the biggest source of the performace overhead.

s/performace/performance/

> +
>  Error reports
>  ~~~~~~~~~~~~~
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 96c9d56e5510..5ebbaf672009 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -120,12 +120,13 @@ static __always_inline void kasan_poison_pages(struct page *page,
>                  __kasan_poison_pages(page, order, init);
>  }
>
> -void __kasan_unpoison_pages(struct page *page, unsigned int order, bool init);
> -static __always_inline void kasan_unpoison_pages(struct page *page,
> +bool __kasan_unpoison_pages(struct page *page, unsigned int order, bool init);
> +static __always_inline bool kasan_unpoison_pages(struct page *page,
>                                                  unsigned int order, bool init)
>  {
>          if (kasan_enabled())
> -                __kasan_unpoison_pages(page, order, init);
> +                return __kasan_unpoison_pages(page, order, init);
> +        return false;
>  }
>
>  void __kasan_cache_create_kmalloc(struct kmem_cache *cache);
> @@ -249,8 +250,11 @@ static __always_inline bool kasan_check_byte(const void *addr)
>  static inline void kasan_unpoison_range(const void *address, size_t size) {}
>  static inline void kasan_poison_pages(struct page *page, unsigned int order,
>                                        bool init) {}
> -static inline void kasan_unpoison_pages(struct page *page, unsigned int order,
> -                                        bool init) {}
> +static inline bool kasan_unpoison_pages(struct page *page, unsigned int order,
> +                                        bool init)
> +{
> +        return false;
> +}
>  static inline void kasan_cache_create_kmalloc(struct kmem_cache *cache) {}
>  static inline void kasan_poison_slab(struct slab *slab) {}
>  static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 833bf2cfd2a3..1d0008e1c420 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -95,19 +95,24 @@ asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
>  }
>  #endif /* CONFIG_KASAN_STACK */
>
> -void __kasan_unpoison_pages(struct page *page, unsigned int order, bool init)
> +bool __kasan_unpoison_pages(struct page *page, unsigned int order, bool init)
>  {
>          u8 tag;
>          unsigned long i;
>
>          if (unlikely(PageHighMem(page)))
> -                return;
> +                return false;
> +
> +        if (!kasan_sample_page_alloc(order))
> +                return false;
>
>          tag = kasan_random_tag();
>          kasan_unpoison(set_tag(page_address(page), tag),
>                         PAGE_SIZE << order, init);
>          for (i = 0; i < (1 << order); i++)
>                  page_kasan_tag_set(page + i, tag);
> +
> +        return true;
>  }
>
>  void __kasan_poison_pages(struct page *page, unsigned int order, bool init)
> diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
> index b22c4f461cb0..d1bcb0205327 100644
> --- a/mm/kasan/hw_tags.c
> +++ b/mm/kasan/hw_tags.c
> @@ -59,6 +59,24 @@ EXPORT_SYMBOL_GPL(kasan_mode);
>  /* Whether to enable vmalloc tagging. */
>  DEFINE_STATIC_KEY_TRUE(kasan_flag_vmalloc);
>
> +#define PAGE_ALLOC_SAMPLE_DEFAULT        1
> +#define PAGE_ALLOC_SAMPLE_ORDER_DEFAULT  3

Why not just set it to PAGE_ALLOC_COSTLY_ORDER?

> +/*
> + * Sampling interval of page_alloc allocation (un)poisoning.
> + * Defaults to no sampling.
> + */
> +unsigned long kasan_page_alloc_sample = PAGE_ALLOC_SAMPLE_DEFAULT;
> +
> +/*
> + * Minimum order of page_alloc allocations to be affected by sampling.
> + * The default value is chosen to match both
> + * PAGE_ALLOC_COSTLY_ORDER and SKB_FRAG_PAGE_ORDER.
> + */
> +unsigned int kasan_page_alloc_sample_order = PAGE_ALLOC_SAMPLE_ORDER_DEFAULT;
> +
> +DEFINE_PER_CPU(long, kasan_page_alloc_skip);
> +
>  /* kasan=off/on */
>  static int __init early_kasan_flag(char *arg)
>  {
> @@ -122,6 +140,48 @@ static inline const char *kasan_mode_info(void)
>                  return "sync";
>  }
>
> +/* kasan.page_alloc.sample=<sampling interval> */
> +static int __init early_kasan_flag_page_alloc_sample(char *arg)
> +{
> +        int rv;
> +
> +        if (!arg)
> +                return -EINVAL;
> +
> +        rv = kstrtoul(arg, 0, &kasan_page_alloc_sample);
> +        if (rv)
> +                return rv;
> +
> +        if (!kasan_page_alloc_sample || kasan_page_alloc_sample > LONG_MAX) {
> +                kasan_page_alloc_sample = PAGE_ALLOC_SAMPLE_DEFAULT;
> +                return -EINVAL;
> +        }
> +
> +        return 0;
> +}
> +early_param("kasan.page_alloc.sample", early_kasan_flag_page_alloc_sample);
> +
> +/* kasan.page_alloc.sample.order=<minimum page order> */
> +static int __init early_kasan_flag_page_alloc_sample_order(char *arg)
> +{
> +        int rv;
> +
> +        if (!arg)
> +                return -EINVAL;
> +
> +        rv = kstrtouint(arg, 0, &kasan_page_alloc_sample_order);
> +        if (rv)
> +                return rv;
> +
> +        if (kasan_page_alloc_sample_order > INT_MAX) {
> +                kasan_page_alloc_sample_order = PAGE_ALLOC_SAMPLE_ORDER_DEFAULT;
> +                return -EINVAL;
> +        }
> +
> +        return 0;
> +}
> +early_param("kasan.page_alloc.sample.order", early_kasan_flag_page_alloc_sample_order);
> +
>  /*
>   * kasan_init_hw_tags_cpu() is called for each CPU.
>   * Not marked as __init as a CPU can be hot-plugged after boot.
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index ea8cf1310b1e..32413f22aa82 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -42,6 +42,10 @@ enum kasan_mode {
>
>  extern enum kasan_mode kasan_mode __ro_after_init;
>
> +extern unsigned long kasan_page_alloc_sample;
> +extern unsigned int kasan_page_alloc_sample_order;
> +DECLARE_PER_CPU(long, kasan_page_alloc_skip);
> +
>  static inline bool kasan_vmalloc_enabled(void)
>  {
>          return static_branch_likely(&kasan_flag_vmalloc);
> @@ -57,6 +61,24 @@ static inline bool kasan_sync_fault_possible(void)
>          return kasan_mode == KASAN_MODE_SYNC || kasan_mode == KASAN_MODE_ASYMM;
>  }
>
> +static inline bool kasan_sample_page_alloc(unsigned int order)
> +{
> +        /* Fast-path for when sampling is disabled. */
> +        if (kasan_page_alloc_sample == 1)
> +                return true;
> +
> +        if (order < kasan_page_alloc_sample_order)
> +                return true;
> +
> +        if (this_cpu_dec_return(kasan_page_alloc_skip) < 0) {
> +                this_cpu_write(kasan_page_alloc_skip,
> +                               kasan_page_alloc_sample - 1);
> +                return true;
> +        }
> +
> +        return false;
> +}
> +
>  #else /* CONFIG_KASAN_HW_TAGS */
>
>  static inline bool kasan_async_fault_possible(void)
> @@ -69,6 +91,11 @@ static inline bool kasan_sync_fault_possible(void)
>          return true;
>  }
>
> +static inline bool kasan_sample_page_alloc(unsigned int order)
> +{
> +        return true;
> +}
> +
>  #endif /* CONFIG_KASAN_HW_TAGS */
>
>  #ifdef CONFIG_KASAN_GENERIC
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 0745aedebb37..7d980dc0000e 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1356,6 +1356,8 @@ static int free_tail_pages_check(struct page *head_page, struct page *page)
>   *    see the comment next to it.
>   * 3. Skipping poisoning is requested via __GFP_SKIP_KASAN_POISON,
>   *    see the comment next to it.
> + * 4. The allocation is excluded from being checked due to sampling,
> + *    see the call to kasan_unpoison_pages.
>   *
>   * Poisoning pages during deferred memory init will greatly lengthen the
>   * process and cause problem in large memory systems as the deferred pages
> @@ -2468,7 +2470,8 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
>  {
>          bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags) &&
>                          !should_skip_init(gfp_flags);
> -        bool init_tags = init && (gfp_flags & __GFP_ZEROTAGS);
> +        bool zero_tags = init && (gfp_flags & __GFP_ZEROTAGS);
> +        bool reset_tags = !zero_tags;
>          int i;
>
>          set_page_private(page, 0);
> @@ -2491,30 +2494,42 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
>           */
>
>          /*
> -         * If memory tags should be zeroed (which happens only when memory
> -         * should be initialized as well).
> +         * If memory tags should be zeroed
> +         * (which happens only when memory should be initialized as well).
>           */
> -        if (init_tags) {
> +        if (zero_tags) {
>                  /* Initialize both memory and tags. */
>                  for (i = 0; i != 1 << order; ++i)
>                          tag_clear_highpage(page + i);
>
> -                /* Note that memory is already initialized by the loop above. */
> +                /* Take note that memory was initialized by the loop above. */
>                  init = false;
>          }
>          if (!should_skip_kasan_unpoison(gfp_flags)) {
> -                /* Unpoison shadow memory or set memory tags. */
> -                kasan_unpoison_pages(page, order, init);
> -
> -                /* Note that memory is already initialized by KASAN. */
> -                if (kasan_has_integrated_init())
> -                        init = false;
> -        } else {
> -                /* Ensure page_address() dereferencing does not fault. */
> +                /* Try unpoisoning (or setting tags) and initializing memory. */
> +                if (kasan_unpoison_pages(page, order, init)) {
> +                        /* Take note that memory was initialized by KASAN. */
> +                        if (kasan_has_integrated_init())
> +                                init = false;
> +                        /* Take note that memory tags were set by KASAN. */
> +                        reset_tags = false;
> +                } else {
> +                        /*
> +                         * KASAN decided to exclude this allocation from being
> +                         * poisoned due to sampling. Skip poisoning as well.
> +                         */
> +                        SetPageSkipKASanPoison(page);
> +                }
> +        }
> +        /*
> +         * If memory tags have not been set, reset the page tags to ensure
> +         * page_address() dereferencing does not fault.
> +         */
> +        if (reset_tags) {
>                  for (i = 0; i != 1 << order; ++i)
>                          page_kasan_tag_reset(page + i);
>          }
> -        /* If memory is still not initialized, do it now. */
> +        /* If memory is still not initialized, initialize it now. */
>          if (init)
>                  kernel_init_pages(page, 1 << order);
>          /* Propagate __GFP_SKIP_KASAN_POISON to page flags. */
> --
> 2.25.1
>