Date: Thu, 26 May 2022 13:24:14 +0100
From: Catalin Marinas
To: Andrey Konovalov
Cc: Andrey Ryabinin, Will Deacon, Vincenzo Frascino, Peter Collingbourne,
    kasan-dev, Linux Memory Management List, Linux ARM
Subject: Re: [PATCH 0/3] kasan: Fix ordering between MTE tag colouring and page->flags
References: <20220517180945.756303-1-catalin.marinas@arm.com>

On Wed, May 25, 2022 at 07:41:08PM +0200, Andrey Konovalov wrote:
> On Wed, May 25, 2022 at 5:45 PM Catalin Marinas wrote:
> > > Adding __GFP_SKIP_KASAN_UNPOISON makes sense, but we still need to
> > > reset the tag in page->flags.
> >
> > My thought was to reset the tag in page->flags based on 'unpoison'
> > alone without any extra flags. We use this flag for vmalloc() pages but
> > it seems we don't reset the page tags (as we do via
> > kasan_poison_slab()).
>
> I just realized that we already have __GFP_ZEROTAGS that initializes
> both in-memory and page->flags tags.

IIUC it only zeroes the tags and skips the unpoisoning but
page_kasan_tag() remains unchanged.

> Currently only used for user
> pages allocated via alloc_zeroed_user_highpage_movable().
> Perhaps we can add this flag to GFP_HIGHUSER_MOVABLE?

I wouldn't add __GFP_ZEROTAGS to GFP_HIGHUSER_MOVABLE as we only need
it if the page is mapped with PROT_MTE. Clearing a page without tags
may be marginally faster.

> We'll also need to change the behavior of __GFP_ZEROTAGS to work even
> when GFP_ZERO is not set, but this doesn't seem to be a problem.

Why? We'd get unnecessary tag zeroing. We have these cases for
anonymous, private pages:

1. Zeroed page allocation without PROT_MTE: we need GFP_ZERO and
   page_kasan_tag_reset() in case of later mprotect(PROT_MTE).

2. Zeroed page allocation with PROT_MTE: we need GFP_ZERO,
   __GFP_ZEROTAGS and page_kasan_tag_reset().

3. CoW page allocation without PROT_MTE: copy data and we only need
   page_kasan_tag_reset() in case of later mprotect(PROT_MTE).

4. CoW page allocation with PROT_MTE: copy data and tags together with
   page_kasan_tag_reset().

So basically we always need page_kasan_tag_reset() for pages mapped in
user space even if they are not PROT_MTE, in case of a later
mprotect(PROT_MTE). For (1), (3) and (4) we don't need to zero the
tags. For (1) maybe we could do it as part of data zeroing (subject to
some benchmarks) but for (3) and (4) they'd be overridden by the copy
anyway.

> And, at this point, we can probably combine __GFP_ZEROTAGS with
> __GFP_SKIP_KASAN_POISON, as they both would target user pages.

For user pages, I think we should skip unpoisoning as well. We can keep
unpoisoning around but if we end up calling page_kasan_tag_reset(),
there's not much value, at least in page_address() accesses since the
pointer would match all tags. That's unless you want to detect other
stray pointers to such pages but we already skip the poisoning on free,
so it doesn't seem to be a use-case.

If we skip unpoisoning (not just poisoning as we already do) for user
pages, we should reset the tags in page->flags. Whether __GFP_ZEROTAGS
is passed is complementary, depending on the reason for allocation.
Currently if __GFP_ZEROTAGS is passed, the unpoisoning is skipped but I
think we should have just added __GFP_SKIP_KASAN_UNPOISON instead and
not add a new argument to should_skip_kasan_unpoison().
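As a reference point for the cases above: the arm64 side already sets
__GFP_ZEROTAGS only when the vma is mapped with PROT_MTE, i.e. case
(2). Roughly (quoting from memory, so treat it as a sketch rather than
the exact code):

struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
						unsigned long vaddr)
{
	gfp_t flags = GFP_HIGHUSER_MOVABLE | __GFP_ZERO;

	/*
	 * Only ask for zeroed tags if the page is mapped with PROT_MTE;
	 * clearing a page without tags may be marginally faster.
	 */
	if (vma->vm_flags & VM_MTE)
		flags |= __GFP_ZEROTAGS;

	return alloc_page_vma(flags, vma, vaddr);
}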
If we decide to always skip unpoisoning, something like below on top of
the vanilla kernel:

-------------8<-----------------
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 3e3d36fc2109..df0ec30524fb 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -348,7 +348,7 @@ struct vm_area_struct;
 #define GFP_DMA32	__GFP_DMA32
 #define GFP_HIGHUSER	(GFP_USER | __GFP_HIGHMEM)
 #define GFP_HIGHUSER_MOVABLE	(GFP_HIGHUSER | __GFP_MOVABLE | \
-			 __GFP_SKIP_KASAN_POISON)
+			 __GFP_SKIP_KASAN_POISON | __GFP_SKIP_KASAN_UNPOISON)
 #define GFP_TRANSHUGE_LIGHT	((GFP_HIGHUSER_MOVABLE | __GFP_COMP | \
 			 __GFP_NOMEMALLOC | __GFP_NOWARN) & ~__GFP_RECLAIM)
 #define GFP_TRANSHUGE	(GFP_TRANSHUGE_LIGHT | __GFP_DIRECT_RECLAIM)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0e42038382c1..3173e8f0e69a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2346,7 +2346,7 @@ static inline bool check_new_pcp(struct page *page, unsigned int order)
 }
 #endif /* CONFIG_DEBUG_VM */
 
-static inline bool should_skip_kasan_unpoison(gfp_t flags, bool init_tags)
+static inline bool should_skip_kasan_unpoison(gfp_t flags)
 {
 	/* Don't skip if a software KASAN mode is enabled. */
 	if (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
@@ -2358,12 +2358,10 @@ static inline bool should_skip_kasan_unpoison(gfp_t flags, bool init_tags)
 		return true;
 
 	/*
-	 * With hardware tag-based KASAN enabled, skip if either:
-	 *
-	 * 1. Memory tags have already been cleared via tag_clear_highpage().
-	 * 2. Skipping has been requested via __GFP_SKIP_KASAN_UNPOISON.
+	 * With hardware tag-based KASAN enabled, skip if this was requested
+	 * via __GFP_SKIP_KASAN_UNPOISON.
 	 */
-	return init_tags || (flags & __GFP_SKIP_KASAN_UNPOISON);
+	return flags & __GFP_SKIP_KASAN_UNPOISON;
 }
 
 static inline bool should_skip_init(gfp_t flags)
@@ -2416,7 +2414,7 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 		/* Note that memory is already initialized by the loop above. */
 		init = false;
 	}
-	if (!should_skip_kasan_unpoison(gfp_flags, init_tags)) {
+	if (!should_skip_kasan_unpoison(gfp_flags)) {
 		/* Unpoison shadow memory or set memory tags. */
 		kasan_unpoison_pages(page, order, init);
 
-------------8<-----------------

With the above, we can wire up page_kasan_tag_reset() to the
__GFP_SKIP_KASAN_UNPOISON check without any additional flags.

-- 
Catalin
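To make that last point concrete, the wiring in post_alloc_hook() could
look something like the below on top of the diff above. This is a
rough, untested sketch: the kasan_hw_tags_enabled() guard and the
per-page loop are assumptions about where the reset belongs, not a
tested change, and the existing unpoison branch is abbreviated.

	if (!should_skip_kasan_unpoison(gfp_flags)) {
		/* Unpoison shadow memory or set memory tags. */
		kasan_unpoison_pages(page, order, init);
	} else if (kasan_hw_tags_enabled()) {
		int i;

		/*
		 * Unpoisoning was skipped (user page): reset the tag in
		 * page->flags so that page_address() returns a pointer
		 * matching all tags and a later mprotect(PROT_MTE) starts
		 * from a known tag.
		 */
		for (i = 0; i != 1 << order; ++i)
			page_kasan_tag_reset(page + i);
	}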