Date: Tue, 21 Apr 2026 12:36:16 +0530
Message-ID: <5463cf34-bd8c-4ecb-b93a-fd8b2bd2976d@arm.com>
Subject: Re: [PATCH] mm/page_alloc: fix initialization of tags of the huge zero folio with init_on_free
From: Dev Jain <dev.jain@arm.com>
To: "David Hildenbrand (Arm)", Catalin Marinas, Will Deacon, Andrew Morton,
 Lorenzo Stoakes, "Liam R. Howlett", Vlastimil Babka, Mike Rapoport,
 Suren Baghdasaryan, Michal Hocko, Brendan Jackman, Johannes Weiner,
 Zi Yan, Lance Yang, Ryan Roberts
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, stable@vger.kernel.org
References: <20260420-zerotags-v1-1-3edc93e95bb4@kernel.org>
In-Reply-To: <20260420-zerotags-v1-1-3edc93e95bb4@kernel.org>
On 21/04/26 2:46 am, David Hildenbrand (Arm) wrote:
> __GFP_ZEROTAGS semantics are currently a bit weird, but effectively this
> flag is only ever set alongside __GFP_ZERO and __GFP_SKIP_KASAN.
>
> If we run with init_on_free, we will zero out pages during
> __free_pages_prepare(), to skip zeroing on the allocation path.
>
> However, when allocating with __GFP_ZEROTAGS set, post_alloc_hook() will
> consequently not only skip clearing page content, but also skip
> clearing tag memory.
>
> Not clearing tags through __GFP_ZEROTAGS is irrelevant for most pages that
> will get mapped to user space through set_pte_at() later: set_pte_at() and
> friends will detect that the tags have not been initialized yet
> (PG_mte_tagged not set), and initialize them.
>
> However, for the huge zero folio, which will be mapped through a PMD
> marked as special, this initialization will not be performed, ending up
> exposing whatever tags were still set for the pages.
>
> The docs (Documentation/arch/arm64/memory-tagging-extension.rst) state
> that allocation tags are set to 0 when a page is first mapped to user
> space. That no longer holds with the huge zero folio when init_on_free
> is enabled.
>
> Fix it by decoupling __GFP_ZEROTAGS from __GFP_ZERO, passing to
> tag_clear_highpages() whether we want to also clear page content.
>
> As we are touching the interface either way, just clean it up by
> only calling it when HW tags are enabled, dropping the return value, and
> dropping the common code stub.
>
> Reproduced with the huge zero folio by modifying the check_buffer_fill
> arm64/mte selftest to use a 2 MiB area, after making sure that pages have
> a non-0 tag set when freeing (note that, during boot, we will not
> actually initialize tags, but only set KASAN_TAG_KERNEL in the page
> flags).
>
> $ ./check_buffer_fill
> 1..20
> ...
> not ok 17 Check initial tags with private mapping, sync error mode and mmap memory
> not ok 18 Check initial tags with private mapping, sync error mode and mmap/mprotect memory
> ...
>
> This code needs more cleanups; we'll tackle that next, like
> decoupling __GFP_ZEROTAGS from __GFP_SKIP_KASAN, moving all the
> KASAN magic into a separate helper, and consolidating HW-tag handling.
>
> Fixes: adfb6609c680 ("mm/huge_memory: initialise the tags of the huge zero folio")
> Cc: stable@vger.kernel.org
> Signed-off-by: David Hildenbrand (Arm)
> ---
>  arch/arm64/include/asm/page.h |  3 ---
>  arch/arm64/mm/fault.c         | 16 +++++-----------
>  include/linux/gfp_types.h     | 10 +++++-----
>  include/linux/highmem.h       | 10 +---------
>  mm/page_alloc.c               | 12 +++++++-----
>  5 files changed, 18 insertions(+), 33 deletions(-)
>
> diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
> index e25d0d18f6d7..5c6cbfbbd34c 100644
> --- a/arch/arm64/include/asm/page.h
> +++ b/arch/arm64/include/asm/page.h
> @@ -33,9 +33,6 @@ struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
>  						unsigned long vaddr);
>  #define vma_alloc_zeroed_movable_folio vma_alloc_zeroed_movable_folio
>  
> -bool tag_clear_highpages(struct page *to, int numpages);
> -#define __HAVE_ARCH_TAG_CLEAR_HIGHPAGES
> -
>  #define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)
>  
>  typedef struct page *pgtable_t;
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index 0f3c5c7ca054..32a3723f2d34 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -1018,21 +1018,15 @@ struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
>  	return vma_alloc_folio(flags, 0, vma, vaddr);
>  }
>  
> -bool tag_clear_highpages(struct page *page, int numpages)
> +void tag_clear_highpages(struct page *page, int numpages, bool clear_pages)
>  {
> -	/*
> -	 * Check if MTE is supported and fall back to clear_highpage().
> -	 * get_huge_zero_folio() unconditionally passes __GFP_ZEROTAGS and
> -	 * post_alloc_hook() will invoke tag_clear_highpages().
> -	 */
> -	if (!system_supports_mte())
> -		return false;
> -
>  	/* Newly allocated pages, shouldn't have been tagged yet */
>  	for (int i = 0; i < numpages; i++, page++) {
>  		WARN_ON_ONCE(!try_page_mte_tagging(page));
> -		mte_zero_clear_page_tags(page_address(page));
> +		if (clear_pages)
> +			mte_zero_clear_page_tags(page_address(page));
> +		else
> +			mte_clear_page_tags(page_address(page));
>  		set_page_mte_tagged(page);
>  	}
> -	return true;
>  }
> diff --git a/include/linux/gfp_types.h b/include/linux/gfp_types.h
> index 6c75df30a281..fd53a6fba33f 100644
> --- a/include/linux/gfp_types.h
> +++ b/include/linux/gfp_types.h
> @@ -273,11 +273,11 @@ enum {
>   *
>   * %__GFP_ZERO returns a zeroed page on success.
>   *
> - * %__GFP_ZEROTAGS zeroes memory tags at allocation time if the memory itself
> - * is being zeroed (either via __GFP_ZERO or via init_on_alloc, provided that
> - * __GFP_SKIP_ZERO is not set). This flag is intended for optimization: setting
> - * memory tags at the same time as zeroing memory has minimal additional
> - * performance impact.
> + * %__GFP_ZEROTAGS zeroes memory tags at allocation time. This flag is intended
> + * for optimization: setting memory tags at the same time as zeroing memory
> + * (e.g., with __GFP_ZERO) has minimal additional performance impact. However,
> + * __GFP_ZEROTAGS also zeroes the tags even if memory is not getting zeroed at
> + * allocation time (e.g., with init_on_free).
>   *
>   * %__GFP_SKIP_KASAN makes KASAN skip unpoisoning on page allocation.
>   * Used for userspace and vmalloc pages; the latter are unpoisoned by
> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
> index af03db851a1d..62f589baa343 100644
> --- a/include/linux/highmem.h
> +++ b/include/linux/highmem.h
> @@ -345,15 +345,7 @@ static inline void clear_highpage_kasan_tagged(struct page *page)
>  	kunmap_local(kaddr);
>  }
>  
> -#ifndef __HAVE_ARCH_TAG_CLEAR_HIGHPAGES
> -
> -/* Return false to let people know we did not initialize the pages */
> -static inline bool tag_clear_highpages(struct page *page, int numpages)
> -{
> -	return false;
> -}
> -
> -#endif
> +void tag_clear_highpages(struct page *to, int numpages, bool clear_pages);
>  
>  /*
>   * If we pass in a base or tail page, we can zero up to PAGE_SIZE.
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 65e205111553..8c6821d25a00 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1808,9 +1808,9 @@ static inline bool should_skip_init(gfp_t flags)
>  inline void post_alloc_hook(struct page *page, unsigned int order,
>  			    gfp_t gfp_flags)
>  {
> +	const bool zero_tags = kasan_hw_tags_enabled() && (gfp_flags & __GFP_ZEROTAGS);

Sashiko: https://sashiko.dev/#/patchset/20260420-zerotags-v1-1-3edc93e95bb4%40kernel.org

PROT_MTE works without KASAN_HW_TAGS, so probably just retain the
system_supports_mte() check in tag_clear_highpages(), and document that
__GFP_ZEROTAGS is only for MTE?
>  	bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags) &&
>  			!should_skip_init(gfp_flags);
> -	bool zero_tags = init && (gfp_flags & __GFP_ZEROTAGS);
>  	int i;
>  
>  	set_page_private(page, 0);
> @@ -1832,11 +1832,13 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
>  	 */
>  
>  	/*
> -	 * If memory tags should be zeroed
> -	 * (which happens only when memory should be initialized as well).
> +	 * Clearing tags can efficiently clear the memory for us as well, if
> +	 * required.
>  	 */
> -	if (zero_tags)
> -		init = !tag_clear_highpages(page, 1 << order);
> +	if (zero_tags) {
> +		tag_clear_highpages(page, 1 << order, /* clear_pages= */init);

Micro-nit: missing space after the "/* clear_pages= */" comment.

> +		init = false;
> +	}
>  
>  	if (!should_skip_kasan_unpoison(gfp_flags) &&
>  	    kasan_unpoison_pages(page, order, init)) {
>
> ---
> base-commit: f1541b40cd422d7e22273be9b7e9edfc9ea4f0d7
> change-id: 20260417-zerotags-343a3673e18d
>
> Best regards,