From: Dmitry Vyukov
Date: Wed, 28 Oct 2020 12:28:09 +0100
Subject: Re: [PATCH v5 02/40] arm64: mte: Add in-kernel MTE helpers
To: Andrey Konovalov
Cc: Catalin Marinas, Will Deacon, Vincenzo Frascino, kasan-dev, Andrey Ryabinin, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Elena Petrova, Branislav Rankov, Kevin Brodsky, Andrew Morton, Linux ARM, Linux-MM, LKML

On Mon, Oct 12, 2020 at 10:44 PM Andrey Konovalov wrote:
>
> From: Vincenzo Frascino
>
> Provide helper functions to manipulate allocation and pointer tags for
> kernel addresses.
>
> Low-level helper functions (mte_assign_*, written in assembly) operate
> on tag values from the [0x0, 0xF] range. High-level helper functions
> (mte_get/set_*) use the [0xF0, 0xFF] range to preserve compatibility
> with normal kernel pointers that have 0xFF in their top byte.
>
> MTE_GRANULE_SIZE and related definitions are moved to the mte-def.h
> header, which doesn't have any dependencies and is safe to include
> into any low-level header.
>
> Signed-off-by: Vincenzo Frascino
> Co-developed-by: Andrey Konovalov
> Signed-off-by: Andrey Konovalov
> Reviewed-by: Catalin Marinas
> ---
> Change-Id: I1b5230254f90dc21a913447cb17f07fea7944ece
> ---
>  arch/arm64/include/asm/esr.h       |  1 +
>  arch/arm64/include/asm/mte-def.h   | 15 ++++++++
>  arch/arm64/include/asm/mte-kasan.h | 56 ++++++++++++++++++++++++++++++
>  arch/arm64/include/asm/mte.h       | 20 +++++++----
>  arch/arm64/kernel/mte.c            | 48 +++++++++++++++++++++++++
>  arch/arm64/lib/mte.S               | 16 +++++++++
>  6 files changed, 150 insertions(+), 6 deletions(-)
>  create mode 100644 arch/arm64/include/asm/mte-def.h
>  create mode 100644 arch/arm64/include/asm/mte-kasan.h
>
> diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
> index 035003acfa87..bc0dc66a6a27 100644
> --- a/arch/arm64/include/asm/esr.h
> +++ b/arch/arm64/include/asm/esr.h
> @@ -103,6 +103,7 @@
>  #define ESR_ELx_FSC		(0x3F)
>  #define ESR_ELx_FSC_TYPE	(0x3C)
>  #define ESR_ELx_FSC_EXTABT	(0x10)
> +#define ESR_ELx_FSC_MTE		(0x11)
>  #define ESR_ELx_FSC_SERROR	(0x11)
>  #define ESR_ELx_FSC_ACCESS	(0x08)
>  #define ESR_ELx_FSC_FAULT	(0x04)
> diff --git a/arch/arm64/include/asm/mte-def.h b/arch/arm64/include/asm/mte-def.h
> new file mode 100644
> index 000000000000..8401ac5840c7
> --- /dev/null
> +++ b/arch/arm64/include/asm/mte-def.h
> @@ -0,0 +1,15 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 ARM Ltd.
> + */
> +#ifndef __ASM_MTE_DEF_H
> +#define __ASM_MTE_DEF_H
> +
> +#define MTE_GRANULE_SIZE	UL(16)
> +#define MTE_GRANULE_MASK	(~(MTE_GRANULE_SIZE - 1))
> +#define MTE_TAG_SHIFT		56
> +#define MTE_TAG_SIZE		4
> +#define MTE_TAG_MASK		GENMASK((MTE_TAG_SHIFT + (MTE_TAG_SIZE - 1)), MTE_TAG_SHIFT)
> +#define MTE_TAG_MAX		(MTE_TAG_MASK >> MTE_TAG_SHIFT)
> +
> +#endif /* __ASM_MTE_DEF_H */
> diff --git a/arch/arm64/include/asm/mte-kasan.h b/arch/arm64/include/asm/mte-kasan.h
> new file mode 100644
> index 000000000000..3a70fb1807fd
> --- /dev/null
> +++ b/arch/arm64/include/asm/mte-kasan.h
> @@ -0,0 +1,56 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020 ARM Ltd.
> + */
> +#ifndef __ASM_MTE_KASAN_H
> +#define __ASM_MTE_KASAN_H
> +
> +#include <asm/mte-def.h>
> +
> +#ifndef __ASSEMBLY__
> +
> +#include <linux/types.h>
> +
> +/*
> + * The functions below are meant to be used only for the
> + * KASAN_HW_TAGS interface defined in asm/memory.h.
> + */
> +#ifdef CONFIG_ARM64_MTE
> +
> +static inline u8 mte_get_ptr_tag(void *ptr)
> +{
> +	/* Note: The format of KASAN tags is 0xF<x> */
> +	u8 tag = 0xF0 | (u8)(((u64)(ptr)) >> MTE_TAG_SHIFT);
> +
> +	return tag;
> +}
> +
> +u8 mte_get_mem_tag(void *addr);
> +u8 mte_get_random_tag(void);
> +void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag);
> +
> +#else /* CONFIG_ARM64_MTE */
> +
> +static inline u8 mte_get_ptr_tag(void *ptr)
> +{
> +	return 0xFF;
> +}
> +
> +static inline u8 mte_get_mem_tag(void *addr)
> +{
> +	return 0xFF;
> +}
> +static inline u8 mte_get_random_tag(void)
> +{
> +	return 0xFF;
> +}
> +static inline void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
> +{
> +	return addr;
> +}
> +
> +#endif /* CONFIG_ARM64_MTE */
> +
> +#endif /* __ASSEMBLY__ */
> +
> +#endif /* __ASM_MTE_KASAN_H */
> diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
> index 1c99fcadb58c..cf1cd181dcb2 100644
> --- a/arch/arm64/include/asm/mte.h
> +++ b/arch/arm64/include/asm/mte.h
> @@ -5,14 +5,16 @@
>  #ifndef __ASM_MTE_H
>  #define __ASM_MTE_H
>
> -#define MTE_GRANULE_SIZE	UL(16)
> -#define MTE_GRANULE_MASK	(~(MTE_GRANULE_SIZE - 1))
> -#define MTE_TAG_SHIFT		56
> -#define MTE_TAG_SIZE		4
> +#include <asm/compiler.h>
> +#include <asm/mte-def.h>
> +
> +#define __MTE_PREAMBLE		ARM64_ASM_PREAMBLE ".arch_extension memtag\n"
>
>  #ifndef __ASSEMBLY__
>
> +#include <linux/bitfield.h>
>  #include <linux/page-flags.h>
> +#include <linux/types.h>
>
>  #include <asm/pgtable-types.h>
>
> @@ -45,7 +47,9 @@ long get_mte_ctrl(struct task_struct *task);
>  int mte_ptrace_copy_tags(struct task_struct *child, long request,
>  			 unsigned long addr, unsigned long data);
>
> -#else
> +void mte_assign_mem_tag_range(void *addr, size_t size);
> +
> +#else /* CONFIG_ARM64_MTE */
>
>  /* unused if !CONFIG_ARM64_MTE, silence the compiler */
>  #define PG_mte_tagged	0
> @@ -80,7 +84,11 @@ static inline int mte_ptrace_copy_tags(struct task_struct *child,
>  	return -EIO;
>  }
>
> -#endif
> +static inline void mte_assign_mem_tag_range(void *addr, size_t size)
> +{
> +}
> +
> +#endif /* CONFIG_ARM64_MTE */
>
>  #endif /* __ASSEMBLY__ */
>  #endif /* __ASM_MTE_H */
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index 52a0638ed967..8f99c65837fd 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -13,10 +13,13 @@
>  #include <linux/sched/mm.h>
>  #include <linux/string.h>
>  #include <linux/thread_info.h>
> +#include <linux/types.h>
>  #include <linux/uio.h>
>
> +#include <asm/barrier.h>
>  #include <asm/cpufeature.h>
>  #include <asm/mte.h>
> +#include <asm/mte-kasan.h>
>  #include <asm/ptrace.h>
>  #include <asm/sysreg.h>
>
> @@ -72,6 +75,51 @@ int memcmp_pages(struct page *page1, struct page *page2)
>  	return ret;
>  }
>
> +u8 mte_get_mem_tag(void *addr)
> +{
> +	if (!system_supports_mte())
> +		return 0xFF;
> +
> +	asm(__MTE_PREAMBLE "ldg %0, [%0]"
> +	    : "+r" (addr));
> +
> +	return mte_get_ptr_tag(addr);
> +}
> +
> +u8 mte_get_random_tag(void)
> +{
> +	void *addr;
> +
> +	if (!system_supports_mte())
> +		return 0xFF;
> +
> +	asm(__MTE_PREAMBLE "irg %0, %0"
> +	    : "+r" (addr));
> +
> +	return mte_get_ptr_tag(addr);
> +}
> +
> +void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
> +{
> +	void *ptr = addr;
> +
> +	if ((!system_supports_mte()) || (size == 0))
> +		return addr;
> +
> +	/* Make sure that size is MTE granule aligned. */
> +	WARN_ON(size & (MTE_GRANULE_SIZE - 1));
> +
> +	/* Make sure that the address is MTE granule aligned. */
> +	WARN_ON((u64)addr & (MTE_GRANULE_SIZE - 1));
> +
> +	tag = 0xF0 | tag;
> +	ptr = (void *)__tag_set(ptr, tag);
> +
> +	mte_assign_mem_tag_range(ptr, size);

This function will be called on production hot paths. I think it makes
sense to shave off some overheads here. The additional debug checks may
be useful, so maybe we need an additional debug mode (debug of
MTE/KASAN itself)?

Do we ever call this when !system_supports_mte()? I think we wanted to
have static_if's higher up the stack. Having additional checks
scattered across lower-level functions is overhead for every
malloc/free.

Looking at how this is called from KASAN code: KASAN code already
ensures addr/size are properly aligned. I think we should either remove
the duplicate alignment checks or do them only in the additional
debugging mode. Does KASAN also ensure a proper tag value (0xF0 mask)?

The KASAN wrapper is inlined in this patch:
https://linux-review.googlesource.com/c/linux/kernel/git/torvalds/linux/+/3699
but here we still have 2 non-inlined calls. The mte_assign_mem_tag_range
call is kinda inherent since it's in .S, but then I think this wrapper
should be inlinable. Also, can we move mte_assign_mem_tag_range into
inline asm in the header? That would avoid register spills around the
call in malloc/free.

The asm code seems to do the rounding of the size up at no additional
cost (it checks remaining size > 0, right?). I think it makes sense to
document that as the contract and remove the additional
round_up(size, KASAN_GRANULE_SIZE) in KASAN code.
> +	return ptr;
> +}
> +
>  static void update_sctlr_el1_tcf0(u64 tcf0)
>  {
>  	/* ISB required for the kernel uaccess routines */
> diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
> index 03ca6d8b8670..ede1ea65428c 100644
> --- a/arch/arm64/lib/mte.S
> +++ b/arch/arm64/lib/mte.S
> @@ -149,3 +149,19 @@ SYM_FUNC_START(mte_restore_page_tags)
>
>  	ret
>  SYM_FUNC_END(mte_restore_page_tags)
> +
> +/*
> + * Assign allocation tags for a region of memory based on the pointer tag
> + *   x0 - source pointer
> + *   x1 - size
> + *
> + * Note: The address must be non-NULL and MTE_GRANULE_SIZE aligned and
> + * size must be non-zero and MTE_GRANULE_SIZE aligned.
> + */
> +SYM_FUNC_START(mte_assign_mem_tag_range)
> +1:	stg	x0, [x0]
> +	add	x0, x0, #MTE_GRANULE_SIZE
> +	subs	x1, x1, #MTE_GRANULE_SIZE
> +	b.gt	1b
> +	ret
> +SYM_FUNC_END(mte_assign_mem_tag_range)
> --
> 2.28.0.1011.ga647a8990f-goog
>
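[A hypothetical header-inline variant of the loop above, along the lines the review suggests: a sketch only, arm64-specific and untested, assuming __MTE_PREAMBLE and MTE_GRANULE_SIZE from this patch. It mirrors the STG loop in mte.S so the compiler can inline the tagging into malloc/free paths instead of spilling registers around a call.]

```c
/* Sketch, not a definitive implementation: same contract as the .S
 * version (addr granule-aligned, size non-zero; a trailing partial
 * granule is tagged whole, i.e. size is rounded up). */
static inline void mte_assign_mem_tag_range_inline(void *addr, size_t size)
{
	u64 curr = (u64)addr;
	u64 end = curr + size;

	do {
		/* Tag one granule with the tag in curr's top byte. */
		asm volatile(__MTE_PREAMBLE "stg %0, [%0]"
			     : : "r" (curr) : "memory");
		curr += MTE_GRANULE_SIZE;
	} while (curr < end);
}
```

Whether this beats the .S version would need measurement; the "memory" clobber is conservative and may itself inhibit some optimization around the loop.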