From: Andrey Konovalov
Date: Thu, 27 Aug 2020 14:46:18 +0200
Subject: Re: [PATCH 20/35] arm64: mte: Add in-kernel MTE helpers
To: Catalin Marinas
Cc: Dmitry Vyukov, Vincenzo Frascino, kasan-dev, Andrey Ryabinin, Alexander Potapenko, Marco Elver, Evgenii Stepanov, Elena Petrova, Branislav Rankov, Kevin Brodsky, Will Deacon, Andrew Morton, Linux ARM, Linux Memory Management List, LKML
In-Reply-To: <20200827093808.GB29264@gaia>
References: <2cf260bdc20793419e32240d2a3e692b0adf1f80.1597425745.git.andreyknvl@google.com> <20200827093808.GB29264@gaia>

On Thu, Aug 27, 2020 at 11:38 AM Catalin Marinas wrote:
>
> On Fri, Aug 14, 2020 at 07:27:02PM +0200, Andrey Konovalov wrote:
> > diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
> > index 1c99fcadb58c..733be1cb5c95 100644
> > --- a/arch/arm64/include/asm/mte.h
> > +++ b/arch/arm64/include/asm/mte.h
> > @@ -5,14 +5,19 @@
> >  #ifndef __ASM_MTE_H
> >  #define __ASM_MTE_H
> >
> > -#define MTE_GRANULE_SIZE	UL(16)
> > +#include
>
> So the reason for this move is to include it in asm/cache.h. Fine by
> me but...
>
> >  #define MTE_GRANULE_MASK	(~(MTE_GRANULE_SIZE - 1))
> >  #define MTE_TAG_SHIFT		56
> >  #define MTE_TAG_SIZE		4
> > +#define MTE_TAG_MASK		GENMASK((MTE_TAG_SHIFT + (MTE_TAG_SIZE - 1)), MTE_TAG_SHIFT)
> > +#define MTE_TAG_MAX		(MTE_TAG_MASK >> MTE_TAG_SHIFT)
>
> ... I'd rather move all these definitions in a file with a more
> meaningful name like mte-def.h. The _asm implies being meant for .S
> files inclusion which isn't the case.

Sounds good, I'll leave fixing this and other arm64-specific comments
to Vincenzo.
I'll change KASAN code to use mte-def.h once I have patches where this
file is renamed.

> > diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> > index eb39504e390a..e2d708b4583d 100644
> > --- a/arch/arm64/kernel/mte.c
> > +++ b/arch/arm64/kernel/mte.c
> > @@ -72,6 +74,47 @@ int memcmp_pages(struct page *page1, struct page *page2)
> >  	return ret;
> >  }
> >
> > +u8 mte_get_mem_tag(void *addr)
> > +{
> > +	if (system_supports_mte())
> > +		addr = mte_assign_valid_ptr_tag(addr);
>
> The mte_assign_valid_ptr_tag() is slightly misleading. All it does is
> read the allocation tag from memory.
>
> I also think this should be inline asm, possibly using alternatives.
> It's just an LDG instruction (and it saves us from having to invent a
> better function name).
>
> > +
> > +	return 0xF0 | mte_get_ptr_tag(addr);
> > +}
> > +
> > +u8 mte_get_random_tag(void)
> > +{
> > +	u8 tag = 0xF;
> > +
> > +	if (system_supports_mte())
> > +		tag = mte_get_ptr_tag(mte_assign_random_ptr_tag(NULL));
>
> Another alternative inline asm with an IRG instruction.
>
> > +
> > +	return 0xF0 | tag;
> > +}
> > +
> > +void * __must_check mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
> > +{
> > +	void *ptr = addr;
> > +
> > +	if ((!system_supports_mte()) || (size == 0))
> > +		return addr;
> > +
> > +	tag = 0xF0 | (tag & 0xF);
> > +	ptr = (void *)__tag_set(ptr, tag);
> > +	size = ALIGN(size, MTE_GRANULE_SIZE);
>
> I think aligning the size is dangerous. Can we instead turn it into a
> WARN_ON if not already aligned? At a quick look, the callers of
> kasan_{un,}poison_memory() already align the size.
>
> > +
> > +	mte_assign_mem_tag_range(ptr, size);
> > +
> > +	/*
> > +	 * mte_assign_mem_tag_range() can be invoked in a multi-threaded
> > +	 * context, ensure that tags are written in memory before the
> > +	 * reference is used.
> > +	 */
> > +	smp_wmb();
> > +
> > +	return ptr;
>
> I'm not sure I understand the barrier here.
> It ensures the relative
> ordering of memory (or tag) accesses on a CPU as observed by other CPUs.
> While the first access here is setting the tag, I can't see what other
> access on _this_ CPU it is ordered with.
>
> > +}
> > +
> >  static void update_sctlr_el1_tcf0(u64 tcf0)
> >  {
> >  	/* ISB required for the kernel uaccess routines */
> > diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
> > index 03ca6d8b8670..8c743540e32c 100644
> > --- a/arch/arm64/lib/mte.S
> > +++ b/arch/arm64/lib/mte.S
> > @@ -149,3 +149,44 @@ SYM_FUNC_START(mte_restore_page_tags)
> >
> >  	ret
> >  SYM_FUNC_END(mte_restore_page_tags)
> > +
> > +/*
> > + * Assign pointer tag based on the allocation tag
> > + * x0 - source pointer
> > + * Returns:
> > + *	x0 - pointer with the correct tag to access memory
> > + */
> > +SYM_FUNC_START(mte_assign_valid_ptr_tag)
> > +	ldg	x0, [x0]
> > +	ret
> > +SYM_FUNC_END(mte_assign_valid_ptr_tag)
> > +
> > +/*
> > + * Assign random pointer tag
> > + * x0 - source pointer
> > + * Returns:
> > + *	x0 - pointer with a random tag
> > + */
> > +SYM_FUNC_START(mte_assign_random_ptr_tag)
> > +	irg	x0, x0
> > +	ret
> > +SYM_FUNC_END(mte_assign_random_ptr_tag)
>
> As I said above, these two can be inline asm.
>
> > +
> > +/*
> > + * Assign allocation tags for a region of memory based on the pointer tag
> > + * x0 - source pointer
> > + * x1 - size
> > + *
> > + * Note: size is expected to be MTE_GRANULE_SIZE aligned
> > + */
> > +SYM_FUNC_START(mte_assign_mem_tag_range)
> > +	/* if (src == NULL) return; */
> > +	cbz	x0, 2f
> > +	/* if (size == 0) return; */
>
> You could skip the cbz here and just document that the size should be
> non-zero and aligned. The caller already takes care of this check.
>
> > +	cbz	x1, 2f
> > +1:	stg	x0, [x0]
> > +	add	x0, x0, #MTE_GRANULE_SIZE
> > +	sub	x1, x1, #MTE_GRANULE_SIZE
> > +	cbnz	x1, 1b
> > +2:	ret
> > +SYM_FUNC_END(mte_assign_mem_tag_range)
>
> --
> Catalin