From mboxrd@z Thu Jan 1 00:00:00 1970
References: <20210204124914.GC20468@willie-the-truck>
 <20210204155346.88028-1-lecopzer@gmail.com>
 <20210204175659.GC21303@willie-the-truck>
In-Reply-To: <20210204175659.GC21303@willie-the-truck>
From: Lecopzer Chen
Date: Fri, 5 Feb 2021 02:32:27 +0800
Subject: Re: [PATCH v2 0/4] arm64: kasan: support CONFIG_KASAN_VMALLOC
To: Will Deacon
Cc: Andrew Morton, Andrey Konovalov, ardb@kernel.org, aryabinin@virtuozzo.com,
 broonie@kernel.org, catalin.marinas@arm.com, dan.j.williams@intel.com,
 dvyukov@google.com, glider@google.com, gustavoars@kernel.org,
 kasan-dev@googlegroups.com, lecopzer.chen@mediatek.com,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-mediatek@lists.infradead.org, linux-mm@kvack.org, linux@roeck-us.net,
 robin.murphy@arm.com, rppt@kernel.org, tyhicks@linux.microsoft.com,
 vincenzo.frascino@arm.com, yj.chiang@mediatek.com

On Thu, Feb 04, 2021 at 11:53:46PM +0800, Lecopzer Chen wrote:
> > > On Sat, Jan 09, 2021 at 06:32:48PM +0800, Lecopzer Chen wrote:
> > > > Linux supports KASAN for VMALLOC since commit 3c5c3cfb9ef4da9
> > > > ("kasan: support backing vmalloc space with real shadow memory")
> > > >
> > > > According to how x86 ported it [1], they early allocated p4d and pgd,
> > > > but in arm64 I just simulate how KASAN supports MODULES_VADDR in arm64
> > > > by not populating the vmalloc area except for the kimg address.
> > >
> > > The one thing I've failed to grok from your series is how you deal with
> > > vmalloc allocations where the shadow overlaps with the shadow which has
> > > already been allocated for the kernel image. Please can you explain?
> >
> > The key point is that we don't map anything in the vmalloc shadow address.
> > So we don't care where the kernel image is located inside the vmalloc area.
> >
> >   kasan_map_populate(kimg_shadow_start, kimg_shadow_end, ...)
> >
> > The kernel image was populated with a real mapping in its shadow address.
> > I 'bypass' the whole shadow of the vmalloc area; the only place you can
> > find vmalloc_shadow mentioned is
> >       kasan_populate_early_shadow((void *)vmalloc_shadow_end,
> >                                   (void *)KASAN_SHADOW_END);
> >
> >    ----------- vmalloc_shadow_start
> >   |           |
> >   |           |
> >   |           | <= non-mapping
> >   |           |
> >   |           |
> >   |-----------|
> >   |///////////| <- kimage shadow with page table mapping
> >   |-----------|
> >   |           |
> >   |           | <= non-mapping
> >   |           |
> >    ----------- vmalloc_shadow_end
> >   |00000000000|
> >   |00000000000| <= Zero shadow
> >   |00000000000|
> >    ----------- KASAN_SHADOW_END
> >
> > The vmalloc shadow will be mapped 'on demand'; see kasan_populate_vmalloc()
> > in mm/vmalloc.c for details.
> > So the shadow of vmalloc will be allocated later if anyone uses its VA.
>
> Indeed, but the question I'm asking is what happens when an on-demand shadow
> allocation from vmalloc overlaps with the shadow that we allocated early for
> the kernel image?
>
> Sounds like I have to go and read the code...

Oh, sorry, I misunderstood your question.

FWIW, I think this won't happen, because it would mean vmalloc() handed out
a VA that is already allocated for the kernel image. As far as I know,
vmalloc_init() inserts the early-allocated vmas into its vmalloc rb tree,
and these early-allocated vmas include the kernel image.

After a quick review of the mm init code, this early vma allocation is done
in map_kernel() in arch/arm64/mm/mmu.c.

BRs,
Lecopzer