References: <20241005164813.2475778-1-snovitoll@gmail.com> <20241005164813.2475778-2-snovitoll@gmail.com>
From: Sabyrzhan Tasbolatov <snovitoll@gmail.com>
Date: Tue, 8 Oct 2024 13:46:17 +0500
Subject: Re: [PATCH v2 1/1] mm, kasan, kmsan: copy_from/to_kernel_nofault
To: Marco Elver
Cc: akpm@linux-foundation.org, andreyknvl@gmail.com, bpf@vger.kernel.org, dvyukov@google.com, glider@google.com, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, ryabinin.a.a@gmail.com, syzbot+61123a5daeb9f7454599@syzkaller.appspotmail.com, vincenzo.frascino@arm.com
Content-Type: text/plain; charset="UTF-8"
On Tue, Oct 8, 2024 at 1:32 PM Marco Elver wrote:
>
> On Sat, Oct 05, 2024 at 09:48PM +0500, Sabyrzhan Tasbolatov wrote:
> > Instrument copy_from_kernel_nofault() with KMSAN for uninitialized kernel
> > memory check and copy_to_kernel_nofault() with KASAN, KCSAN to detect
> > the memory corruption.
> >
> > syzbot reported that the bpf_probe_read_kernel() kernel helper triggered a
> > KASAN report via kasan_check_range(), which is not the expected behaviour,
> > as copy_from_kernel_nofault() is meant to be a non-faulting helper.
> >
> > The solution, suggested by Marco Elver, is to replace the KASAN, KCSAN
> > check in copy_from_kernel_nofault() with KMSAN detection of copying
> > uninitialized kernel memory. In copy_to_kernel_nofault() we can retain
> > instrument_write() for the memory corruption instrumentation, but before
> > pagefault_disable().
>
> I don't understand why it has to be before the whole copy, i.e. before
> pagefault_disable()?

I was unsure about this decision as well - I should've waited for your
response before sending the patch when I was asking for clarification.
Sorry for the confusion; I thought the instrumentation you meant was
already done after pagefault_disable().

Let me send v3 with your suggested diff. I will also ask Andrew to drop
the patch already merged to -mm:
https://lore.kernel.org/all/20241008020150.4795AC4CEC6@smtp.kernel.org/

Thanks for the review.

> I think my suggestion was to only check the memory where no fault
> occurred. See below.
> > diff --git a/mm/maccess.c b/mm/maccess.c
> > index 518a25667323..a91a39a56cfd 100644
> > --- a/mm/maccess.c
> > +++ b/mm/maccess.c
> > @@ -15,7 +15,7 @@ bool __weak copy_from_kernel_nofault_allowed(const void *unsafe_src,
> >
> >  #define copy_from_kernel_nofault_loop(dst, src, len, type, err_label)	\
> >  	while (len >= sizeof(type)) {					\
> > -		__get_kernel_nofault(dst, src, type, err_label);		\
> > +		__get_kernel_nofault(dst, src, type, err_label);	\
> >  		dst += sizeof(type);					\
> >  		src += sizeof(type);					\
> >  		len -= sizeof(type);					\
> > @@ -31,6 +31,8 @@ long copy_from_kernel_nofault(void *dst, const void *src, size_t size)
> >  	if (!copy_from_kernel_nofault_allowed(src, size))
> >  		return -ERANGE;
> >
> > +	/* Make sure uninitialized kernel memory isn't copied. */
> > +	kmsan_check_memory(src, size);
> >  	pagefault_disable();
> >  	if (!(align & 7))
> >  		copy_from_kernel_nofault_loop(dst, src, size, u64, Efault);
> > @@ -49,7 +51,7 @@ EXPORT_SYMBOL_GPL(copy_from_kernel_nofault);
> >
> >  #define copy_to_kernel_nofault_loop(dst, src, len, type, err_label)	\
> >  	while (len >= sizeof(type)) {					\
> > -		__put_kernel_nofault(dst, src, type, err_label);		\
> > +		__put_kernel_nofault(dst, src, type, err_label);	\
> >  		dst += sizeof(type);					\
> >  		src += sizeof(type);					\
> >  		len -= sizeof(type);					\
> > @@ -62,6 +64,7 @@ long copy_to_kernel_nofault(void *dst, const void *src, size_t size)
> >  	if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
> >  		align = (unsigned long)dst | (unsigned long)src;
> >
> > +	instrument_write(dst, size);
> >  	pagefault_disable();
>
> So this will check the whole range before the access. But if the copy
> aborts because of a fault, then we may still end up with false
> positives.
>
> Why not something like the below - normally we check the accesses
> before, but these are debug kernels anyway, so I see no harm in making
> an exception in this case and checking the memory if there was no fault,
> i.e.
> it didn't jump to err_label yet. It's also slower because of
> repeated calls, but these helpers aren't frequently used.
>
> The alternative is to do the sanitizer check after the entire copy if we
> know there was no fault at all. But that may still hide real bugs if
> e.g. it starts copying some partial memory and then accesses an
> unfaulted page.
>
> diff --git a/mm/maccess.c b/mm/maccess.c
> index a91a39a56cfd..3ca55ec63a6a 100644
> --- a/mm/maccess.c
> +++ b/mm/maccess.c
> @@ -13,9 +13,14 @@ bool __weak copy_from_kernel_nofault_allowed(const void *unsafe_src,
>  	return true;
>  }
>
> +/*
> + * The below only uses kmsan_check_memory() to ensure uninitialized kernel
> + * memory isn't leaked.
> + */
>  #define copy_from_kernel_nofault_loop(dst, src, len, type, err_label)	\
>  	while (len >= sizeof(type)) {					\
>  		__get_kernel_nofault(dst, src, type, err_label);	\
> +		kmsan_check_memory(src, sizeof(type));			\
>  		dst += sizeof(type);					\
>  		src += sizeof(type);					\
>  		len -= sizeof(type);					\
> @@ -31,8 +36,6 @@ long copy_from_kernel_nofault(void *dst, const void *src, size_t size)
>  	if (!copy_from_kernel_nofault_allowed(src, size))
>  		return -ERANGE;
>
> -	/* Make sure uninitialized kernel memory isn't copied. */
> -	kmsan_check_memory(src, size);
>  	pagefault_disable();
>  	if (!(align & 7))
>  		copy_from_kernel_nofault_loop(dst, src, size, u64, Efault);
> @@ -52,6 +55,7 @@ EXPORT_SYMBOL_GPL(copy_from_kernel_nofault);
>  #define copy_to_kernel_nofault_loop(dst, src, len, type, err_label)	\
>  	while (len >= sizeof(type)) {					\
>  		__put_kernel_nofault(dst, src, type, err_label);	\
> +		instrument_write(dst, sizeof(type));			\
>  		dst += sizeof(type);					\
>  		src += sizeof(type);					\
>  		len -= sizeof(type);					\
> @@ -64,7 +68,6 @@ long copy_to_kernel_nofault(void *dst, const void *src, size_t size)
>  	if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
>  		align = (unsigned long)dst | (unsigned long)src;
>
> -	instrument_write(dst, size);
>  	pagefault_disable();
>  	if (!(align & 7))
>  		copy_to_kernel_nofault_loop(dst, src, size, u64, Efault);