Date: Mon, 21 Dec 2020 00:29:53 -0700
From: Yu Zhao
To: Nadav Amit
Cc: Andrea Arcangeli, linux-mm, Peter Xu, lkml, Pavel Emelyanov, Mike Kravetz, Mike Rapoport, stable@vger.kernel.org, minchan@kernel.org, Andy Lutomirski, Will Deacon, Peter Zijlstra
Subject: Re: [PATCH] mm/userfaultfd: fix memory corruption due to writeprotect
In-Reply-To: <7EB8560C-620A-433D-933C-996D7E4F2CA1@gmail.com>

On Sun, Dec 20, 2020 at 09:39:06PM -0800, Nadav Amit wrote:
> > On Dec 20, 2020, at 9:25 PM, Nadav Amit wrote:
> > 
> >> On Dec 20, 2020, at 9:12 PM, Yu Zhao wrote:
> >> 
> >> On Sun, Dec 20, 2020 at 08:36:15PM -0800, Nadav Amit wrote:
> >>>> On Dec 19, 2020, at 6:20 PM, Andrea Arcangeli wrote:
> >>>> 
> >>>> On Sat, Dec 19, 2020 at 02:06:02PM -0800, Nadav Amit wrote:
> >>>>>> On Dec 19, 2020, at 1:34 PM, Nadav Amit wrote:
> >>>>>> 
> >>>>>> [ cc'ing some more people who have experience with similar problems ]
> >>>>>> 
> >>>>>>> On Dec 19, 2020, at 11:15 AM, Andrea Arcangeli wrote:
> >>>>>>> 
> >>>>>>> Hello,
> >>>>>>> 
> >>>>>>> On Fri, Dec 18, 2020 at 08:30:06PM -0800, Nadav Amit wrote:
> >>>>>>>> Analyzing this problem indicates that there is a real bug since
> >>>>>>>> mmap_lock is only taken for read in mwriteprotect_range().
> >>>>>>>> This might
> >>>>>>> 
> >>>>>>> Never having to take the mmap_sem for writing, and in turn never
> >>>>>>> blocking, in order to modify the pagetables is quite an important
> >>>>>>> feature in uffd that justifies uffd instead of mprotect. It's not the
> >>>>>>> most important reason to use uffd, but it'd be nice if that guarantee
> >>>>>>> would remain also for the UFFDIO_WRITEPROTECT API, not only for the
> >>>>>>> other pgtable manipulations.
> >>>>>>> 
> >>>>>>>> Consider the following scenario with 3 CPUs (cpu2 is not shown):
> >>>>>>>> 
> >>>>>>>> cpu0                            cpu1
> >>>>>>>> ----                            ----
> >>>>>>>> userfaultfd_writeprotect()
> >>>>>>>> [ write-protecting ]
> >>>>>>>> mwriteprotect_range()
> >>>>>>>> mmap_read_lock()
> >>>>>>>> change_protection()
> >>>>>>>> change_protection_range()
> >>>>>>>> ...
> >>>>>>>> change_pte_range()
> >>>>>>>> [ defer TLB flushes ]
> >>>>>>>>                                 userfaultfd_writeprotect()
> >>>>>>>>                                 mmap_read_lock()
> >>>>>>>>                                 change_protection()
> >>>>>>>>                                 [ write-unprotect ]
> >>>>>>>>                                 ...
> >>>>>>>>                                 [ unprotect PTE logically ]
> >>>> 
> >>>> Is the uffd selftest failing with upstream or after your kernel
> >>>> modification that removes the tlb flush from unprotect?
> >>> 
> >>> Please see my reply to Yu. I was wrong in this analysis, and I sent a
> >>> correction to it. The problem actually happens when
> >>> userfaultfd_writeprotect() unprotects the memory.
> >>> 
> >>>> 	} else if (uffd_wp_resolve) {
> >>>> 		/*
> >>>> 		 * Leave the write bit to be handled
> >>>> 		 * by PF interrupt handler, then
> >>>> 		 * things like COW could be properly
> >>>> 		 * handled.
> >>>> 		 */
> >>>> 		ptent = pte_clear_uffd_wp(ptent);
> >>>> 	}
> >>>> 
> >>>> Upstream this will still do pages++, there's a tlb flush before
> >>>> change_protection can return here, so I'm confused.
> >>> 
> >>> You are correct. The problem I encountered with userfaultfd_writeprotect()
> >>> is during the unprotecting path.
> >>> 
> >>> Having said that, I think that there are additional scenarios that are
> >>> problematic. Consider for instance madvise_dontneed_free() racing
> >>> with userfaultfd_writeprotect(). If madvise_dontneed_free() completed
> >>> removing the PTEs, but still did not flush, change_pte_range() will see
> >>> non-present PTEs, say a flush is not needed, and then
> >>> change_protection_range() will not do a flush, and return while
> >>> the memory is still not protected.
> >>> 
> >>>> I don't share your concern. What matters is the PT lock, so it
> >>>> wouldn't be one per pte, but at least an order 9 higher, but let's
> >>>> assume one flush per pte.
> >>>> 
> >>>> It's either a huge mapping, and then it's likely running without other
> >>>> tlb flushing in background (postcopy snapshotting), or it's a granular
> >>>> protect with distributed shared memory, in which case the number of
> >>>> changed ptes or huge_pmds tends to be always 1 anyway. So it doesn't
> >>>> matter if it's deferred.
> >>>> 
> >>>> I agree it may require a larger tlb flush review, not just mprotect,
> >>>> though, but it didn't sound particularly complex. Note that
> >>>> UFFDIO_WRITEPROTECT is still relatively recent, so backports won't
> >>>> risk rejects so heavy as to require a band-aid.
> >>>> 
> >>>> My second thought is: I don't see exactly the bug, and it's not clear
> >>>> if it's upstream reproducing this, but assuming this happens on
> >>>> upstream, even ignoring everything else happening in the tlb flush
> >>>> code, this sounds purely introduced by userfaultfd_writeprotect()
> >>>> vs userfaultfd_writeprotect() (since it's the only place changing
> >>>> protection with mmap_sem for reading; note we already unmap and
> >>>> flush tlb with mmap_sem for reading in MADV_DONTNEED/MADV_FREE, which
> >>>> clears the dirty bit etc.). Flushing tlbs with mmap_sem for reading is
> >>>> nothing new; the only new thing is the flush after wrprotect.
> >>>> 
> >>>> So instead of altering any tlb flush code, would it be possible to
> >>>> just stick to mmap_lock for reading and then serialize
> >>>> userfaultfd_writeprotect() against itself with an additional
> >>>> mm->mmap_wprotect_lock mutex? That'd be a very local change to
> >>>> userfaultfd too.
> >>>> 
> >>>> Can you check whether the rule "mmap_sem for reading plus a new
> >>>> mm->mmap_wprotect_lock mutex, or the mmap_sem for writing, whenever
> >>>> wrprotecting ptes" is enough to comply with the current tlb flushing
> >>>> code, so as not to require any change non-local to uffd (modulo the
> >>>> additional mutex)?
> >>> 
> >>> So I did not fully understand your solution, but I took your point and
> >>> looked again at similar cases. To be fair, despite my experience with these
> >>> deferred TLB flushes as well as Peter Zijlstra's great documentation, I keep
> >>> getting confused (e.g., can't we somehow combine tlb_flush_batched and
> >>> tlb_flush_pending?)
> >>> 
> >>> As I said before, my initial scenario was wrong, and the problem is not
> >>> userfaultfd_writeprotect() racing against itself. That one actually seems
> >>> benign to me.
> >>> 
> >>> Nevertheless, I do think there is a problem in change_protection_range().
> >>> Specifically, see the aforementioned scenario of a race between
> >>> madvise_dontneed_free() and userfaultfd_writeprotect().
> >>> 
> >>> So an immediate solution for such a case can be achieved without holding
> >>> mmap_lock for write, by just adding a test for mm_tlb_flush_nested() in
> >>> change_protection_range():
> >>> 
> >>> 	/*
> >>> 	 * Only flush the TLB if we actually modified any entries
> >>> 	 * or if there are pending TLB flushes.
> >>> 	 */
> >>> 	if (pages || mm_tlb_flush_nested(mm))
> >>> 		flush_tlb_range(vma, start, end);
> >>> 
> >>> To be fair, I am not confident I did not miss other problematic cases.
> >>> 
> >>> But for now, this change, together with the preserve_write change, should
> >>> address the immediate issues. Let me know whether you agree.
> >> 
> >> The problem starts in UFD, and is related to tlb flush. But its focal
> >> point is in do_wp_page(). I'd suggest you look at that function and see
> >> what it does before and after the commits I listed, with the following
> >> conditions
> >> 
> >> PageAnon(), !PageKsm(), !PageSwapCache(), !pte_write(),
> >> page_mapcount() = 1, page_count() > 1 or PageLocked()
> >> 
> >> when it runs against the two UFD examples you listed.
> > 
> > Thanks for your quick response. I wanted to write a lengthy response, but I
> > do want to sleep on it. I presume page_count() > 1, since I have multiple
> > concurrent page faults on the same address in my test, but I will check.
> > 
> > Anyhow, before I give a further response, I was just wondering - since you
> > recently dealt with a soft-dirty issue, as I remember - isn't this problematic
> > COW-for-non-COW-page scenario, in which the copy races with writes to a page
> > which is protected in the PTE but not in all TLBs, also problematic for
> > soft-dirty clearing?

Yes, it has the same problem.

> Stupid me. You hold mmap_lock for write, so no, it cannot happen when clearing
> soft-dirty.

mmap_write_lock is only held temporarily, to update vm_page_prot for write
notifications. It doesn't help in the context of this problem.