Date: Tue, 2 May 2023 06:30:21 -0700
From: "Paul E. McKenney" <paulmck@kernel.org>
To: Peter Zijlstra
Cc: Lorenzo Stoakes, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Andrew Morton, Jason Gunthorpe, Jens Axboe, Matthew Wilcox,
	Dennis Dalessandro, Leon Romanovsky, Christian Benvenuti,
	Nelson Escobar, Bernard Metzler, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter, Bjorn Topel,
	Magnus Karlsson, Maciej Fijalkowski, Jonathan Lemon,
	"David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Christian Brauner, Richard Cochran, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	linux-fsdevel@vger.kernel.org, linux-perf-users@vger.kernel.org,
	netdev@vger.kernel.org, bpf@vger.kernel.org, Oleg Nesterov,
	John Hubbard, Jan Kara, "Kirill A. Shutemov", Pavel Begunkov,
	Mika Penttila, David Hildenbrand, Dave Chinner, Theodore Ts'o,
	Peter Xu
Subject: Re: [PATCH v6 3/3] mm/gup: disallow FOLL_LONGTERM GUP-fast writing to file-backed mappings
Reply-To: paulmck@kernel.org
References: <20230502111334.GP1597476@hirez.programming.kicks-ass.net>
 <20230502120810.GD1597538@hirez.programming.kicks-ass.net>
In-Reply-To: <20230502120810.GD1597538@hirez.programming.kicks-ass.net>

On Tue, May 02, 2023 at 02:08:10PM +0200, Peter Zijlstra wrote:
> On Tue, May 02, 2023 at 12:25:54PM +0100, Lorenzo Stoakes wrote:
> > On Tue, May 02, 2023 at 01:13:34PM +0200, Peter Zijlstra wrote:
> > > On Tue, May 02, 2023 at 12:11:49AM +0100, Lorenzo Stoakes wrote:
> > > > @@ -95,6 +96,77 @@ static inline struct folio *try_get_folio(struct page *page, int refs)
> > > >  	return folio;
> > > >  }
> > > >
> > > > +#ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE
> > > > +static bool stabilise_mapping_rcu(struct folio *folio)
> > > > +{
> > > > +	struct address_space *mapping = READ_ONCE(folio->mapping);
> > > > +
> > > > +	rcu_read_lock();
> > > > +
> > > > +	return mapping == READ_ONCE(folio->mapping);
> > >
> > > This doesn't make sense; why bother reading the same thing twice?
> >
> > The intent is to see whether the folio->mapping has been truncated from
> > underneath us, as per the futex code that Kirill referred to which does
> > something similar [1].
>
> Yeah, but per that 3rd load you got nothing here. Also that futex code
> did the early load to deal with the !mapping case, but you're not doing
> that.
>
> > > Who cares if the thing changes from before; what you care about is that
> > > the value you see has stable storage, this doesn't help with that.
> > >
> > > > +}
> > > > +
> > > > +static void unlock_rcu(void)
> > > > +{
> > > > +	rcu_read_unlock();
> > > > +}
> > > > +#else
> > > > +static bool stabilise_mapping_rcu(struct folio *)
> > > > +{
> > > > +	return true;
> > > > +}
> > > > +
> > > > +static void unlock_rcu(void)
> > > > +{
> > > > +}
> > > > +#endif
> > >
> > > Anyway, this all can go away. RCU can't progress while you have
> > > interrupts disabled anyway.
> >
> > There seems to be other code in the kernel that assumes that this is not
> > the case,
>
> Yeah, so Paul went back on forth on that a bit. It used to be true in
> the good old days when everything was simple. Then Paul made things
> complicated by separating out sched-RCU bh-RCU and 'regular' RCU
> flavours.

Almost.  ;-)

The way I made things complicated was instead by creating preemptible
RCU for the real-time effort.  The original non-preemptible RCU was
still required for a number of use cases (for example, waiting for
hardware interrupt handlers), so it had to stay.  Separately,
network-based DoS attacks necessitated adding RCU bh.
> At that point disabling IRQs would only (officially) inhibit sched and
> bh RCU flavours, but not the regular RCU.

Quite right.

> But then some years ago Linus convinced Paul that having all these
> separate RCU flavours with separate QS rules was a big pain in the
> backside and Paul munged them all together again.

What happened was that someone used one flavor of RCU reader and a
different flavor of RCU updater, creating an exploitable bug.

http://www2.rdrop.com/~paulmck/RCU/cve.2019.01.23e.pdf
https://www.youtube.com/watch?v=hZX1aokdNiY

And Linus asked that this bug be ruled out, so...

> So now, anything that inhibits any of the RCU flavours inhibits them
> all. So disabling IRQs is sufficient.

...for v4.20 and later, exactly.

							Thanx, Paul

> > i.e. the futex code, though not sure if that's being run with
> > IRQs disabled...
>
> That futex code runs in preemptible context, per the lock_page() that
> can sleep etc.. :-)
>
> > > > +/*
> > > > + * Used in the GUP-fast path to determine whether a FOLL_PIN | FOLL_LONGTERM |
> > > > + * FOLL_WRITE pin is permitted for a specific folio.
> > > > + *
> > > > + * This assumes the folio is stable and pinned.
> > > > + *
> > > > + * Writing to pinned file-backed dirty tracked folios is inherently problematic
> > > > + * (see comment describing the writeable_file_mapping_allowed() function). We
> > > > + * therefore try to avoid the most egregious case of a long-term mapping doing
> > > > + * so.
> > > > + *
> > > > + * This function cannot be as thorough as that one as the VMA is not available
> > > > + * in the fast path, so instead we whitelist known good cases.
> > > > + *
> > > > + * The folio is stable, but the mapping might not be. When truncating for
> > > > + * instance, a zap is performed which triggers TLB shootdown. IRQs are disabled
> > > > + * so we are safe from an IPI, but some architectures use an RCU lock for this
> > > > + * operation, so we acquire an RCU lock to ensure the mapping is stable.
> > > > + */
> > > > +static bool folio_longterm_write_pin_allowed(struct folio *folio)
> > > > +{
> > > > +	bool ret;
> > > > +
> > > > +	/* hugetlb mappings do not require dirty tracking. */
> > > > +	if (folio_test_hugetlb(folio))
> > > > +		return true;
> > > > +
> > >
> > > This:
> > >
> > > > +	if (stabilise_mapping_rcu(folio)) {
> > > > +		struct address_space *mapping = folio_mapping(folio);
> > >
> > > And this is 3rd read of folio->mapping, just for giggles?
> >
> > I like to giggle :)
> >
> > Actually this is to handle the various cases in which the mapping might not
> > be what we want (i.e. have PAGE_MAPPING_FLAGS set) which doesn't appear to
> > have a helper exposed for a check. Given previous review about duplication
> > I felt best to reuse this even though it does access again... yes I felt
> > weird about doing that.
>
> Right, I had a peek inside folio_mapping(), but the point is that this
> 3rd load might see yet *another* value of mapping from the prior two
> loads, rendering them somewhat worthless.
>
> > > > +
> > > > +		/*
> > > > +		 * Neither anonymous nor shmem-backed folios require
> > > > +		 * dirty tracking.
> > > > +		 */
> > > > +		ret = folio_test_anon(folio) ||
> > > > +			(mapping && shmem_mapping(mapping));
> > > > +	} else {
> > > > +		/* If the mapping is unstable, fallback to the slow path. */
> > > > +		ret = false;
> > > > +	}
> > > > +
> > > > +	unlock_rcu();
> > > > +
> > > > +	return ret;
> > >
> > > then becomes:
> > >
> > >
> > > 	if (folio_test_anon(folio))
> > > 		return true;
> >
> > This relies on the mapping so belongs below the lockdep assert imo.
>
> Oh, right you are.
>
> > >
> > > 	/*
> > > 	 * Having IRQs disabled (as per GUP-fast) also inhibits RCU
> > > 	 * grace periods from making progress, IOW. they imply
> > > 	 * rcu_read_lock().
> > > 	 */
> > > 	lockdep_assert_irqs_disabled();
> > >
> > > 	/*
> > > 	 * Inodes and thus address_space are RCU freed and thus safe to
> > > 	 * access at this point.
> > > 	 */
> > > 	mapping = folio_mapping(folio);
> > > 	if (mapping && shmem_mapping(mapping))
> > > 		return true;
> > >
> > > 	return false;
> > >
> > > > +}
> >
> > I'm more than happy to do this (I'd rather drop the RCU bits if possible)
> > but need to be sure it's safe.
>
> GUP-fast as a whole relies on it :-)
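[Editor's note: for readability, the fragments of Peter's suggested simplification scattered through the thread, assembled into a single function. This is a sketch only, not a tested patch: it assumes kernel context (mm/gup.c), keeps the hugetlb check from Lorenzo's original version, and places the anon check below the lockdep assert as Lorenzo and Peter agreed above.]

	/*
	 * Sketch of the proposed simplification: no RCU lock/unlock
	 * helpers; GUP-fast runs with IRQs disabled, which (in v4.20
	 * and later) already blocks RCU grace periods.
	 */
	static bool folio_longterm_write_pin_allowed(struct folio *folio)
	{
		struct address_space *mapping;

		/* hugetlb mappings do not require dirty tracking. */
		if (folio_test_hugetlb(folio))
			return true;

		/*
		 * Having IRQs disabled (as per GUP-fast) also inhibits RCU
		 * grace periods from making progress, i.e. it implies
		 * rcu_read_lock().
		 */
		lockdep_assert_irqs_disabled();

		/* Anonymous folios do not require dirty tracking. */
		if (folio_test_anon(folio))
			return true;

		/*
		 * Inodes and thus address_space are RCU freed and thus
		 * safe to access at this point.
		 */
		mapping = folio_mapping(folio);
		return mapping && shmem_mapping(mapping);
	}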