Date: Tue, 2 May 2023 12:28:45 +0100
From: Lorenzo Stoakes
To: Peter Zijlstra
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton,
    Jason Gunthorpe, Jens Axboe, Matthew Wilcox, Dennis Dalessandro,
    Leon Romanovsky, Christian Benvenuti, Nelson Escobar, Bernard Metzler,
    Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
    Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter, Bjorn Topel,
    Magnus Karlsson, Maciej Fijalkowski, Jonathan Lemon, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, Christian Brauner,
    Richard Cochran, Alexei Starovoitov, Daniel Borkmann,
    Jesper Dangaard Brouer, John Fastabend, linux-fsdevel@vger.kernel.org,
    linux-perf-users@vger.kernel.org, netdev@vger.kernel.org,
    bpf@vger.kernel.org, Oleg Nesterov, Jason Gunthorpe, John Hubbard,
    Jan Kara, "Kirill A. Shutemov", Pavel Begunkov, Mika Penttila,
    David Hildenbrand, Dave Chinner, Theodore Ts'o, Peter Xu
Subject: Re: [PATCH v6 3/3] mm/gup: disallow FOLL_LONGTERM GUP-fast writing to file-backed mappings
Message-ID: <6edae55c-692e-4f6a-968a-fe6f860b2893@lucifer.local>
References: <20230502111334.GP1597476@hirez.programming.kicks-ass.net>

On Tue, May 02, 2023 at 12:25:54PM +0100, Lorenzo Stoakes wrote:
> On Tue, May 02, 2023 at 01:13:34PM +0200, Peter Zijlstra wrote:
> > On Tue, May 02, 2023 at 12:11:49AM +0100, Lorenzo Stoakes wrote:
> > > @@ -95,6 +96,77 @@ static inline struct folio *try_get_folio(struct page *page, int refs)
> > >  	return folio;
> > >  }
> > >
> > > +#ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE
> > > +static bool stabilise_mapping_rcu(struct folio *folio)
> > > +{
> > > +	struct address_space *mapping = READ_ONCE(folio->mapping);
> > > +
> > > +	rcu_read_lock();
> > > +
> > > +	return mapping == READ_ONCE(folio->mapping);
> >
> > This doesn't make sense; why bother reading the same thing twice?
>
> The intent is to see whether the folio->mapping has been truncated from
> underneath us, as per the futex code that Kirill referred to which does
> something similar [1].
>
> > Who cares if the thing changes from before; what you care about is that
> > the value you see has stable storage, this doesn't help with that.
> >
> > > +}
> > > +
> > > +static void unlock_rcu(void)
> > > +{
> > > +	rcu_read_unlock();
> > > +}
> > > +#else
> > > +static bool stabilise_mapping_rcu(struct folio *)
> > > +{
> > > +	return true;
> > > +}
> > > +
> > > +static void unlock_rcu(void)
> > > +{
> > > +}
> > > +#endif
> >
> > Anyway, this all can go away.
> > RCU can't progress while you have
> > interrupts disabled anyway.
>
> There seems to be other code in the kernel that assumes that this is not
> the case, i.e. the futex code, though not sure if that's being run with
> IRQs disabled... if not and it's absolutely certain that we need no special
> handling for the RCU case, then happy days and more than glad to remove
> this bit.
>
> I'm far from an expert on RCU (I need to gain a better understanding of it)
> so I'm deferring how best to proceed on _this part_ to the community.
>
> > > +/*
> > > + * Used in the GUP-fast path to determine whether a FOLL_PIN | FOLL_LONGTERM |
> > > + * FOLL_WRITE pin is permitted for a specific folio.
> > > + *
> > > + * This assumes the folio is stable and pinned.
> > > + *
> > > + * Writing to pinned file-backed dirty tracked folios is inherently problematic
> > > + * (see comment describing the writeable_file_mapping_allowed() function). We
> > > + * therefore try to avoid the most egregious case of a long-term mapping doing
> > > + * so.
> > > + *
> > > + * This function cannot be as thorough as that one as the VMA is not available
> > > + * in the fast path, so instead we whitelist known good cases.
> > > + *
> > > + * The folio is stable, but the mapping might not be. When truncating for
> > > + * instance, a zap is performed which triggers TLB shootdown. IRQs are disabled
> > > + * so we are safe from an IPI, but some architectures use an RCU lock for this
> > > + * operation, so we acquire an RCU lock to ensure the mapping is stable.
> > > + */
> > > +static bool folio_longterm_write_pin_allowed(struct folio *folio)
> > > +{
> > > +	bool ret;
> > > +
> > > +	/* hugetlb mappings do not require dirty tracking. */
> > > +	if (folio_test_hugetlb(folio))
> > > +		return true;
> > > +
> >
> > This:
> >
> > > +	if (stabilise_mapping_rcu(folio)) {
> > > +		struct address_space *mapping = folio_mapping(folio);
> >
> > And this is 3rd read of folio->mapping, just for giggles?
>
> I like to giggle :)
>
> Actually this is to handle the various cases in which the mapping might not
> be what we want (i.e. have PAGE_MAPPING_FLAGS set) which doesn't appear to
> have a helper exposed for a check. Given previous review about duplication
> I felt best to reuse this even though it does access again... yes I felt
> weird about doing that.
>
> > > +
> > > +		/*
> > > +		 * Neither anonymous nor shmem-backed folios require
> > > +		 * dirty tracking.
> > > +		 */
> > > +		ret = folio_test_anon(folio) ||
> > > +			(mapping && shmem_mapping(mapping));
> > > +	} else {
> > > +		/* If the mapping is unstable, fallback to the slow path. */
> > > +		ret = false;
> > > +	}
> > > +
> > > +	unlock_rcu();
> > > +
> > > +	return ret;
> >
> > then becomes:
> >
> > 	if (folio_test_anon(folio))
> > 		return true;
>
> This relies on the mapping so belongs below the lockdep assert imo.
>
> > 	/*
> > 	 * Having IRQs disabled (as per GUP-fast) also inhibits RCU
> > 	 * grace periods from making progress, IOW. they imply
> > 	 * rcu_read_lock().
> > 	 */
> > 	lockdep_assert_irqs_disabled();
> >
> > 	/*
> > 	 * Inodes and thus address_space are RCU freed and thus safe to
> > 	 * access at this point.
> > 	 */
> > 	mapping = folio_mapping(folio);
> > 	if (mapping && shmem_mapping(mapping))
> > 		return true;
> >
> > 	return false;
> >
> > > +}
>
> I'm more than happy to do this (I'd rather drop the RCU bits if possible)
> but need to be sure it's safe.

Sorry forgot to include the [1]

[1]: https://lore.kernel.org/all/20230428234332.2vhprztuotlqir4x@box.shutemov.name/