Date: Tue, 2 May 2023 18:45:44 +0100
From: Lorenzo Stoakes
To: David Hildenbrand
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton,
 Jason Gunthorpe, Jens Axboe, Matthew Wilcox, Dennis Dalessandro,
 Leon Romanovsky, Christian Benvenuti, Nelson Escobar, Bernard Metzler,
 Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland,
 Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter,
 Bjorn Topel, Magnus Karlsson, Maciej Fijalkowski, Jonathan Lemon,
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Christian Brauner , Richard Cochran , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , linux-fsdevel@vger.kernel.org, linux-perf-users@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, Oleg Nesterov , Jason Gunthorpe , John Hubbard , Jan Kara , "Kirill A . Shutemov" , Pavel Begunkov , Mika Penttila , Dave Chinner , Theodore Ts'o , Peter Xu , Matthew Rosato , "Paul E . McKenney" , Christian Borntraeger , Mike Rapoport Subject: Re: [PATCH v7 3/3] mm/gup: disallow FOLL_LONGTERM GUP-fast writing to file-backed mappings Message-ID: <88fcd103-7302-4838-a730-f7e0f189cfe7@lucifer.local> References: <1691115d-dba4-636b-d736-6a20359a67c3@redhat.com> <392debc7-2de8-440e-8b26-20f2d42cdf8d@lucifer.local> <6f17af6b-0925-12bd-5041-14462dab2768@redhat.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <6f17af6b-0925-12bd-5041-14462dab2768@redhat.com> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Tue, May 02, 2023 at 07:38:27PM +0200, David Hildenbrand wrote: > On 02.05.23 19:31, Lorenzo Stoakes wrote: > > On Tue, May 02, 2023 at 07:13:49PM +0200, David Hildenbrand wrote: > > > [...] > > > > > > > +{ > > > > + struct address_space *mapping; > > > > + > > > > + /* > > > > + * GUP-fast disables IRQs - this prevents IPIs from causing page tables > > > > + * to disappear from under us, as well as preventing RCU grace periods > > > > + * from making progress (i.e. implying rcu_read_lock()). > > > > + * > > > > + * This means we can rely on the folio remaining stable for all > > > > + * architectures, both those that set CONFIG_MMU_GATHER_RCU_TABLE_FREE > > > > + * and those that do not. > > > > + * > > > > + * We get the added benefit that given inodes, and thus address_space, > > > > + * objects are RCU freed, we can rely on the mapping remaining stable > > > > + * here with no risk of a truncation or similar race. > > > > + */ > > > > + lockdep_assert_irqs_disabled(); > > > > + > > > > + /* > > > > + * If no mapping can be found, this implies an anonymous or otherwise > > > > + * non-file backed folio so in this instance we permit the pin. > > > > + * > > > > + * shmem and hugetlb mappings do not require dirty-tracking so we > > > > + * explicitly whitelist these. > > > > + * > > > > + * Other non dirty-tracked folios will be picked up on the slow path. > > > > + */ > > > > + mapping = folio_mapping(folio); > > > > + return !mapping || shmem_mapping(mapping) || folio_test_hugetlb(folio); > > > > > > "Folios in the swap cache return the swap mapping" -- you might disallow > > > pinning anonymous pages that are in the swap cache. > > > > > > I recall that there are corner cases where we can end up with an anon page > > > that's mapped writable but still in the swap cache ... so you'd fallback to > > > the GUP slow path (acceptable for these corner cases, I guess), however > > > especially the comment is a bit misleading then. > > > > How could that happen? > > > > > > > > So I'd suggest not dropping the folio_test_anon() check, or open-coding it > > > ... which will make this piece of code most certainly easier to get when > > > staring at folio_mapping(). Or to spell it out in the comment (usually I > > > prefer code over comments). 
> > > > +}
> > > > +
> > > >   /**
> > > >    * try_grab_folio() - Attempt to get or pin a folio.
> > > >    * @page:    pointer to page to be grabbed
> > > > @@ -123,6 +170,8 @@ static inline struct folio *try_get_folio(struct page *page, int refs)
> > > >    */
> > > >   struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
> > > >   {
> > > > +	bool is_longterm = flags & FOLL_LONGTERM;
> > > > +
> > > >   	if (unlikely(!(flags & FOLL_PCI_P2PDMA) && is_pci_p2pdma_page(page)))
> > > >   		return NULL;
> > > > @@ -136,8 +185,7 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
> > > >   	 * right zone, so fail and let the caller fall back to the slow
> > > >   	 * path.
> > > >   	 */
> > > > -	if (unlikely((flags & FOLL_LONGTERM) &&
> > > > -		     !is_longterm_pinnable_page(page)))
> > > > +	if (unlikely(is_longterm && !is_longterm_pinnable_page(page)))
> > > >   		return NULL;
> > > >   	/*
> > > > @@ -148,6 +196,16 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
> > > >   	if (!folio)
> > > >   		return NULL;
> > > > +	/*
> > > > +	 * Can this folio be safely pinned? We need to perform this
> > > > +	 * check after the folio is stabilised.
> > > > +	 */
> > > > +	if ((flags & FOLL_WRITE) && is_longterm &&
> > > > +	    !folio_longterm_write_pin_allowed(folio)) {
> > > > +		folio_put_refs(folio, refs);
> > > > +		return NULL;
> > > > +	}
> > >
> > > So we perform this change before validating whether the PTE changed.
> > >
> > > Hmm, naturally, I would have done it afterwards.
> > >
> > > IIRC, without IPI syncs during TLB flush (i.e.,
> > > CONFIG_MMU_GATHER_RCU_TABLE_FREE), there is the possibility that
> > > (1) We look up the pte
> > > (2) The page is unmapped and freed
> > > (3) The page gets reallocated and used
> > > (4) We pin the page
> > > (5) We dereference page->mapping
> >
> > But we have an implied RCU lock from disabled IRQs right? Unless that
> > CONFIG option does something odd (I've not really dug into its
> > behaviour). It feels like that would break GUP-fast as a whole.
> >
> > > If we then dereference page->mapping and it gets used by whoever
> > > allocated the page for something completely different (not a pointer
> > > to something reasonable), I wonder if we might be in trouble.
> > >
> > > Checking first whether the PTE changed makes sure that what we pinned
> > > and what we're looking at is what we expected.
> > >
> > > ... I can spot that the page_is_secretmem() check is also done before
> > > that. But it at least makes sure that it's still an LRU page before
> > > staring at the mapping (making it a little safer?).
> >
> > As do we :)
> >
> > We also check via try_get_folio() that we aren't subject to a split.
> >
> > > BUT, I keep messing up this part of the story. Maybe it all works as
> > > expected because we will be synchronizing RCU somehow before actually
> > > freeing the page in the !IPI case ... but I think that's only true for
> > > page tables with CONFIG_MMU_GATHER_RCU_TABLE_FREE.
> >
> > My understanding based on what Peter said is that the IRQs being
> > disabled should prevent anything bad from happening here.
>
> ... only if we verified that the PTE didn't change, IIUC. IRQs disabled
> only protect you from the mapping getting freed and reused (because
> mappings are freed via RCU, IIUC).
>
> But as far as I can tell, it doesn't protect you from the page itself
> getting freed and reused, with whoever freed the page using page->mapping
> to store something completely different.

Ack, and we'd not have mapping->inode to save us in the anon case either.

I'd rather be as cautious as we can possibly be, so let's move this to
after the 'PTE is the same' check then - will fix on respin.
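
Concretely, in gup_pte_range() I'd imagine that looking something like the
below (again only an untested sketch, and the gup_huge_*()/gup_hugepte()
paths would need the same treatment):

		folio = try_grab_folio(page, 1, flags);
		if (!folio)
			goto pte_unmap;

		if (unlikely(page_is_secretmem(page))) {
			gup_put_folio(folio, 1, flags);
			goto pte_unmap;
		}

		/*
		 * Only once the PTE is confirmed unchanged do we know the
		 * folio cannot have been freed and reused from under us, so
		 * only now is it safe to stare at folio->mapping.
		 */
		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
			gup_put_folio(folio, 1, flags);
			goto pte_unmap;
		}

		if ((flags & FOLL_WRITE) && (flags & FOLL_LONGTERM) &&
		    !folio_longterm_write_pin_allowed(folio)) {
			gup_put_folio(folio, 1, flags);
			goto pte_unmap;
		}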
> But, again, it's all complicated and confusing to me.

It's just a fiddly, complicated, delicate area, I feel :) hence I
endeavour to take on board the community's views on this series to ensure
we end up with the best possible implementation.

> page_is_secretmem() also doesn't use a READ_ONCE() ...

Perhaps one for a follow-up patch...
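Something along these lines, I'd have thought (a sketch only, keeping the
existing PageCompound()/PageLRU() checks above it in page_is_secretmem()):

	struct address_space *mapping = READ_ONCE(page->mapping);

	/* Tag bits set implies anon, KSM or movable - not secretmem. */
	if ((unsigned long)mapping & PAGE_MAPPING_FLAGS)
		return false;

	if (!mapping)
		return false;

	return mapping->a_ops == &secretmem_aops;

That way we only ever operate on a single snapshot of page->mapping rather
than racing against two separate reads of it.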