From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 2 Nov 2022 15:02:35 -0400
From: Peter Xu
To: Matthew Wilcox
Cc: "Vishal Moola (Oracle)", linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, akpm@linux-foundation.org, Hugh Dickins,
 Axel Rasmussen
Subject: Re: [PATCH 3/5] userfualtfd: Replace lru_cache functions with
 folio_add functions
References: <20221101175326.13265-1-vishal.moola@gmail.com>
 <20221101175326.13265-4-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="XtXX9UT9oQ4Z3Adt"
Content-Disposition: inline
--XtXX9UT9oQ4Z3Adt
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline

On Tue, Nov 01, 2022 at 06:31:26PM +0000, Matthew Wilcox wrote:
> On Tue, Nov 01, 2022 at 10:53:24AM -0700, Vishal Moola (Oracle) wrote:
> > Replaces lru_cache_add() and lru_cache_add_inactive_or_unevictable()
> > with folio_add_lru() and folio_add_lru_vma(). This is in preparation for
> > the removal of lru_cache_add().
>
> Ummmmm.  Reviewing this patch reveals a bug (not introduced by your
> patch).  Look:
>
> mfill_atomic_install_pte:
> 	bool page_in_cache = page->mapping;
>
> mcontinue_atomic_pte:
> 	ret = shmem_get_folio(inode, pgoff, &folio, SGP_NOALLOC);
> ...
> 	page = folio_file_page(folio, pgoff);
> 	ret = mfill_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
> 				       page, false, wp_copy);
>
> That says pretty plainly that mfill_atomic_install_pte() can be passed
> a tail page from shmem, and if it is ...
>
> 	if (page_in_cache) {
> 		...
> 	} else {
> 		page_add_new_anon_rmap(page, dst_vma, dst_addr);
> 		lru_cache_add_inactive_or_unevictable(page, dst_vma);
> 	}
>
> it'll get put on the rmap as an anon page!

Hmm yeah.. thanks Matthew!  Does the patch attached look reasonable to
you?  Copying Axel too.
>
> > Signed-off-by: Vishal Moola (Oracle)
> > ---
> >  mm/userfaultfd.c | 6 ++++--
> >  1 file changed, 4 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> > index e24e8a47ce8a..2560973b00d8 100644
> > --- a/mm/userfaultfd.c
> > +++ b/mm/userfaultfd.c
> > @@ -66,6 +66,7 @@ int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
> >  	bool vm_shared = dst_vma->vm_flags & VM_SHARED;
> >  	bool page_in_cache = page->mapping;
> >  	spinlock_t *ptl;
> > +	struct folio *folio;
> >  	struct inode *inode;
> >  	pgoff_t offset, max_off;
> >
> > @@ -113,14 +114,15 @@ int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
> >  	if (!pte_none_mostly(*dst_pte))
> >  		goto out_unlock;
> >
> > +	folio = page_folio(page);
> >  	if (page_in_cache) {
> >  		/* Usually, cache pages are already added to LRU */
> >  		if (newly_allocated)
> > -			lru_cache_add(page);
> > +			folio_add_lru(folio);
> >  		page_add_file_rmap(page, dst_vma, false);
> >  	} else {
> >  		page_add_new_anon_rmap(page, dst_vma, dst_addr);
> > -		lru_cache_add_inactive_or_unevictable(page, dst_vma);
> > +		folio_add_lru_vma(folio, dst_vma);
> >  	}
> >
> >  	/*
> > -- 
> > 2.38.1
> >
>

-- 
Peter Xu

--XtXX9UT9oQ4Z3Adt
Content-Type: text/plain; charset=utf-8
Content-Disposition: attachment;
	filename="0001-mm-shmem-Use-page_mapping-to-detect-page-cache-for-u.patch"

>From 4eea0908b4890745bedd931283c1af91f509d039 Mon Sep 17 00:00:00 2001
From: Peter Xu
Date: Wed, 2 Nov 2022 14:41:52 -0400
Subject: [PATCH] mm/shmem: Use page_mapping() to detect page cache for uffd
 continue
Content-type: text/plain

mfill_atomic_install_pte() checks page->mapping to detect whether one
page is used in the page cache.  However as pointed out by Matthew, the
page can logically be a tail page rather than always the head in the
case of uffd minor mode with UFFDIO_CONTINUE.  It means we could wrongly
install one pte with shmem thp tail page assuming it's an anonymous
page.
It's not that clear even for anonymous page, since normally anonymous
pages also have page->mapping being setup with the anon vma.  It's safe
here only because the only such caller to mfill_atomic_install_pte() is
always passing in a newly allocated page (mcopy_atomic_pte()), whose
page->mapping is not yet setup.  However that's not extremely obvious
either.

For either of above, use page_mapping() instead.

And this should be stable material.

Cc: Andrea Arcangeli
Cc: Hugh Dickins
Cc: Axel Rasmussen
Cc: stable@vger.kernel.org
Reported-by: Matthew Wilcox
Signed-off-by: Peter Xu
---
 mm/userfaultfd.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 3d0fef3980b3..650ab6cfd5f4 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -64,7 +64,7 @@ int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
 	pte_t _dst_pte, *dst_pte;
 	bool writable = dst_vma->vm_flags & VM_WRITE;
 	bool vm_shared = dst_vma->vm_flags & VM_SHARED;
-	bool page_in_cache = page->mapping;
+	bool page_in_cache = page_mapping(page);
 	spinlock_t *ptl;
 	struct inode *inode;
 	pgoff_t offset, max_off;
-- 
2.37.3

--XtXX9UT9oQ4Z3Adt--