Date: Wed, 28 Apr 2021 11:56:38 -0400
From: Peter Xu
To: Hugh Dickins
Cc: Axel Rasmussen, Alexander Viro, Andrea Arcangeli, Andrew Morton,
	Jerome Glisse, Joe Perches, Lokesh Gidra, Mike Kravetz,
	Mike Rapoport, Shaohua Li, Shuah Khan, Stephen Rothwell,
	Wang Qing, linux-api@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
	linux-mm@kvack.org, Brian Geffon, "Dr. David Alan Gilbert",
	Mina Almasry, Oliver Upton
Subject: Re: [PATCH v5 06/10] userfaultfd/shmem: modify shmem_mcopy_atomic_pte to use install_pte()
Message-ID: <20210428155638.GD6584@xz-x1>
References: <20210427225244.4326-1-axelrasmussen@google.com>
 <20210427225244.4326-7-axelrasmussen@google.com>

On Tue, Apr 27, 2021 at 05:58:16PM -0700, Hugh Dickins wrote:
> On Tue, 27 Apr 2021, Axel Rasmussen wrote:
> 
> > In a previous commit, we added the mcopy_atomic_install_pte() helper.
> > This helper does the job of setting up PTEs for an existing page, to map
> > it into a given VMA. It deals with both the anon and shmem cases, as
> > well as the shared and private cases.
> > 
> > In other words, shmem_mcopy_atomic_pte() duplicates a case it already
> > handles. So, expose it, and let shmem_mcopy_atomic_pte() use it
> > directly, to reduce code duplication.
> > 
> > This requires that we refactor shmem_mcopy_atomic_pte() a bit:
> > 
> > Instead of doing accounting (shmem_recalc_inode() et al) part-way
> > through the PTE setup, do it afterward. This frees up
> > mcopy_atomic_install_pte() from having to care about this accounting,
> > and means we don't need to e.g. shmem_uncharge() in the error path.
> > 
> > A side effect is this switches shmem_mcopy_atomic_pte() to use
> > lru_cache_add_inactive_or_unevictable() instead of just lru_cache_add().
> > This wrapper does some extra accounting in an exceptional case, if
> > appropriate, so it's actually the more correct thing to use.
> > 
> > Signed-off-by: Axel Rasmussen
> 
> Not quite.  Two things.
> 
> One, in this version, delete_from_page_cache(page) has vanished
> from the particular error path which needs it.

Agreed.  I also spotted that the set_page_dirty() seems to have been
overlooked when reusing mcopy_atomic_install_pte(), which afaiu should be
moved into the helper.

> 
> Two, and I think this predates your changes (so needs a separate
> fix patch first, for backport to stable?  a user with bad intentions
> might be able to trigger the BUG), in pondering the new error paths
> and that /* don't free the page */ one in particular, isn't it the
> case that the shmem_inode_acct_block() on entry might succeed the
> first time, but atomic copy fail so -ENOENT, then something else
> fill up the tmpfs before the retry comes in, so that retry then
> fail with -ENOMEM, and hit the BUG_ON(page) in __mcopy_atomic()?
> 
> (As I understand it, the shmem_inode_unacct_blocks() has to be
> done before returning, because the caller may be unable to retry.)
> 
> What the right fix is rather depends on other uses of __mcopy_atomic():
> if they obviously cannot hit that BUG_ON(page), you may prefer to leave
> it in, and fix it here where shmem_inode_acct_block() fails.  Or you may
> prefer instead to delete that "else BUG_ON(page);" - looks as if that
> would end up doing the right thing.  Peter may have a preference.

To me, the BUG_ON(page) was meant to guarantee that mfill_atomic_pte()
has properly consumed the page whenever possible.  Removing the BUG_ON()
already looks good; it would just stop covering cases such as ret==0.
So maybe it is slightly better to release the page when
shmem_inode_acct_block() fails (so as to still keep some guard on the
page)?

Thanks,

-- 
Peter Xu