From: Axel Rasmussen
Date: Tue, 30 Mar 2021 16:30:13 -0700
Subject: Re: [PATCH v3] userfaultfd/shmem: fix MCOPY_ATOMIC_CONTINUE behavior
To: Peter Xu
Cc: Alexander Viro, Andrea Arcangeli, Andrew Morton, Hugh Dickins,
	Jerome Glisse, Joe Perches, Lokesh Gidra, Mike Rapoport, Shaohua Li,
	Shuah Khan, Stephen Rothwell, Wang Qing, LKML,
	linux-fsdevel@vger.kernel.org, Linux MM, linux-kselftest@vger.kernel.org,
	Brian Geffon, Cannon Matthews, "Dr . David Alan Gilbert",
	David Rientjes, Michel Lespinasse, Mina Almasry, Oliver Upton
In-Reply-To: <20210330205519.GK429942@xz-x1>
References: <20210329234131.304999-1-axelrasmussen@google.com> <20210330205519.GK429942@xz-x1>

On Tue, Mar 30, 2021 at 1:55 PM Peter Xu wrote:
>
> On Mon, Mar 29, 2021 at 04:41:31PM -0700, Axel Rasmussen wrote:
> > Previously, we shared too much of the code with COPY and ZEROPAGE, so we
> > manipulated things in various invalid ways:
> >
> > - Previously, we unconditionally called shmem_inode_acct_block. In the
> >   continue case, we're looking up an existing page which would have been
> >   accounted for properly when it was allocated. So doing it twice
> >   results in double-counting, and eventually leaking.
> >
> > - Previously, we made the pte writable whenever the VMA was writable.
> >   However, for continue, consider this case:
> >
> >   1. A tmpfs file was created
> >   2. The non-UFFD-registered side mmap()-s with MAP_SHARED
> >   3. The UFFD-registered side mmap()-s with MAP_PRIVATE
> >
> >   In this case, even though the UFFD-registered VMA may be writable, we
> >   still want CoW behavior. So, check for this case and don't make the
> >   pte writable.
> >
> > - The initial pgoff / max_off check isn't necessary, so we can skip past
> >   it. The second one seems likely to be unnecessary too, but keep it
> >   just in case. Modify both checks to use pgoff, as offset is equivalent
> >   and not needed.
> >
> > - Previously, we unconditionally called ClearPageDirty() in the error
> >   path. In the continue case though, since this is an existing page, it
> >   might have already been dirty before we started touching it. It's very
> >   problematic to clear the bit incorrectly, but not a problem to leave
> >   it - so, just omit the ClearPageDirty() entirely.
> >
> > - Previously, we unconditionally removed the page from the page cache in
> >   the error path. But in the continue case, we didn't add it - it was
> >   already there because the page is present in some second
> >   (non-UFFD-registered) mapping. So, removing it is invalid.
> >
> > Because the error handling issues are easy to exercise in the selftest,
> > make a small modification there to do so.
> >
> > Finally, refactor shmem_mcopy_atomic_pte a bit. By this point, we've
> > added a lot of "if (!is_continue)"-s everywhere.
> > It's cleaner to just check for that mode first thing, and then "goto"
> > down to where the parts we actually want are. This leaves the code in
> > between cleaner.
> >
> > Changes since v2:
> > - Drop the ClearPageDirty() entirely, instead of trying to remember the
> >   old value.
> > - Modify both pgoff / max_off checks to use pgoff. It's equivalent to
> >   offset, but offset wasn't initialized until the first check (which
> >   we're skipping).
> > - Keep the second pgoff / max_off check in the continue case.
> >
> > Changes since v1:
> > - Refactor to skip ahead with goto, instead of adding several more
> >   "if (!is_continue)".
> > - Fix unconditional ClearPageDirty().
> > - Don't pte_mkwrite() when is_continue && !VM_SHARED.
> >
> > Fixes: 00da60b9d0a0 ("userfaultfd: support minor fault handling for shmem")
> > Signed-off-by: Axel Rasmussen
> > ---
> >  mm/shmem.c                               | 60 +++++++++++++-----------
> >  tools/testing/selftests/vm/userfaultfd.c | 12 +++++
> >  2 files changed, 44 insertions(+), 28 deletions(-)
> >
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > index d2e0e81b7d2e..fbcce850a16e 100644
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
> > @@ -2377,18 +2377,22 @@ int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
> >  	struct page *page;
> >  	pte_t _dst_pte, *dst_pte;
> >  	int ret;
> > -	pgoff_t offset, max_off;
> > -
> > -	ret = -ENOMEM;
> > -	if (!shmem_inode_acct_block(inode, 1))
> > -		goto out;
> > +	pgoff_t max_off;
> > +	int writable;
>
> Nit: can be bool.
>
> [...]
>
> > +install_ptes:
> >  	_dst_pte = mk_pte(page, dst_vma->vm_page_prot);
> > -	if (dst_vma->vm_flags & VM_WRITE)
> > +	/* For CONTINUE on a non-shared VMA, don't pte_mkwrite for CoW. */
> > +	writable = is_continue && !(dst_vma->vm_flags & VM_SHARED)
> > +			? 0
> > +			: dst_vma->vm_flags & VM_WRITE;
>
> Nit: this code is slightly hard to read.. I'd slightly prefer "if
> (is_continue)...". But more below.
>
> > +	if (writable)
> >  		_dst_pte = pte_mkwrite(pte_mkdirty(_dst_pte));
> >  	else {
> >  		/*
> > @@ -2455,7 +2458,7 @@ int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
> >
> >  	ret = -EFAULT;
> >  	max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
> > -	if (unlikely(offset >= max_off))
> > +	if (unlikely(pgoff >= max_off))
> >  		goto out_release_unlock;
> >
> >  	ret = -EEXIST;
> > @@ -2485,13 +2488,14 @@ int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
> >  	return ret;
> >  out_release_unlock:
> >  	pte_unmap_unlock(dst_pte, ptl);
> > -	ClearPageDirty(page);
> > -	delete_from_page_cache(page);
> > +	if (!is_continue)
> > +		delete_from_page_cache(page);
> >  out_release:
> >  	unlock_page(page);
> >  	put_page(page);
> >  out_unacct_blocks:
> > -	shmem_inode_unacct_blocks(inode, 1);
> > +	if (!is_continue)
> > +		shmem_inode_unacct_blocks(inode, 1);
>
> If you see we still have tons of "if (!is_continue)". Those are the places
> error prone.. even if not in this patch, could be in the patch when this
> function got changed again.
>
> Sorry to say this a bit late: how about introduce a helper to install the pte?

No worries. :)

> Pseudo code:
>
> int shmem_install_uffd_pte(..., bool writable)
> {
>     ...
>         _dst_pte = mk_pte(page, dst_vma->vm_page_prot);
>         if (dst_vma->vm_flags & VM_WRITE)
>                 _dst_pte = pte_mkwrite(pte_mkdirty(_dst_pte));
>         else
>                 set_page_dirty(page);
>
>         dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
>         if (!pte_none(*dst_pte)) {
>                 pte_unmap_unlock(dst_pte, ptl);
>                 return -EEXIST;
>         }
>
>         inc_mm_counter(dst_mm, mm_counter_file(page));
>         page_add_file_rmap(page, false);
>         set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
>
>         /* No need to invalidate - it was non-present before */
>         update_mmu_cache(dst_vma, dst_addr, dst_pte);
>         pte_unmap_unlock(dst_pte, ptl);
>         return 0;
> }
>
> Then at the entry of shmem_mcopy_atomic_pte():
>
>         if (is_continue) {
>                 page = find_lock_page(mapping, pgoff);
>                 if (!page)
>                         return -EFAULT;
>                 ret = shmem_install_uffd_pte(...,
>                         is_continue && !(dst_vma->vm_flags & VM_SHARED));
>                 unlock_page(page);
>                 if (ret)
>                         put_page(page);
>                 return ret;
>         }
>
> Do you think this would be cleaner?

Yes, a refactor like that is promising. It's hard to say for certain
without actually looking at the result - I'll spend some time tomorrow on
a few options, and send along the cleanest version I come up with.

Thanks for all the feedback and advice on this feature, Peter!

> --
> Peter Xu
>
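P.S. To make the shape concrete while I try out options, here's a rough,
untested sketch along the lines of the pseudocode above. The helper name,
its exact signature, and the writable condition at the call site are just
my reading of the suggestion, not final code:

/*
 * Sketch only: install an already-present shmem page into the faulting
 * VMA's page table for the CONTINUE case, skipping the allocation and
 * block accounting that the COPY/ZEROPAGE paths need.
 */
static int shmem_install_uffd_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
                                  struct vm_area_struct *dst_vma,
                                  unsigned long dst_addr, struct page *page,
                                  bool writable)
{
        pte_t _dst_pte, *dst_pte;
        spinlock_t *ptl;

        _dst_pte = mk_pte(page, dst_vma->vm_page_prot);
        if (writable)
                _dst_pte = pte_mkwrite(pte_mkdirty(_dst_pte));
        else
                /* Keep the page dirty so it can't be freed from under us. */
                set_page_dirty(page);

        dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
        if (!pte_none(*dst_pte)) {
                pte_unmap_unlock(dst_pte, ptl);
                return -EEXIST;
        }

        inc_mm_counter(dst_mm, mm_counter_file(page));
        page_add_file_rmap(page, false);
        set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);

        /* No need to invalidate - it was non-present before. */
        update_mmu_cache(dst_vma, dst_addr, dst_pte);
        pte_unmap_unlock(dst_pte, ptl);
        return 0;
}

And the caller, early in shmem_mcopy_atomic_pte():

        if (is_continue) {
                ret = -EFAULT;
                page = find_lock_page(mapping, pgoff);
                if (!page)
                        goto out;
                /*
                 * Only a writable, shared VMA gets a writable pte; a
                 * private mapping still needs CoW even if it is writable.
                 */
                ret = shmem_install_uffd_pte(dst_mm, dst_pmd, dst_vma,
                                             dst_addr, page,
                                             (dst_vma->vm_flags & VM_WRITE) &&
                                             (dst_vma->vm_flags & VM_SHARED));
                unlock_page(page);
                if (ret)
                        put_page(page);
                goto out;
        }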