Date: Wed, 14 Apr 2021 14:51:37 -0400
From: Peter Xu
To: Hugh Dickins
Cc: Axel Rasmussen, Alexander Viro, Andrea Arcangeli, Andrew Morton,
	Jerome Glisse, Joe Perches, Lokesh Gidra, Mike Kravetz,
	Mike Rapoport, Shaohua Li, Shuah Khan, Stephen Rothwell,
	Wang Qing, linux-api@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
	linux-mm@kvack.org, Brian Geffon, "Dr. David Alan Gilbert",
	Mina Almasry, Oliver Upton
Subject: Re: [PATCH v2 3/9] userfaultfd/shmem: support minor fault registration for shmem
Message-ID: <20210414185137.GK4440@xz-x1>
References: <20210413051721.2896915-1-axelrasmussen@google.com> <20210413051721.2896915-4-axelrasmussen@google.com>

On Wed, Apr 14, 2021 at 12:36:13AM -0700, Hugh Dickins wrote:
> On Mon, 12 Apr 2021, Axel Rasmussen wrote:
> 
> > This patch allows shmem-backed VMAs to be registered for minor faults.
> > Minor faults are appropriately relayed to userspace in the fault path,
> > for VMAs with the relevant flag.
> > 
> > This commit doesn't hook up the UFFDIO_CONTINUE ioctl for shmem-backed
> > minor faults, though, so userspace doesn't yet have a way to resolve
> > such faults.
> 
> This is a very odd way to divide up the series: an "Intermission"
> half way through the implementation of MINOR/CONTINUE: this 3/9
> makes little sense without the 4/9 to mm/userfaultfd.c which follows.
> 
> But, having said that, I won't object and Peter did not object, and
> I don't know of anyone else looking here: it will only give each of
> us more trouble to insist on repartitioning the series, and it's the
> end state that's far more important to me and to all of us.

Agreed, ideally it should be after patch 4 since this patch enables the feature already.

> And I'll even seize on it, to give myself an intermission after
> this one, until tomorrow (when I'll look at 4/9 and 9/9 - but
> shall not look at the selftests ones at all).
> 
> Most of this is okay, except the mm/shmem.c part; and I've just now
> realized that somewhere (whether in this patch or separately) there
> needs to be an update to Documentation/admin-guide/mm/userfaultfd.rst
> (admin-guide? how weird, but not this series' business to correct).

(maybe some dir "devel" would suit better? But I do also see soft-dirty.rst, idle_page_tracking.rst, ...)

[...]

> >  static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
> > @@ -1820,6 +1820,14 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
> >  
> >  	page = pagecache_get_page(mapping, index,
> > 				  FGP_ENTRY | FGP_HEAD | FGP_LOCK, 0);
> > +
> > +	if (page && vma && userfaultfd_minor(vma)) {
> > +		unlock_page(page);
> > +		put_page(page);
> > +		*fault_type = handle_userfault(vmf, VM_UFFD_MINOR);
> > +		return 0;
> > +	}
> > +
> 
> Okay, Peter persuaded you to move that up here: where indeed it
> does look better than the earlier "swapped" version.
> 
> But will crash on swap as it's currently written: it needs to say
> 
> 	if (!xa_is_value(page)) {
> 		unlock_page(page);
> 		put_page(page);
> 	}

And this is definitely true... Thanks,

> 
> I did say before that it's more robust to return from the swap
> case after doing the shmem_swapin_page().  But I might be slowly
> realizing that the ioctl to add the pte (in 4/9) will do its
> shmem_getpage_gfp(), and that will bring in the swap if user
> did not already do so: so I was wrong to claim more robustness
> the other way, this placement should be fine.  I think.
> 
> > 	if (xa_is_value(page)) {
> > 		error = shmem_swapin_page(inode, index, &page,
> > 					  sgp, gfp, vma, fault_type);
> > -- 
> > 2.31.1.295.g9ea45b61b8-goog

-- 
Peter Xu