From: Lokesh Gidra <lokeshgidra@google.com>
Date: Fri, 13 Oct 2023 09:49:10 -0700
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI
To: Peter Xu
Cc: David Hildenbrand, Suren Baghdasaryan, akpm@linux-foundation.org, viro@zeniv.linux.org.uk, brauner@kernel.org, shuah@kernel.org, aarcange@redhat.com, hughd@google.com, mhocko@suse.com, axelrasmussen@google.com, rppt@kernel.org, willy@infradead.org, Liam.Howlett@oracle.com, jannh@google.com, zhangpeng362@huawei.com, bgeffon@google.com, kaleshsingh@google.com, ngeoffray@google.com, jdduke@google.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, kernel-team@android.com
References: <20231009064230.2952396-1-surenb@google.com> <20231009064230.2952396-3-surenb@google.com> <214b78ed-3842-5ba1-fa9c-9fa719fca129@redhat.com> <478697aa-f55c-375a-6888-3abb343c6d9d@redhat.com> <205abf01-9699-ff1c-3e4e-621913ada64e@redhat.com>
On Fri, Oct 13, 2023 at 9:08 AM Peter Xu wrote:
>
> On Fri, Oct 13, 2023 at 11:56:31AM +0200, David Hildenbrand wrote:
> > Hi Peter,
>
> Hi, David,
>
> > > I used to have the same thought as David on whether we can simplify the
> > > design, e.g. limit it to a single mm. Then I found that the trickiest part is
> > > actually patch 1 together with the anon_vma manipulations, and the problem
> > > is that it's not avoidable even if we restrict the API to a single mm.
> > >
> > > What else can we gain from a single mm?
> > > One less mmap read lock, but
> > > probably that's all we can get; IIUC we need to keep most of the rest of
> > > the code, e.g. pgtable walks, double pgtable lockings, etc.
> >
> > No existing mechanism moves anon pages between unrelated processes; that
> > naturally makes me nervous if we're doing it "just because we can".
>
> IMHO that's also the potential, when guarded by the userfaultfd descriptor
> being shared between two processes.
>
> See below for more comments on the raised concerns.
>
> > > Actually, even though I have no solid clue, I had a feeling that there
> > > can be some interesting way to leverage this cross-mm movement, while
> > > keeping things all safe (by e.g. elaborately requiring the other proc to
> > > create the uffd and deliver it to this proc).
> >
> > Okay, but no real use cases yet.
>
> I can provide a "not solid" example. I didn't mention it because it's
> really something that just popped into my mind when thinking about cross-mm,
> so I have never discussed it with anyone nor shared it anywhere.
>
> Consider VM live upgrade in a generic form (e.g., no VFIO): we can do that
> very efficiently with shmem or hugetlbfs, but not yet with anonymous memory.
> With REMAP we could do extremely efficient postcopy live upgrade with
> anonymous memory as well.
>
> Basically I see it as a potential way of moving memory efficiently,
> especially with THP.
>
> > > Considering Andrea's original version already contains those bits and all
> > > of the above, I'd vote that we go ahead with supporting two MMs.
> >
> > You can do nasty things with that, as it stands, on the upstream codebase.
> >
> > If you pin the page in src_mm and move it to dst_mm, you have successfully
> > broken the invariant that "exclusive" means "no other references from other
> > processes". That page is marked exclusive but it is, in fact, not exclusive.
>
> It is still exclusive to the dst mm?
> I see your point, but I think you're
> tying exclusiveness together with pinning, and IMHO that may not
> always be necessary?
>
> > Once you achieved that, you can easily have src_mm not have MMF_HAS_PINNED,
>
> (I suppose you meant dst_mm here)
>
> > so you can just COW-share that page. Now you have successfully broken the
> > invariant that COW-shared pages must not be pinned. And you can even trigger
> > VM_BUG_ONs, like in sanity_check_pinned_pages().
>
> Yeah, that's really unfortunate. But frankly, I don't think it's the fault
> of this new feature, but of the rest.
>
> Let's imagine MMF_HAS_PINNED had been proposed not as a per-mm flag but as a
> per-vma one, which I don't see why we couldn't do, because it's simply a
> hint so far. If we applied the same rule here, UFFDIO_REMAP wouldn't even
> work within a single mm as soon as it crosses VMAs. Then UFFDIO_REMAP as a
> whole feature would be NACKed simply because of this..
>
> And I don't think anyone can guarantee a per-vma MMF_HAS_PINNED can never
> happen, or any further change to the pinning solution that may affect this.
> So far it just looks unsafe to me to remap a pinned page.
>
> I don't have a good suggestion here if this is a risk.. I'd consider it
> risky then to do REMAP over pinned pages, no matter cross-mm or single-mm.
> It probably means we just rule them out: folio_maybe_dma_pinned() may not
> even be enough to be safe against fast-gup. We may need
> page_needs_cow_for_dma() with a proper write_protect_seq, no matter
> cross-mm or single-mm.
>
> > Can it all be fixed? Sure, with more complexity. For something without
> > clear motivation, I'll have to pass.
>
> I think what you raised is a valid concern, but IMHO it's better fixed no
> matter cross-mm or single-mm. What do you think?
>
> In general, pinning loses its whole point here for a userspace that
> either DONTNEEDs or REMAPs the page.
> What would be great to do here is to unpin
> the page upon DONTNEED/REMAP/whatever drops it, because it loses its
> coherency anyway, IMHO.
>
> > Once there is real demand, we can revisit it and explore what else we
> > would have to take care of (I don't know how memcg behaves when moving
> > between completely unrelated processes; maybe that works as expected, I
> > don't know, and I have no time to spare on reviewing features with no real
> > use cases) and announce it as a new feature.
>
> Good point. memcg handling is probably needed..
>
> So you reminded me to do a more thorough review against the zap/fault
> paths. I think what's missing is (besides page pinning):
>
>   - mem_cgroup_charge()/mem_cgroup_uncharge():
>
>     (side note: I think folio_throttle_swaprate() is only for when
>     allocating new pages, so not needed here)
>
>   - check_stable_address_space() (under pgtable lock)
>
>   - tlb flush
>
> Hmm? I can't see anywhere we do a tlb flush, batched or not; both
> single-mm and cross-mm should need it. Is this missing?
>

IIUC, ptep_clear_flush() flushes the TLB entry, so I think we are doing
unbatched flushing. A nice performance improvement later on would be to
try doing it batched. Suren can throw more light on it.

One thing I was wondering is: don't we need a cache flush for the src
pages? mremap's move_page_tables() does it. IMHO, it's required here as
well.

> >
> > Note that (with only reading the documentation) it also kept me wondering
> > how the MMs are even implied from:
> >
> >     struct uffdio_move {
> >             __u64 dst;    /* Destination of move */
> >             __u64 src;    /* Source of move */
> >             __u64 len;    /* Number of bytes to move */
> >             __u64 mode;   /* Flags controlling behavior of move */
> >             __s64 move;   /* Number of bytes moved, or negated error */
> >     };
> >
> > That probably has to be documented as well: in which address spaces dst
> > and src reside.
>
> Agreed, some better documentation will never hurt.
> Dst should be in the mm
> address space that was bound to the userfault descriptor. Src should be in
> the current mm address space.
>
> Thanks,
>
> --
> Peter Xu