From: Suren Baghdasaryan <surenb@google.com>
Date: Thu, 28 Sep 2023 11:23:55 -0700
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI
To: Peter Xu
Cc: akpm@linux-foundation.org, viro@zeniv.linux.org.uk, brauner@kernel.org,
 shuah@kernel.org, aarcange@redhat.com, lokeshgidra@google.com,
 david@redhat.com, hughd@google.com, mhocko@suse.com,
 axelrasmussen@google.com, rppt@kernel.org, willy@infradead.org,
 Liam.Howlett@oracle.com, jannh@google.com, zhangpeng362@huawei.com,
 bgeffon@google.com, kaleshsingh@google.com, ngeoffray@google.com,
 jdduke@google.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
 kernel-team@android.com
On Thu, Sep 28, 2023 at 10:09 AM Peter Xu wrote:
>
> On Fri, Sep 22, 2023 at 06:31:45PM -0700, Suren Baghdasaryan wrote:
> > @@ -72,6 +73,7 @@
> >  #define _UFFDIO_WAKE            (0x02)
> >  #define _UFFDIO_COPY            (0x03)
> >  #define _UFFDIO_ZEROPAGE        (0x04)
> > +#define _UFFDIO_REMAP           (0x05)
> >  #define _UFFDIO_WRITEPROTECT    (0x06)
> >  #define _UFFDIO_CONTINUE        (0x07)
> >  #define _UFFDIO_POISON          (0x08)
>
> Might be good to add a feature bit (UFFD_FEATURE_REMAP) for userspace to
> probe?

Ack.
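For reference, probing from userspace would be the usual UFFDIO_API
handshake. A minimal sketch, assuming the bit ends up being named
UFFD_FEATURE_REMAP (the value below is a placeholder for illustration,
not the final uapi one):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <linux/userfaultfd.h>

    #ifndef UFFD_FEATURE_REMAP
    #define UFFD_FEATURE_REMAP (1ULL << 63) /* placeholder, not upstream */
    #endif

    int main(void)
    {
            int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
            struct uffdio_api api = {
                    .api = UFFD_API,
                    .features = UFFD_FEATURE_REMAP, /* request the feature */
            };

            if (uffd < 0)
                    return 1;
            /* Kernels without the feature reject unknown bits with EINVAL. */
            if (ioctl(uffd, UFFDIO_API, &api) == -1)
                    printf("UFFDIO_REMAP not supported\n");
            else
                    printf("UFFDIO_REMAP supported\n");
            close(uffd);
            return 0;
    }

Old kernels fail the UFFDIO_API call outright when unknown feature bits
are requested, so userspace gets a clean yes/no answer.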
> IIUC the whole remap feature was proposed at the birth of uffd even before
> COPY, but now we have tons of old kernels that will not support it.
>
> [...]
>
> > +int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > +                         pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
> > +                         struct vm_area_struct *dst_vma,
> > +                         struct vm_area_struct *src_vma,
> > +                         unsigned long dst_addr, unsigned long src_addr)
> > +{
> > +        pmd_t _dst_pmd, src_pmdval;
> > +        struct page *src_page;
> > +        struct folio *src_folio;
> > +        struct anon_vma *src_anon_vma, *dst_anon_vma;
> > +        spinlock_t *src_ptl, *dst_ptl;
> > +        pgtable_t src_pgtable, dst_pgtable;
> > +        struct mmu_notifier_range range;
> > +        int err = 0;
> > +
> > +        src_pmdval = *src_pmd;
> > +        src_ptl = pmd_lockptr(src_mm, src_pmd);
> > +
> > +        BUG_ON(!spin_is_locked(src_ptl));
> > +        mmap_assert_locked(src_mm);
> > +        mmap_assert_locked(dst_mm);
> > +
> > +        BUG_ON(!pmd_trans_huge(src_pmdval));
> > +        BUG_ON(!pmd_none(dst_pmdval));
> > +        BUG_ON(src_addr & ~HPAGE_PMD_MASK);
> > +        BUG_ON(dst_addr & ~HPAGE_PMD_MASK);
> > +
> > +        src_page = pmd_page(src_pmdval);
> > +        if (unlikely(!PageAnonExclusive(src_page))) {
> > +                spin_unlock(src_ptl);
> > +                return -EBUSY;
> > +        }
> > +
> > +        src_folio = page_folio(src_page);
> > +        folio_get(src_folio);
> > +        spin_unlock(src_ptl);
> > +
> > +        /* preallocate dst_pgtable if needed */
> > +        if (dst_mm != src_mm) {
> > +                dst_pgtable = pte_alloc_one(dst_mm);
> > +                if (unlikely(!dst_pgtable)) {
> > +                        err = -ENOMEM;
> > +                        goto put_folio;
> > +                }
> > +        } else {
> > +                dst_pgtable = NULL;
> > +        }
> > +
> > +        mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm, src_addr,
> > +                                src_addr + HPAGE_PMD_SIZE);
> > +        mmu_notifier_invalidate_range_start(&range);
> > +
> > +        /* block all concurrent rmap walks */
>
> This is not accurate either, I think.  Maybe we can do "s/all/most/", or
> just drop it (assuming the detailed and accurate version of the
> documentation lies above remap_pages() regarding REMAP locking)?

Yes, comments from the original patch need to be rechecked and I'm sure
I missed some new rules. Thanks for pointing this one out! I'll drop it.

> > +        folio_lock(src_folio);
>
> [...]
>
> > +static int remap_anon_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > +                          struct vm_area_struct *dst_vma,
> > +                          struct vm_area_struct *src_vma,
> > +                          unsigned long dst_addr, unsigned long src_addr,
> > +                          pte_t *dst_pte, pte_t *src_pte,
> > +                          pte_t orig_dst_pte, pte_t orig_src_pte,
> > +                          spinlock_t *dst_ptl, spinlock_t *src_ptl,
> > +                          struct folio *src_folio)
>
> remap_present_pte?

Sounds good.

> [...]
>
> > +/**
> > + * remap_pages - remap arbitrary anonymous pages of an existing vma
> > + * @dst_start: start of the destination virtual memory range
> > + * @src_start: start of the source virtual memory range
> > + * @len: length of the virtual memory range
> > + *
> > + * remap_pages() remaps arbitrary anonymous pages atomically, with zero
> > + * copy. It only works on non-shared anonymous pages because those can
> > + * be relocated without generating non-linear anon_vmas in the rmap
> > + * code.
> > + *
> > + * It provides a zero copy mechanism to handle userspace page faults.
> > + * The source vma pages should have mapcount == 1, which can be
> > + * enforced by using madvise(MADV_DONTFORK) on the src vma.
> > + *
> > + * The thread receiving the page during the userland page fault
> > + * will receive the faulting page in the source vma through the network,
> > + * storage or any other I/O device (MADV_DONTFORK in the source vma
> > + * avoids remap_pages() failing with -EBUSY if the process forks before
> > + * remap_pages() is called), then it will call remap_pages() to map the
> > + * page in the faulting address in the destination vma.
> > + *
> > + * This userfaultfd command works purely via pagetables, so it's the
> > + * most efficient way to move physical non-shared anonymous pages
> > + * across different virtual addresses. Unlike mremap()/mmap()/munmap()
> > + * it does not create any new vmas. The mapping in the destination
> > + * address is atomic.
> > + *
> > + * It only works if the vma protection bits are identical in the
> > + * source and destination vma.
> > + *
> > + * It can remap non-shared anonymous pages within the same vma too.
> > + *
> > + * If the source virtual memory range has any unmapped holes, or if
> > + * the destination virtual memory range is not a whole unmapped hole,
> > + * remap_pages() will fail respectively with -ENOENT or -EEXIST. This
> > + * provides a very strict behavior to avoid any chance of memory
> > + * corruption going unnoticed if there are userland race conditions.
> > + * Only one thread should resolve the userland page fault at any given
> > + * time for any given faulting address. This means that if two threads
> > + * try to both call remap_pages() on the same destination address at the
> > + * same time, the second thread will get an explicit error from this
> > + * command.
> > + *
> > + * The command retval will return "len" if successful. The command
> > + * however can be interrupted by fatal signals or errors. If
> > + * interrupted it will return the number of bytes successfully
> > + * remapped before the interruption if any, or the negative error if
> > + * none. It will never return zero. Either it will return an error or
> > + * an amount of bytes successfully moved. If the retval reports a
> > + * "short" remap, the remap_pages() command should be repeated by
> > + * userland with src+retval, dst+retval, len-retval if it wants to know
> > + * about the error that interrupted it.
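(As an aside, to make the "short remap" contract above concrete: the
userland caller would drive the ioctl roughly like the sketch below.
This is untested and assumes struct uffdio_remap carries a result field
named "remap", by analogy with uffdio_copy's "copy" field: bytes moved
on a short remap, or the negative error if nothing was moved.)

    /* Untested sketch of the retry loop described above. */
    static int remap_range(int uffd, __u64 dst, __u64 src, __u64 len)
    {
            struct uffdio_remap args;

            while (len) {
                    args.dst = dst;
                    args.src = src;
                    args.len = len;
                    args.mode = 0;
                    args.remap = 0;
                    if (ioctl(uffd, UFFDIO_REMAP, &args) == 0)
                            return 0;               /* whole range remapped */
                    if (args.remap <= 0)
                            return (int)args.remap; /* hard error, no progress */
                    /* short remap: retry with src+retval, dst+retval, len-retval */
                    dst += args.remap;
                    src += args.remap;
                    len -= args.remap;
            }
            return 0;
    }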
> > + *
> > + * The UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES flag can be specified to
> > + * prevent -ENOENT errors from materializing if there are holes in the
> > + * source virtual range that is being remapped. The holes will be
> > + * accounted as successfully remapped in the retval of the
> > + * command. This is mostly useful to remap hugepage naturally aligned
> > + * virtual regions without knowing if there are transparent hugepages
> > + * in the regions or not, but preventing the risk of having to split
> > + * the hugepmd during the remap.
> > + *
> > + * If there's any rmap walk that is taking the anon_vma locks without
> > + * first obtaining the folio lock (for example split_huge_page and
> > + * folio_referenced), they will have to verify if the folio->mapping
>
> Hmm, this sentence seems to be not 100% accurate, perhaps not anymore?
>
> As split_huge_page() should need the folio lock and it'll serialize with
> REMAP with the folio lock too.  It seems to me only folio_referenced() is
> the outlier so far, and that's covered by patch 1.
>
> I did also check other users of folio_get_anon_vma() (similar to use case
> of split_huge_page()) and they're all with the folio lock held, so we
> should be good.
>
> In summary, perhaps:
>
>   - Drop the split_huge_page() example here?
>
>   - Should we document this specialty above folio_get_anon_vma(), due
>     to UFFDIO_REMAP?  I'm thinking something like:
>
> + *
> + * NOTE: the caller should normally hold folio lock when calling this.  If
> + * not, the caller needs to double check the anon_vma didn't change after
> + * taking the anon_vma lock for either read or write (UFFDIO_REMAP can
> + * modify it concurrently without folio lock protection).  See
> + * folio_lock_anon_vma_read() which has already covered that, and comment
> + * above remap_pages().
>   */
>  struct anon_vma *folio_get_anon_vma(struct folio *folio)
>  {
>  ...
>  }

Ack. Will fix the remap_pages description and add the comment for
folio_get_anon_vma.

> > + * has changed after taking the anon_vma lock. If it changed they
> > + * should release the lock and retry obtaining a new anon_vma, because
> > + * it means the anon_vma was changed by remap_pages() before the lock
> > + * could be obtained. This is the only additional complexity added to
> > + * the rmap code to provide this anonymous page remapping functionality.
> > + */
> > +ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > +                    unsigned long dst_start, unsigned long src_start,
> > +                    unsigned long len, __u64 mode)
> > +{
>
> [...]
>
> > +                if (!err) {
> > +                        dst_addr += step_size;
> > +                        src_addr += step_size;
> > +                        moved += step_size;
> > +                }
> > +
> > +                if ((!err || err == -EAGAIN) &&
> > +                    fatal_signal_pending(current))
> > +                        err = -EINTR;
> > +
> > +                if (err && err != -EAGAIN)
> > +                        break;
>
> The err handling is slightly harder to read.  I tried to rewrite it like
> this:
>
>         switch (err) {
>         case 0:
>                 dst_addr += step_size;
>                 src_addr += step_size;
>                 moved += step_size;
>                 /* fall through */
>         case -EAGAIN:
>                 if (fatal_signal_pending(current)) {
>                         err = -EINTR;
>                         goto out;
>                 }
>                 /* Continue with the loop */
>                 break;
>         default:
>                 goto out;
>         }
>
> Not super good but maybe slightly better?  No strong opinions, but if you
> agree that looks better we can use it.

Agree that this should be improved. Let me see if I can find a cleaner
way to handle these errors.
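Maybe something along these lines (untested sketch, intended to keep the
same behavior: bail out on hard errors first, then share the
fatal-signal check between the success and -EAGAIN cases):

        if (!err) {
                dst_addr += step_size;
                src_addr += step_size;
                moved += step_size;
        } else if (err != -EAGAIN) {
                break;  /* hard error: report what was moved so far */
        }

        /* err is 0 or -EAGAIN here: stop early on fatal signals */
        if (fatal_signal_pending(current)) {
                err = -EINTR;
                break;
        }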
>
> > +        }
> > +
> > +out:
> > +        mmap_read_unlock(dst_mm);
> > +        if (dst_mm != src_mm)
> > +                mmap_read_unlock(src_mm);
> > +        BUG_ON(moved < 0);
> > +        BUG_ON(err > 0);
> > +        BUG_ON(!moved && !err);
> > +        return moved ? moved : err;
> > +}
>
> I think for the rest I'll read the new version (e.g. I saw discussion on
> proper handling of pmd swap entries, which is not yet addressed, but
> probably will be in the next one).

Appreciate your feedback!
Thanks,
Suren

> Thanks,
>
> --
> Peter Xu