From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jann Horn
Date: Wed, 27 Sep 2023 12:06:31 +0200
Subject: potential new userfaultfd vs khugepaged conflict [was: Re: [PATCH
 v2 2/3] userfaultfd: UFFDIO_REMAP uABI]
To: Suren Baghdasaryan, Hugh Dickins
Cc: Andrew Morton, Al Viro, brauner@kernel.org, Shuah Khan,
 Andrea Arcangeli, Lokesh Gidra, Peter Xu, David Hildenbrand,
 Michal Hocko, Axel Rasmussen, Mike Rapoport, willy@infradead.org,
 Liam.Howlett@oracle.com, zhangpeng362@huawei.com, Brian Geffon,
 Kalesh Singh, Nicolas Geoffray, Jared Duke, Linux-MM, linux-fsdevel,
 kernel list, "open list:KERNEL SELFTEST FRAMEWORK", kernel-team
In-Reply-To: <20230923013148.1390521-3-surenb@google.com>
References: <20230923013148.1390521-1-surenb@google.com>
 <20230923013148.1390521-3-surenb@google.com>

[moving Hugh into "To:" recipients as FYI for khugepaged interaction]

On Sat, Sep 23, 2023 at 3:31 AM Suren Baghdasaryan wrote:
> From: Andrea Arcangeli
>
> This implements the uABI of UFFDIO_REMAP.
>
> Notably one mode bitflag is also forwarded (and in turn known) by the
> lowlevel remap_pages method.
>
> Signed-off-by: Andrea Arcangeli
> Signed-off-by: Suren Baghdasaryan
[...]
> +/*
> + * The mmap_lock for reading is held by the caller. Just move the page
> + * from src_pmd to dst_pmd if possible, and return true if succeeded
> + * in moving the page.
> + */
> +static int remap_pages_pte(struct mm_struct *dst_mm,
> +			   struct mm_struct *src_mm,
> +			   pmd_t *dst_pmd,
> +			   pmd_t *src_pmd,
> +			   struct vm_area_struct *dst_vma,
> +			   struct vm_area_struct *src_vma,
> +			   unsigned long dst_addr,
> +			   unsigned long src_addr,
> +			   __u64 mode)
> +{
> +	swp_entry_t entry;
> +	pte_t orig_src_pte, orig_dst_pte;
> +	spinlock_t *src_ptl, *dst_ptl;
> +	pte_t *src_pte = NULL;
> +	pte_t *dst_pte = NULL;
> +
> +	struct folio *src_folio = NULL;
> +	struct anon_vma *src_anon_vma = NULL;
> +	struct mmu_notifier_range range;
> +	int err = 0;
> +
> +	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm,
> +				src_addr, src_addr + PAGE_SIZE);
> +	mmu_notifier_invalidate_range_start(&range);
> +retry:
> +	dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl);
> +
> +	/* If an huge pmd materialized from under us fail */
> +	if (unlikely(!dst_pte)) {
> +		err = -EFAULT;
> +		goto out;
> +	}
> +
> +	src_pte = pte_offset_map_nolock(src_mm, src_pmd, src_addr, &src_ptl);
> +
> +	/*
> +	 * We held the mmap_lock for reading so MADV_DONTNEED
> +	 * can zap transparent huge pages under us, or the
> +	 * transparent huge page fault can establish new
> +	 * transparent huge pages under us.
> +	 */
> +	if (unlikely(!src_pte)) {
> +		err = -EFAULT;
> +		goto out;
> +	}
> +
> +	BUG_ON(pmd_none(*dst_pmd));
> +	BUG_ON(pmd_none(*src_pmd));
> +	BUG_ON(pmd_trans_huge(*dst_pmd));
> +	BUG_ON(pmd_trans_huge(*src_pmd));

This works for now, but note that Hugh Dickins has recently been
reworking khugepaged such that PTE-based mappings can be collapsed
into transhuge mappings under the mmap lock held in *read mode*;
holders of the mmap lock in read mode can only synchronize against
this by taking the right page table spinlock and rechecking the pmd
value.
This is only the case for file-based mappings so far, not for
anonymous private VMAs; and this code only operates on anonymous
private VMAs so far, so it works out. But if either Hugh further
reworks khugepaged such that anonymous VMAs can be collapsed under the
mmap lock in read mode, or you expand this code to work on file-backed
VMAs, then it will become possible to hit these BUG_ON() calls. I'm
not sure what the plans for khugepaged going forward are, but the
number of edge cases everyone has to keep in mind would go down if you
changed this function to deal gracefully with page tables disappearing
under you.

In the newest version of mm/pgtable-generic.c, above
__pte_offset_map_lock(), there is a big comment block explaining the
current rules for page table access; in particular, regarding the
helper pte_offset_map_nolock() that you're using:

 * pte_offset_map_nolock(mm, pmd, addr, ptlp), above, is like pte_offset_map();
 * but when successful, it also outputs a pointer to the spinlock in ptlp - as
 * pte_offset_map_lock() does, but in this case without locking it.  This helps
 * the caller to avoid a later pte_lockptr(mm, *pmd), which might by that time
 * act on a changed *pmd: pte_offset_map_nolock() provides the correct spinlock
 * pointer for the page table that it returns.  In principle, the caller should
 * recheck *pmd once the lock is taken; in practice, no callsite needs that -
 * either the mmap_lock for write, or pte_same() check on contents, is enough.

If this becomes hittable in the future, I think you will need to
recheck *pmd, at least for dst_pte, to avoid copying PTEs into a
detached page table.
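To make that concrete, here is a rough, untested sketch of the kind of
recheck I mean for the dst side (dst_pmdval is a new hypothetical
local, and the -EAGAIN handling would have to be wired into whatever
retry logic this function ends up with):

```c
/*
 * Untested sketch, not a drop-in patch: snapshot the pmd before
 * mapping the page table, then recheck it under the PTE lock so we
 * cannot insert entries into a page table that khugepaged has
 * meanwhile detached and freed.
 */
pmd_t dst_pmdval = pmdp_get_lockless(dst_pmd);

dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl);
if (unlikely(!dst_pte)) {
	err = -EFAULT;
	goto out;
}

spin_lock(dst_ptl);
if (unlikely(!pmd_same(dst_pmdval, pmdp_get_lockless(dst_pmd)))) {
	/* the page table was detached or replaced under us; retry */
	spin_unlock(dst_ptl);
	err = -EAGAIN;
	goto out;
}
```

That way the function would stay correct even if khugepaged later
learns to collapse anonymous mappings under the mmap lock in read
mode.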
> +	spin_lock(dst_ptl);
> +	orig_dst_pte = *dst_pte;
> +	spin_unlock(dst_ptl);
> +	if (!pte_none(orig_dst_pte)) {
> +		err = -EEXIST;
> +		goto out;
> +	}
> +
> +	spin_lock(src_ptl);
> +	orig_src_pte = *src_pte;
> +	spin_unlock(src_ptl);
> +	if (pte_none(orig_src_pte)) {
> +		if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES))
> +			err = -ENOENT;
> +		else /* nothing to do to remap a hole */
> +			err = 0;
> +		goto out;
> +	}
> +
> +	if (pte_present(orig_src_pte)) {
> +		/*
> +		 * Pin and lock both source folio and anon_vma. Since we are in
> +		 * RCU read section, we can't block, so on contention have to
> +		 * unmap the ptes, obtain the lock and retry.
> +		 */
> +		if (!src_folio) {
> +			struct folio *folio;
> +
> +			/*
> +			 * Pin the page while holding the lock to be sure the
> +			 * page isn't freed under us
> +			 */
> +			spin_lock(src_ptl);
> +			if (!pte_same(orig_src_pte, *src_pte)) {
> +				spin_unlock(src_ptl);
> +				err = -EAGAIN;
> +				goto out;
> +			}
> +
> +			folio = vm_normal_folio(src_vma, src_addr, orig_src_pte);
> +			if (!folio || !folio_test_anon(folio) ||
> +			    folio_test_large(folio) ||
> +			    folio_estimated_sharers(folio) != 1) {
> +				spin_unlock(src_ptl);
> +				err = -EBUSY;
> +				goto out;
> +			}
> +
> +			folio_get(folio);
> +			src_folio = folio;
> +			spin_unlock(src_ptl);
> +
> +			/* block all concurrent rmap walks */
> +			if (!folio_trylock(src_folio)) {
> +				pte_unmap(&orig_src_pte);
> +				pte_unmap(&orig_dst_pte);
> +				src_pte = dst_pte = NULL;
> +				/* now we can block and wait */
> +				folio_lock(src_folio);
> +				goto retry;
> +			}
> +		}
> +
> +		if (!src_anon_vma) {
> +			/*
> +			 * folio_referenced walks the anon_vma chain
> +			 * without the folio lock. Serialize against it with
> +			 * the anon_vma lock, the folio lock is not enough.
> +			 */
> +			src_anon_vma = folio_get_anon_vma(src_folio);
> +			if (!src_anon_vma) {
> +				/* page was unmapped from under us */
> +				err = -EAGAIN;
> +				goto out;
> +			}
> +			if (!anon_vma_trylock_write(src_anon_vma)) {
> +				pte_unmap(&orig_src_pte);
> +				pte_unmap(&orig_dst_pte);
> +				src_pte = dst_pte = NULL;
> +				/* now we can block and wait */
> +				anon_vma_lock_write(src_anon_vma);
> +				goto retry;
> +			}
> +		}
> +
> +		err = remap_anon_pte(dst_mm, src_mm, dst_vma, src_vma,
> +				     dst_addr, src_addr, dst_pte, src_pte,
> +				     orig_dst_pte, orig_src_pte,
> +				     dst_ptl, src_ptl, src_folio);
> +	} else {
> +		entry = pte_to_swp_entry(orig_src_pte);
> +		if (non_swap_entry(entry)) {
> +			if (is_migration_entry(entry)) {
> +				pte_unmap(&orig_src_pte);
> +				pte_unmap(&orig_dst_pte);
> +				src_pte = dst_pte = NULL;
> +				migration_entry_wait(src_mm, src_pmd,
> +						     src_addr);
> +				err = -EAGAIN;
> +			} else
> +				err = -EFAULT;
> +			goto out;
> +		}
> +
> +		err = remap_swap_pte(dst_mm, src_mm, dst_addr, src_addr,
> +				     dst_pte, src_pte,
> +				     orig_dst_pte, orig_src_pte,
> +				     dst_ptl, src_ptl);
> +	}
> +
> +out:
> +	if (src_anon_vma) {
> +		anon_vma_unlock_write(src_anon_vma);
> +		put_anon_vma(src_anon_vma);
> +	}
> +	if (src_folio) {
> +		folio_unlock(src_folio);
> +		folio_put(src_folio);
> +	}
> +	if (dst_pte)
> +		pte_unmap(dst_pte);
> +	if (src_pte)
> +		pte_unmap(src_pte);
> +	mmu_notifier_invalidate_range_end(&range);
> +
> +	return err;
> +}