From mboxrd@z Thu Jan 1 00:00:00 1970
From: Hillf Danton <hdanton@sina.com>
To: Peter Xu
Cc: Hillf Danton, mm, lkml
Subject: Re: f45ec5ff16 ("userfaultfd: wp: support swap and page migration"): [  140.777858] BUG: Bad rss-counter state mm:b278fc66 type:MM_ANONPAGES val:1
Date: Sun, 12 Apr 2020 20:54:08 +0800
Message-Id: <20200412125408.18008-1-hdanton@sina.com>
In-Reply-To: <20200410073209.11164-1-hdanton@sina.com>
References: <20200410002518.GG8179@shao2-debian> <20200410073209.11164-1-hdanton@sina.com>

On Fri, 10 Apr 2020 11:32:34 -0400 Peter Xu wrote:
>
> I'm not sure this is correct. As I mentioned, the commit wanted to
> apply the uffd-wp bit even for the swap entries so that even when the
> swap entries get swapped in, the page will still be write protected.
> So IIUC we can't remove that.

Yes, you are right.

Now both the CONFIG_MIGRATION and swap-entry cases are restored, with
uffd_wp made to survive migration the same way soft_dirty does.
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -236,6 +236,8 @@ static bool remove_migration_pte(struct
 		pte = pte_mkold(mk_pte(new, READ_ONCE(vma->vm_page_prot)));
 		if (pte_swp_soft_dirty(*pvmw.pte))
 			pte = pte_mksoft_dirty(pte);
+		if (pte_swp_uffd_wp(*pvmw.pte))
+			pte = pte_mkuffd_wp(pte);
 
 		/*
 		 * Recheck VMA as permissions can change since migration started
@@ -243,15 +245,11 @@ static bool remove_migration_pte(struct
 		entry = pte_to_swp_entry(*pvmw.pte);
 		if (is_write_migration_entry(entry))
 			pte = maybe_mkwrite(pte, vma);
-		else if (pte_swp_uffd_wp(*pvmw.pte))
-			pte = pte_mkuffd_wp(pte);
 
 		if (unlikely(is_zone_device_page(new))) {
 			if (is_device_private_page(new)) {
 				entry = make_device_private_entry(new, pte_write(pte));
 				pte = swp_entry_to_pte(entry);
-				if (pte_swp_uffd_wp(*pvmw.pte))
-					pte = pte_mkuffd_wp(pte);
 			}
 		}
 
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -139,11 +139,13 @@ static unsigned long change_pte_range(st
 			}
 			ptep_modify_prot_commit(vma, addr, pte, oldpte, ptent);
 			pages++;
-		} else if (is_swap_pte(oldpte)) {
+		} else if (IS_ENABLED(CONFIG_MIGRATION)) {
 			swp_entry_t entry = pte_to_swp_entry(oldpte);
 			pte_t newpte;
 
-			if (is_write_migration_entry(entry)) {
+			if (!non_swap_entry(entry)) {
+				newpte = oldpte;
+			} else if (is_write_migration_entry(entry)) {
 				/*
 				 * A protection check is difficult so
 				 * just be safe and disable write
@@ -164,7 +166,7 @@ static unsigned long change_pte_range(st
 				if (pte_swp_uffd_wp(oldpte))
 					newpte = pte_swp_mkuffd_wp(newpte);
 			} else {
-				newpte = oldpte;
+				continue;
 			}
 
 			if (uffd_wp)