Date: Mon, 7 Jun 2021 17:06:28 -0700 (PDT)
From: Hugh Dickins <hughd@google.com>
To: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
cc: linux-mm@kvack.org, akpm@linux-foundation.org, mpe@ellerman.id.au,
    linuxppc-dev@lists.ozlabs.org, kaleshsingh@google.com, npiggin@gmail.com,
    joel@joelfernandes.org, Christophe Leroy, Linus Torvalds,
    "Kirill A. Shutemov"
Subject: Re: [PATCH v7 01/11] mm/mremap: Fix race between MOVE_PMD mremap and pageout
In-Reply-To: <20210607055131.156184-2-aneesh.kumar@linux.ibm.com>
References: <20210607055131.156184-1-aneesh.kumar@linux.ibm.com> <20210607055131.156184-2-aneesh.kumar@linux.ibm.com>

On Mon, 7 Jun 2021, Aneesh Kumar K.V wrote:

> CPU 1                         CPU 2                         CPU 3
>
> mremap(old_addr, new_addr)    page_shrinker/try_to_unmap_one
>
> mmap_write_lock_killable()
>
>                               addr = old_addr
>                               lock(pte_ptl)
> lock(pmd_ptl)
> pmd = *old_pmd
> pmd_clear(old_pmd)
> flush_tlb_range(old_addr)
>
> *new_pmd = pmd
>                                                             *new_addr = 10; and fills
>                                                             TLB with new addr
>                                                             and old pfn
>
> unlock(pmd_ptl)
>                               ptep_clear_flush()
>                               old pfn is free.
>                                                             Stale TLB entry
>
> Fix this race by holding the pmd lock in pageout. This still doesn't
> handle the race between MOVE_PUD and pageout.
>
> Fixes: 2c91bd4a4e2e ("mm: speed up mremap by 20x on large regions")
> Link: https://lore.kernel.org/linux-mm/CAHk-=wgXVR04eBNtxQfevontWnP6FDm+oj5vauQXP3S-huwbPw@mail.gmail.com
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>

This seems very wrong to me, to require another level of locking in the
rmap lookup, just to fix some new pagetable games in mremap.

But Linus asked "Am I missing something?": neither of you has mentioned
mremap's take_rmap_locks(), so I hope that already meets your need. And
if it needs to be called more often than before (see "need_rmap_locks"),
that's probably okay.
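For reference, take_rmap_locks() lives in mm/mremap.c; in kernels of
this era it looks roughly like the following (a simplified sketch, not
a verbatim quote of the source):

	/*
	 * Sketch of mm/mremap.c:take_rmap_locks() (approximate).
	 * Taking the file and anon rmap locks in write mode excludes
	 * rmap walkers such as try_to_unmap_one() from this VMA's
	 * pages while its page tables are being moved.
	 */
	static void take_rmap_locks(struct vm_area_struct *vma)
	{
		if (vma->vm_file)
			i_mmap_lock_write(vma->vm_file->f_mapping);
		if (vma->anon_vma)
			anon_vma_lock_write(vma->anon_vma);
	}

	/* And the matching unlock, in reverse order: */
	static void drop_rmap_locks(struct vm_area_struct *vma)
	{
		if (vma->anon_vma)
			anon_vma_unlock_write(vma->anon_vma);
		if (vma->vm_file)
			i_mmap_unlock_write(vma->vm_file->f_mapping);
	}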
Hugh

> ---
>  include/linux/rmap.h |  9 ++++++---
>  mm/page_vma_mapped.c | 36 ++++++++++++++++++------------------
>  2 files changed, 24 insertions(+), 21 deletions(-)
>
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index def5c62c93b3..272ab0c2b60b 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -207,7 +207,8 @@ struct page_vma_mapped_walk {
>  	unsigned long address;
>  	pmd_t *pmd;
>  	pte_t *pte;
> -	spinlock_t *ptl;
> +	spinlock_t *pte_ptl;
> +	spinlock_t *pmd_ptl;
>  	unsigned int flags;
>  };
>
> @@ -216,8 +217,10 @@ static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
>  	/* HugeTLB pte is set to the relevant page table entry without pte_mapped. */
>  	if (pvmw->pte && !PageHuge(pvmw->page))
>  		pte_unmap(pvmw->pte);
> -	if (pvmw->ptl)
> -		spin_unlock(pvmw->ptl);
> +	if (pvmw->pte_ptl)
> +		spin_unlock(pvmw->pte_ptl);
> +	if (pvmw->pmd_ptl)
> +		spin_unlock(pvmw->pmd_ptl);
>  }
>
>  bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw);
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index 2cf01d933f13..87a2c94c7e27 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -47,8 +47,10 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
>  			return false;
>  		}
>  	}
> -	pvmw->ptl = pte_lockptr(pvmw->vma->vm_mm, pvmw->pmd);
> -	spin_lock(pvmw->ptl);
> +	if (USE_SPLIT_PTE_PTLOCKS) {
> +		pvmw->pte_ptl = pte_lockptr(pvmw->vma->vm_mm, pvmw->pmd);
> +		spin_lock(pvmw->pte_ptl);
> +	}
>  	return true;
>  }
>
> @@ -162,8 +164,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>  		if (!pvmw->pte)
>  			return false;
>
> -		pvmw->ptl = huge_pte_lockptr(page_hstate(page), mm, pvmw->pte);
> -		spin_lock(pvmw->ptl);
> +		pvmw->pte_ptl = huge_pte_lockptr(page_hstate(page), mm, pvmw->pte);
> +		spin_lock(pvmw->pte_ptl);
>  		if (!check_pte(pvmw))
>  			return not_found(pvmw);
>  		return true;
> @@ -179,6 +181,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>  	if (!pud_present(*pud))
>  		return false;
>  	pvmw->pmd = pmd_offset(pud, pvmw->address);
> +	pvmw->pmd_ptl = pmd_lock(mm, pvmw->pmd);
>  	/*
>  	 * Make sure the pmd value isn't cached in a register by the
>  	 * compiler and used as a stale value after we've observed a
> @@ -186,7 +189,6 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>  	 */
>  	pmde = READ_ONCE(*pvmw->pmd);
>  	if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde)) {
> -		pvmw->ptl = pmd_lock(mm, pvmw->pmd);
>  		if (likely(pmd_trans_huge(*pvmw->pmd))) {
>  			if (pvmw->flags & PVMW_MIGRATION)
>  				return not_found(pvmw);
> @@ -206,14 +208,10 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>  				}
>  			}
>  			return not_found(pvmw);
> -		} else {
> -			/* THP pmd was split under us: handle on pte level */
> -			spin_unlock(pvmw->ptl);
> -			pvmw->ptl = NULL;
>  		}
> -	} else if (!pmd_present(pmde)) {
> -		return false;
> -	}
> +	} else if (!pmd_present(pmde))
> +		return not_found(pvmw);
> +
>  	if (!map_pte(pvmw))
>  		goto next_pte;
>  	while (1) {
> @@ -233,19 +231,21 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>  			/* Did we cross page table boundary? */
>  			if (pvmw->address % PMD_SIZE == 0) {
>  				pte_unmap(pvmw->pte);
> -				if (pvmw->ptl) {
> -					spin_unlock(pvmw->ptl);
> -					pvmw->ptl = NULL;
> +				if (pvmw->pte_ptl) {
> +					spin_unlock(pvmw->pte_ptl);
> +					pvmw->pte_ptl = NULL;
>  				}
> +				spin_unlock(pvmw->pmd_ptl);
> +				pvmw->pmd_ptl = NULL;
>  				goto restart;
>  			} else {
>  				pvmw->pte++;
>  			}
>  		} while (pte_none(*pvmw->pte));
>
> -		if (!pvmw->ptl) {
> -			pvmw->ptl = pte_lockptr(mm, pvmw->pmd);
> -			spin_lock(pvmw->ptl);
> +		if (USE_SPLIT_PTE_PTLOCKS && !pvmw->pte_ptl) {
> +			pvmw->pte_ptl = pte_lockptr(mm, pvmw->pmd);
> +			spin_lock(pvmw->pte_ptl);
>  		}
>  	}
>  }
> --
> 2.31.1
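(On the "called more often than before" point: as a rough illustration
of how mm/mremap.c gates these locks around this time, here is a
simplified sketch under a hypothetical name, not the verbatim
move_pgt_entry() from the source tree:)

	/*
	 * Simplified sketch of the mm/mremap.c gating pattern: the
	 * caller decides, via need_rmap_locks, whether the rmap locks
	 * must be held across the actual PMD move, so rmap walkers
	 * never observe a half-moved page table.
	 */
	static bool move_pmd_entry_sketch(struct vm_area_struct *vma,
					  unsigned long old_addr,
					  unsigned long new_addr,
					  pmd_t *old_pmd, pmd_t *new_pmd,
					  bool need_rmap_locks)
	{
		bool moved;

		if (need_rmap_locks)
			take_rmap_locks(vma);
		moved = move_normal_pmd(vma, old_addr, new_addr,
					old_pmd, new_pmd);
		if (need_rmap_locks)
			drop_rmap_locks(vma);
		return moved;
	}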