From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Huang, Ying" <ying.huang@intel.com>
To: Miaohe Lin <linmiaohe@huawei.com>
Subject: Re: [PATCH 16/16] mm/migration: fix potential pte_unmap on an not mapped pte
References: <20220304093409.25829-1-linmiaohe@huawei.com>
	<20220304093409.25829-17-linmiaohe@huawei.com>
Date: Mon, 07 Mar 2022 13:37:28 +0800
In-Reply-To: <20220304093409.25829-17-linmiaohe@huawei.com> (Miaohe Lin's
	message of "Fri, 4 Mar 2022 17:34:09 +0800")
Message-ID: <871qze5nmv.fsf@yhuang6-desk2.ccr.corp.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain; charset=ascii

Miaohe Lin <linmiaohe@huawei.com> writes:

> __migration_entry_wait() and migration_entry_wait_on_locked() assume
> the pte is always mapped by the caller.  But this is not the case when
> they are called from migration_entry_wait_huge() and follow_huge_pmd().
> Add a parameter, unmap, to indicate whether the pte needs to be
> unmapped, to fix this issue.

This seems like a possible issue.  Have you tested it?  It appears that
it's possible to trigger the issue; if so, could you paste the error
log here?

BTW: have you tested the fixes for the other functionality issues in
your patchset?

Best Regards,
Huang, Ying

> Fixes: 30dad30922cc ("mm: migration: add migrate_entry_wait_huge()")
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  include/linux/migrate.h |  2 +-
>  include/linux/swapops.h |  4 ++--
>  mm/filemap.c            | 10 +++++-----
>  mm/hugetlb.c            |  2 +-
>  mm/migrate.c            | 14 ++++++++------
>  5 files changed, 17 insertions(+), 15 deletions(-)
>
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index 66a34eae8cb6..3ef4ff699bef 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -41,7 +41,7 @@ extern int migrate_huge_page_move_mapping(struct address_space *mapping,
>  extern int migrate_page_move_mapping(struct address_space *mapping,
>  		struct page *newpage, struct page *page, int extra_count);
>  void migration_entry_wait_on_locked(swp_entry_t entry, pte_t *ptep,
> -					spinlock_t *ptl);
> +					spinlock_t *ptl, bool unmap);
>  void folio_migrate_flags(struct folio *newfolio, struct folio *folio);
>  void folio_migrate_copy(struct folio *newfolio, struct folio *folio);
>  int folio_migrate_mapping(struct address_space *mapping,
> diff --git a/include/linux/swapops.h b/include/linux/swapops.h
> index d356ab4047f7..d66556875d7d 100644
> --- a/include/linux/swapops.h
> +++ b/include/linux/swapops.h
> @@ -213,7 +213,7 @@ static inline swp_entry_t make_writable_migration_entry(pgoff_t offset)
>  }
>
>  extern void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
> -					spinlock_t *ptl);
> +					spinlock_t *ptl, bool unmap);
>  extern void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
>  					unsigned long address);
>  extern void migration_entry_wait_huge(struct vm_area_struct *vma,
> @@ -235,7 +235,7 @@ static inline int is_migration_entry(swp_entry_t swp)
>  }
>
>  static inline void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
> -					spinlock_t *ptl) { }
> +					spinlock_t *ptl, bool unmap) { }
>  static inline void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
>  					 unsigned long address) { }
>  static inline void migration_entry_wait_huge(struct vm_area_struct *vma,
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 8f7e6088ee2a..18c353d52aae 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1389,6 +1389,7 @@ static inline int folio_wait_bit_common(struct folio *folio, int bit_nr,
>   * @ptep: mapped pte pointer. Will return with the ptep unmapped. Only required
>   *        for pte entries, pass NULL for pmd entries.
>   * @ptl: already locked ptl. This function will drop the lock.
> + * @unmap: indicating whether ptep needs to be unmapped.
>   *
>   * Wait for a migration entry referencing the given page to be removed. This is
>   * equivalent to put_and_wait_on_page_locked(page, TASK_UNINTERRUPTIBLE) except
> @@ -1402,7 +1403,7 @@ static inline int folio_wait_bit_common(struct folio *folio, int bit_nr,
>   * there.
>   */
>  void migration_entry_wait_on_locked(swp_entry_t entry, pte_t *ptep,
> -				spinlock_t *ptl)
> +				spinlock_t *ptl, bool unmap)
>  {
>  	struct wait_page_queue wait_page;
>  	wait_queue_entry_t *wait = &wait_page.wait;
> @@ -1439,10 +1440,9 @@ void migration_entry_wait_on_locked(swp_entry_t entry, pte_t *ptep,
>  	 * a valid reference to the page, and it must take the ptl to remove the
>  	 * migration entry. So the page is valid until the ptl is dropped.
>  	 */
> -	if (ptep)
> -		pte_unmap_unlock(ptep, ptl);
> -	else
> -		spin_unlock(ptl);
> +	spin_unlock(ptl);
> +	if (unmap && ptep)
> +		pte_unmap(ptep);
>
>  	for (;;) {
>  		unsigned int flags;
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 07668781c246..8088128c25db 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -6713,7 +6713,7 @@ follow_huge_pmd(struct mm_struct *mm, unsigned long address,
>  	} else {
>  		if (is_hugetlb_entry_migration(pte)) {
>  			spin_unlock(ptl);
> -			__migration_entry_wait(mm, (pte_t *)pmd, ptl);
> +			__migration_entry_wait(mm, (pte_t *)pmd, ptl, false);
>  			goto retry;
>  		}
>  		/*
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 98a968e6f465..5519261f54fe 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -281,7 +281,7 @@ void remove_migration_ptes(struct folio *src, struct folio *dst, bool locked)
>   * When we return from this function the fault will be retried.
>   */
>  void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
> -				spinlock_t *ptl)
> +				spinlock_t *ptl, bool unmap)
>  {
>  	pte_t pte;
>  	swp_entry_t entry;
> @@ -295,10 +295,12 @@ void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
>  	if (!is_migration_entry(entry))
>  		goto out;
>
> -	migration_entry_wait_on_locked(entry, ptep, ptl);
> +	migration_entry_wait_on_locked(entry, ptep, ptl, unmap);
>  	return;
>  out:
> -	pte_unmap_unlock(ptep, ptl);
> +	spin_unlock(ptl);
> +	if (unmap)
> +		pte_unmap(ptep);
>  }
>
>  void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
> @@ -306,14 +308,14 @@ void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
>  {
>  	spinlock_t *ptl = pte_lockptr(mm, pmd);
>  	pte_t *ptep = pte_offset_map(pmd, address);
> -	__migration_entry_wait(mm, ptep, ptl);
> +	__migration_entry_wait(mm, ptep, ptl, true);
>  }
>
>  void migration_entry_wait_huge(struct vm_area_struct *vma,
>  		struct mm_struct *mm, pte_t *pte)
>  {
>  	spinlock_t *ptl = huge_pte_lockptr(hstate_vma(vma), mm, pte);
> -	__migration_entry_wait(mm, pte, ptl);
> +	__migration_entry_wait(mm, pte, ptl, false);
>  }
>
>  #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
> @@ -324,7 +326,7 @@ void pmd_migration_entry_wait(struct mm_struct *mm, pmd_t *pmd)
>  	ptl = pmd_lock(mm, pmd);
>  	if (!is_pmd_migration_entry(*pmd))
>  		goto unlock;
> -	migration_entry_wait_on_locked(pmd_to_swp_entry(*pmd), NULL, ptl);
> +	migration_entry_wait_on_locked(pmd_to_swp_entry(*pmd), NULL, ptl, false);
>  	return;
>  unlock:
>  	spin_unlock(ptl);
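
For anyone reading along in the archive, a sketch of why the unbalanced
pte_unmap() matters (my reading of the generic page-table code, not
something stated in the patch itself): pte_unmap() must pair with a
preceding pte_offset_map().  On kernels with CONFIG_HIGHPTE the
page-table page can live in highmem, so the pair expands to a real
map/unmap of that page, approximately (older kernels use
kmap_atomic()/kunmap_atomic() instead):

	/* include/linux/pgtable.h, CONFIG_HIGHPTE case (approximate) */
	#define pte_offset_map(dir, address)			\
		((pte_t *)kmap_local_page(pmd_page(*(dir))) +	\
		 pte_index((address)))
	#define pte_unmap(pte)	kunmap_local((pte))

The hugetlb callers never went through pte_offset_map():
migration_entry_wait_huge() passes the huge pte pointer through
unchanged, and follow_huge_pmd() passes a cast pmd pointer,

	__migration_entry_wait(mm, (pte_t *)pmd, ptl);

so the unconditional pte_unmap_unlock(ptep, ptl) at the end of the
wait path would kunmap an address that was never mapped.  That is what
the new unmap parameter avoids: the lock is always dropped, but the
pte is only unmapped when the caller actually mapped it.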
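
On the "have you tested it" question: one direction for a reproducer
(my own untested sketch, not something posted in this thread) is to
keep one thread faulting a hugetlb page while another migrates it
between NUMA nodes, so a fault races with the migration and reaches
migration_entry_wait_huge().  Note that the bogus pte_unmap() is only
a real kunmap on a 32-bit CONFIG_HIGHPTE kernel; elsewhere it is a
no-op and would not crash.

	/* Build with -lnuma -lpthread; assumes 2MB huge pages and >= 2 NUMA nodes. */
	#define _GNU_SOURCE
	#include <numaif.h>
	#include <pthread.h>
	#include <sys/mman.h>

	#define HPAGE_SIZE	(2UL << 20)

	static char *buf;

	static void *toucher(void *arg)
	{
		(void)arg;
		for (;;)
			(void)*(volatile char *)buf;	/* refault after each migration */
		return NULL;
	}

	int main(void)
	{
		void *pages[1];
		int node = 0, status = 0;
		pthread_t t;

		buf = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
		if (buf == MAP_FAILED)
			return 1;
		buf[0] = 1;			/* instantiate the huge page */

		pthread_create(&t, NULL, toucher, NULL);

		pages[0] = buf;
		for (;;) {
			node ^= 1;		/* ping-pong between nodes 0 and 1 */
			move_pages(0, 1, pages, &node, &status, MPOL_MF_MOVE);
		}
	}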