From: "Huang, Ying" <ying.huang@intel.com>
To: Peter Xu <peterx@redhat.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Andrea Arcangeli, Andrew Morton, "Kirill A.
Shutemov" , Nadav Amit , Hugh Dickins , David Hildenbrand , Vlastimil Babka Subject: Re: [PATCH RFC 1/4] mm/swap: Add swp_offset_pfn() to fetch PFN from swap entry References: <20220729014041.21292-1-peterx@redhat.com> <20220729014041.21292-2-peterx@redhat.com> Date: Mon, 01 Aug 2022 11:13:58 +0800 In-Reply-To: <20220729014041.21292-2-peterx@redhat.com> (Peter Xu's message of "Thu, 28 Jul 2022 21:40:38 -0400") Message-ID: <8735eglkp5.fsf@yhuang6-desk2.ccr.corp.intel.com> User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux) MIME-Version: 1.0 Content-Type: text/plain; charset=ascii ARC-Authentication-Results: i=1; imf29.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=Qycy1E9l; spf=pass (imf29.hostedemail.com: domain of ying.huang@intel.com designates 192.55.52.151 as permitted sender) smtp.mailfrom=ying.huang@intel.com; dmarc=pass (policy=none) header.from=intel.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1659323660; a=rsa-sha256; cv=none; b=28vjokPzzIgLkby59gaMOymzhsSxMchNDuiWTza3UQeRxh+1BphKfQ8WNybg0SKY26ytGy xvDEiiyaWvkFrQHSlm1bVsG0XlvgJW9aJ3VwhpmDCGg6aw3RV7qKTX714nmwBUTGFOrFuJ ikldUnYNQVQKth1y74TUTqizIqQMUX0= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1659323660; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=vT1Dwb1pbPSC5EWlVcCkW6FLOhg41AxMOsvusislZjs=; b=f1rFm3MueI0cDneX1YXIO8pgTmqEqdp/uFynMTA3ZbeCX6KR1JcVwi/ursNs9xQa5w07gY Jbm8pYe+8zcMsvc2GjrMG87Uu+UDSOmtLZ3SiY+i6B10tceSNu58uQO7kHzFgstzUVl3Oy RXPJhz1+wKyS18LpcbgkXadvEar+CYU= Authentication-Results: imf29.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=Qycy1E9l; spf=pass (imf29.hostedemail.com: domain of ying.huang@intel.com designates 192.55.52.151 as permitted sender) smtp.mailfrom=ying.huang@intel.com; dmarc=pass (policy=none) header.from=intel.com X-Stat-Signature: jyrjzjjrbpmu1zod5g6ghucewcmeqyoh X-Rspamd-Queue-Id: 006F01200DA X-Rspam-User: X-Rspamd-Server: rspam07 X-HE-Tag: 1659323658-536564 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Peter Xu writes: > We've got a bunch of special swap entries that stores PFN inside the swap > offset fields. To fetch the PFN, normally the user just calls swp_offset() > assuming that'll be the PFN. > > Add a helper swp_offset_pfn() to fetch the PFN instead, fetching only the > max possible length of a PFN on the host, meanwhile doing proper check with > MAX_PHYSMEM_BITS to make sure the swap offsets can actually store the PFNs > properly always using the BUILD_BUG_ON() in is_pfn_swap_entry(). > > One reason to do so is we never tried to sanitize whether swap offset can > really fit for storing PFN. At the meantime, this patch also prepares us > with the future possibility to store more information inside the swp offset > field, so assuming "swp_offset(entry)" to be the PFN will not stand any > more very soon. > > Replace many of the swp_offset() callers to use swp_offset_pfn() where > proper. Note that many of the existing users are not candidates for the > replacement, e.g.: > > (1) When the swap entry is not a pfn swap entry at all, or, > (2) when we wanna keep the whole swp_offset but only change the swp type. 
>
> For the latter, it can happen when fork() triggers on a write-migration
> swap entry pte: we may want to only change the migration type from
> write->read but keep the rest, so it's not "fetching the PFN" but
> "changing the swap type only".  Those callers are left alone so that when
> there's more information within the swp offset it'll be carried over
> naturally in those cases.
>
> While at it, drop hwpoison_entry_to_pfn(), because that's exactly what
> the new swp_offset_pfn() is about.
>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  arch/arm64/mm/hugetlbpage.c |  2 +-
>  include/linux/swapops.h     | 28 ++++++++++++++++++++++------
>  mm/hmm.c                    |  2 +-
>  mm/memory-failure.c         |  2 +-
>  mm/page_vma_mapped.c        |  6 +++---
>  5 files changed, 28 insertions(+), 12 deletions(-)
>
> diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
> index 7430060cb0d6..f897d40821dd 100644
> --- a/arch/arm64/mm/hugetlbpage.c
> +++ b/arch/arm64/mm/hugetlbpage.c
> @@ -242,7 +242,7 @@ static inline struct folio *hugetlb_swap_entry_to_folio(swp_entry_t entry)
>  {
>  	VM_BUG_ON(!is_migration_entry(entry) && !is_hwpoison_entry(entry));
>
> -	return page_folio(pfn_to_page(swp_offset(entry)));
> +	return page_folio(pfn_to_page(swp_offset_pfn(entry)));
>  }
>
>  void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
> diff --git a/include/linux/swapops.h b/include/linux/swapops.h
> index a3d435bf9f97..5378f77860fb 100644
> --- a/include/linux/swapops.h
> +++ b/include/linux/swapops.h
> @@ -23,6 +23,14 @@
>  #define SWP_TYPE_SHIFT	(BITS_PER_XA_VALUE - MAX_SWAPFILES_SHIFT)
>  #define SWP_OFFSET_MASK	((1UL << SWP_TYPE_SHIFT) - 1)
>
> +/*
> + * Definitions only for PFN swap entries (see is_pfn_swap_entry()).  To
> + * store PFN, we only need SWP_PFN_BITS bits.  Each of the pfn swap entries
> + * can use the extra bits to store other information besides PFN.
> + */
> +#define SWP_PFN_BITS	(MAX_PHYSMEM_BITS - PAGE_SHIFT)
> +#define SWP_PFN_MASK	((1UL << SWP_PFN_BITS) - 1)
> +
>  /* Clear all flags but only keep swp_entry_t related information */
>  static inline pte_t pte_swp_clear_flags(pte_t pte)
>  {
> @@ -64,6 +72,16 @@ static inline pgoff_t swp_offset(swp_entry_t entry)
>  	return entry.val & SWP_OFFSET_MASK;
>  }
>
> +/*
> + * This should only be called upon a pfn swap entry to get the PFN stored
> + * in the swap entry.  Please refers to is_pfn_swap_entry() for definition
> + * of pfn swap entry.
> + */
> +static inline unsigned long swp_offset_pfn(swp_entry_t entry)
> +{

Would it be good to call is_pfn_swap_entry() here as a debug-only check
that can be eliminated in the production kernel?  (An untested sketch
follows my other comment below.)

> +	return swp_offset(entry) & SWP_PFN_MASK;
> +}
> +
>  /* check whether a pte points to a swap entry */
>  static inline int is_swap_pte(pte_t pte)
>  {
> @@ -369,7 +387,7 @@ static inline int pte_none_mostly(pte_t pte)
>
>  static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
>  {
> -	struct page *p = pfn_to_page(swp_offset(entry));
> +	struct page *p = pfn_to_page(swp_offset_pfn(entry));
>
>  	/*
>  	 * Any use of migration entries may only occur while the
> @@ -387,6 +405,9 @@ static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
>   */
>  static inline bool is_pfn_swap_entry(swp_entry_t entry)
>  {
> +	/* Make sure the swp offset can always store the needed fields */
> +	BUILD_BUG_ON(SWP_TYPE_SHIFT < SWP_PFN_BITS);

  BUILD_BUG_ON(SWP_TYPE_SHIFT <= SWP_PFN_BITS);

?
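For the swp_offset_pfn() comment above, something like the following is
what I have in mind.  This is only an untested sketch: VM_BUG_ON() is
compiled out when CONFIG_DEBUG_VM is disabled, and a forward declaration
is assumed because is_pfn_swap_entry() is only defined later in the
header:

/* Forward declaration; is_pfn_swap_entry() is defined further down */
static inline bool is_pfn_swap_entry(swp_entry_t entry);

static inline unsigned long swp_offset_pfn(swp_entry_t entry)
{
	/* Debug-only sanity check, eliminated in production kernels */
	VM_BUG_ON(!is_pfn_swap_entry(entry));
	return swp_offset(entry) & SWP_PFN_MASK;
}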
Best Regards,
Huang, Ying

> +
>  	return is_migration_entry(entry) || is_device_private_entry(entry) ||
>  	       is_device_exclusive_entry(entry);
>  }
> @@ -475,11 +496,6 @@ static inline int is_hwpoison_entry(swp_entry_t entry)
>  	return swp_type(entry) == SWP_HWPOISON;
>  }
>
> -static inline unsigned long hwpoison_entry_to_pfn(swp_entry_t entry)
> -{
> -	return swp_offset(entry);
> -}
> -
>  static inline void num_poisoned_pages_inc(void)
>  {
>  	atomic_long_inc(&num_poisoned_pages);
> diff --git a/mm/hmm.c b/mm/hmm.c
> index f2aa63b94d9b..3850fb625dda 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -253,7 +253,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
>  		cpu_flags = HMM_PFN_VALID;
>  		if (is_writable_device_private_entry(entry))
>  			cpu_flags |= HMM_PFN_WRITE;
> -		*hmm_pfn = swp_offset(entry) | cpu_flags;
> +		*hmm_pfn = swp_offset_pfn(entry) | cpu_flags;
>  		return 0;
>  	}
>
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index cc6fc9be8d22..e451219124dd 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -632,7 +632,7 @@ static int check_hwpoisoned_entry(pte_t pte, unsigned long addr, short shift,
>  		swp_entry_t swp = pte_to_swp_entry(pte);
>
>  		if (is_hwpoison_entry(swp))
> -			pfn = hwpoison_entry_to_pfn(swp);
> +			pfn = swp_offset_pfn(swp);
>  	}
>
>  	if (!pfn || pfn != poisoned_pfn)
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index 8e9e574d535a..93e13fc17d3c 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -86,7 +86,7 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
>  		    !is_device_exclusive_entry(entry))
>  			return false;
>
> -		pfn = swp_offset(entry);
> +		pfn = swp_offset_pfn(entry);
>  	} else if (is_swap_pte(*pvmw->pte)) {
>  		swp_entry_t entry;
>
> @@ -96,7 +96,7 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
>  		    !is_device_exclusive_entry(entry))
>  			return false;
>
> -		pfn = swp_offset(entry);
> +		pfn = swp_offset_pfn(entry);
>  	} else {
>  		if (!pte_present(*pvmw->pte))
>  			return false;
> @@ -221,7 +221,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>  			return not_found(pvmw);
>  		entry = pmd_to_swp_entry(pmde);
>  		if (!is_migration_entry(entry) ||
> -		    !check_pmd(swp_offset(entry), pvmw))
> +		    !check_pmd(swp_offset_pfn(entry), pvmw))
>  			return not_found(pvmw);
>  		return true;
>  	}