From: "Huang, Ying" <ying.huang@intel.com>
To: Hugh Dickins
Cc: Andrew Morton, Mike Kravetz, Mike Rapoport, "Kirill A. Shutemov",
    Matthew Wilcox, David Hildenbrand, Suren Baghdasaryan, Qi Zheng,
    Yang Shi, Mel Gorman, Peter Xu, Peter Zijlstra, Will Deacon, Yu Zhao,
    Alistair Popple, Ralph Campbell, Ira Weiny, Steven Price,
    SeongJae Park, Lorenzo Stoakes, Naoya Horiguchi, Christophe Leroy,
    Zack Rusin, Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual,
    Pasha Tatashin, Miaohe Lin, Minchan Kim, Christoph Hellwig, Song Liu,
    Thomas Hellstrom, Ryan Roberts, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: Re: [PATCH v2 31/32] mm/swap: swap_vma_readahead() do the pte_offset_map()
Date: Mon, 12 Jun 2023 16:03:18 +0800
In-Reply-To: (Hugh Dickins's message of "Thu, 8 Jun 2023 18:52:17 -0700 (PDT)")
Message-ID: <87legp6rax.fsf@yhuang6-desk2.ccr.corp.intel.com>

Hi, Hugh,

Sorry for the late reply.

Hugh Dickins writes:

> swap_vma_readahead() has been proceeding in an unconventional way, its
> preliminary swap_ra_info() doing the pte_offset_map() and pte_unmap(),
> then relying on that pte pointer even after the pte_unmap() - in its
> CONFIG_64BIT case (I think !CONFIG_HIGHPTE was intended; whereas 32-bit
> copied ptes to stack while they were mapped, but had to limit how many).
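Yes.  For anyone following along, the rule after this series is: only
dereference the PTE pointer between a successful pte_offset_map() and its
matching pte_unmap().  A minimal sketch of the pattern as I understand it
(hypothetical caller; pmd and addr stand for whatever the caller has):

	pte_t *pte, pentry;

	pte = pte_offset_map(pmd, addr);
	if (!pte)
		return;			/* page table may have been freed */
	pentry = ptep_get_lockless(pte);	/* READ_ONCE() of the entry */
	pte_unmap(pte);
	/* must not touch *pte after pte_unmap() */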
>
> Though it would be difficult to construct a failing testcase, accessing
> page table after pte_unmap() will become bad practice, even on 64-bit:
> an rcu_read_unlock() in pte_unmap() will allow page table to be freed.
>
> Move relevant definitions from include/linux/swap.h to mm/swap_state.c,
> nothing else used them.  Delete the CONFIG_64BIT distinction and buffer,
> delete all reference to ptes from swap_ra_info(), use pte_offset_map()
> repeatedly in swap_vma_readahead(), breaking from the loop if it fails.
>
> (Will the repeated "map" and "unmap" show up as a slowdown anywhere?
> If so, maybe modify __read_swap_cache_async() to do the pte_unmap()
> only when it does not find the page already in the swapcache.)
>
> Use ptep_get_lockless(), mainly for its READ_ONCE().  Correctly advance
> the address passed down to each call of __read_swap_cache_async().
>
> Signed-off-by: Hugh Dickins
> ---
>  include/linux/swap.h | 19 -------------------
>  mm/swap_state.c      | 45 +++++++++++++++++++++++---------------------
>  2 files changed, 24 insertions(+), 40 deletions(-)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 3c69cb653cb9..1b9f2d92fc10 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -337,25 +337,6 @@ struct swap_info_struct {
>  					 */
>  };
>
> -#ifdef CONFIG_64BIT
> -#define SWAP_RA_ORDER_CEILING	5
> -#else
> -/* Avoid stack overflow, because we need to save part of page table */
> -#define SWAP_RA_ORDER_CEILING	3
> -#define SWAP_RA_PTE_CACHE_SIZE	(1 << SWAP_RA_ORDER_CEILING)
> -#endif
> -
> -struct vma_swap_readahead {
> -	unsigned short win;
> -	unsigned short offset;
> -	unsigned short nr_pte;
> -#ifdef CONFIG_64BIT
> -	pte_t *ptes;
> -#else
> -	pte_t ptes[SWAP_RA_PTE_CACHE_SIZE];
> -#endif
> -};
> -
>  static inline swp_entry_t folio_swap_entry(struct folio *folio)
>  {
>  	swp_entry_t entry = { .val = page_private(&folio->page) };
>
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index b76a65ac28b3..a43b41975da2 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -698,6 +698,14 @@ void exit_swap_address_space(unsigned int type)
>  	swapper_spaces[type] = NULL;
>  }
>
> +#define SWAP_RA_ORDER_CEILING	5
> +
> +struct vma_swap_readahead {
> +	unsigned short win;
> +	unsigned short offset;
> +	unsigned short nr_pte;
> +};
> +

Because we don't deal with PTEs in struct vma_swap_readahead anymore, it
appears simpler to record the addresses directly, for example,

	struct vma_swap_readahead {
		unsigned long start;
		unsigned long end;
	};

and we can make ra_info.win the return value of swap_ra_info().  Anyway,
this can be a separate cleanup patch on top of this one.
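Something along these lines is what I have in mind (a rough sketch only,
not compile tested; the start/end arithmetic is my reading of the current
fpfn/start/end logic in swap_ra_info()):

	/* Return the readahead window size instead of storing it. */
	static int swap_ra_info(struct vm_fault *vmf,
				struct vma_swap_readahead *ra_info)
	{
		...
		ra_info->start = faddr - (fpfn - start) * PAGE_SIZE;
		ra_info->end = faddr + (end - fpfn) * PAGE_SIZE;
		return win;
	}

and swap_vma_readahead() becomes

	win = swap_ra_info(vmf, &ra_info);
	if (win == 1)
		goto skip;

	for (addr = ra_info.start; addr < ra_info.end; addr += PAGE_SIZE) {
		...
	}

with no nr_pte/offset bookkeeping needed.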
For the patch itself, feel free to add,

Reviewed-by: "Huang, Ying" <ying.huang@intel.com>

>  static void swap_ra_info(struct vm_fault *vmf,
>  			 struct vma_swap_readahead *ra_info)
>  {
> @@ -705,11 +713,7 @@ static void swap_ra_info(struct vm_fault *vmf,
>  	unsigned long ra_val;
>  	unsigned long faddr, pfn, fpfn, lpfn, rpfn;
>  	unsigned long start, end;
> -	pte_t *pte, *orig_pte;
>  	unsigned int max_win, hits, prev_win, win;
> -#ifndef CONFIG_64BIT
> -	pte_t *tpte;
> -#endif
>
>  	max_win = 1 << min_t(unsigned int, READ_ONCE(page_cluster),
>  			     SWAP_RA_ORDER_CEILING);
> @@ -728,12 +732,9 @@ static void swap_ra_info(struct vm_fault *vmf,
>  			       max_win, prev_win);
>  	atomic_long_set(&vma->swap_readahead_info,
>  			SWAP_RA_VAL(faddr, win, 0));
> -
>  	if (win == 1)
>  		return;
>
> -	/* Copy the PTEs because the page table may be unmapped */
> -	orig_pte = pte = pte_offset_map(vmf->pmd, faddr);
>  	if (fpfn == pfn + 1) {
>  		lpfn = fpfn;
>  		rpfn = fpfn + win;
> @@ -753,15 +754,6 @@ static void swap_ra_info(struct vm_fault *vmf,
>
>  	ra_info->nr_pte = end - start;
>  	ra_info->offset = fpfn - start;
> -	pte -= ra_info->offset;
> -#ifdef CONFIG_64BIT
> -	ra_info->ptes = pte;
> -#else
> -	tpte = ra_info->ptes;
> -	for (pfn = start; pfn != end; pfn++)
> -		*tpte++ = *pte++;
> -#endif
> -	pte_unmap(orig_pte);
>  }
>
>  /**
> @@ -785,7 +777,8 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
>  	struct swap_iocb *splug = NULL;
>  	struct vm_area_struct *vma = vmf->vma;
>  	struct page *page;
> -	pte_t *pte, pentry;
> +	pte_t *pte = NULL, pentry;
> +	unsigned long addr;
>  	swp_entry_t entry;
>  	unsigned int i;
>  	bool page_allocated;
> @@ -797,17 +790,25 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
>  	if (ra_info.win == 1)
>  		goto skip;
>
> +	addr = vmf->address - (ra_info.offset * PAGE_SIZE);
> +
>  	blk_start_plug(&plug);
> -	for (i = 0, pte = ra_info.ptes; i < ra_info.nr_pte;
> -	     i++, pte++) {
> -		pentry = *pte;
> +	for (i = 0; i < ra_info.nr_pte; i++, addr += PAGE_SIZE) {
> +		if (!pte++) {
> +			pte = pte_offset_map(vmf->pmd, addr);
> +			if (!pte)
> +				break;
> +		}
> +		pentry = ptep_get_lockless(pte);
>  		if (!is_swap_pte(pentry))
>  			continue;
>  		entry = pte_to_swp_entry(pentry);
>  		if (unlikely(non_swap_entry(entry)))
>  			continue;
> +		pte_unmap(pte);
> +		pte = NULL;
>  		page = __read_swap_cache_async(entry, gfp_mask, vma,
> -					       vmf->address, &page_allocated);
> +					       addr, &page_allocated);
>  		if (!page)
>  			continue;
>  		if (page_allocated) {
> @@ -819,6 +820,8 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
>  		}
>  		put_page(page);
>  	}
> +	if (pte)
> +		pte_unmap(pte);
>  	blk_finish_plug(&plug);
>  	swap_read_unplug(splug);
>  	lru_add_drain();