From: "Huang, Ying"
To: Kairui Song
Cc: linux-mm@kvack.org, Kairui Song, Andrew Morton, Chris Li, Hugh Dickins, Johannes Weiner, Matthew Wilcox, Michal Hocko, Yosry Ahmed, David Hildenbrand, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 2/9] mm/swap: move no readahead swapin code to a stand-alone helper
In-Reply-To: <20240102175338.62012-3-ryncsn@gmail.com> (Kairui Song's message of "Wed, 3 Jan 2024 01:53:31 +0800")
References: <20240102175338.62012-1-ryncsn@gmail.com> <20240102175338.62012-3-ryncsn@gmail.com>
Date: Thu, 04 Jan 2024 15:28:57 +0800
Message-ID: <87plyhblqe.fsf@yhuang6-desk2.ccr.corp.intel.com>

Kairui Song writes:

> From: Kairui Song
>
> No feature change; this simply moves the routine into a standalone
> function so it can be reused later. The error path handling is copied
> from the "out_page" label to keep the code change minimal for easier
> review.
>
> Signed-off-by: Kairui Song
> ---
>  mm/memory.c     | 32 ++++----------------------------
>  mm/swap.h       |  8 ++++++++
>  mm/swap_state.c | 47 +++++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 59 insertions(+), 28 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index a0a50d3754f0..0165c8cad489 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3803,7 +3803,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	swp_entry_t entry;
>  	pte_t pte;
>  	vm_fault_t ret = 0;
> -	void *shadow = NULL;
>
>  	if (!pte_unmap_same(vmf))
>  		goto out;
> @@ -3867,33 +3866,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	if (!folio) {
>  		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
>  		    __swap_count(entry) == 1) {
> -			/* skip swapcache */
> -			folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
> -						vma, vmf->address, false);
> -			page = &folio->page;
> -			if (folio) {
> -				__folio_set_locked(folio);
> -				__folio_set_swapbacked(folio);
> -
> -				if (mem_cgroup_swapin_charge_folio(folio,
> -							vma->vm_mm, GFP_KERNEL,
> -							entry)) {
> -					ret = VM_FAULT_OOM;
> -					goto out_page;
> -				}
> -				mem_cgroup_swapin_uncharge_swap(entry);
> -
> -				shadow = get_shadow_from_swap_cache(entry);
> -				if (shadow)
> -					workingset_refault(folio, shadow);
> -
> -				folio_add_lru(folio);
> -
> -				/* To provide entry to swap_read_folio() */
> -				folio->swap = entry;
> -				swap_read_folio(folio, true, NULL);
> -				folio->private = NULL;
> -			}
> +			/* skip swapcache and readahead */
> +			folio = swapin_direct(entry, GFP_HIGHUSER_MOVABLE, vmf);
> +			if (folio)
> +				page = &folio->page;
>  		} else {
>  			page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
>  						vmf);
> diff --git a/mm/swap.h b/mm/swap.h
> index 758c46ca671e..83eab7b67e77 100644
> --- a/mm/swap.h
> +++ b/mm/swap.h
> @@ -56,6 +56,8 @@ struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
>  				    struct mempolicy *mpol, pgoff_t ilx);
>  struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
>  			      struct vm_fault *vmf);
> +struct folio *swapin_direct(swp_entry_t entry, gfp_t flag,
> +			    struct vm_fault *vmf);
>
>  static inline unsigned int folio_swap_flags(struct folio *folio)
>  {
> @@ -86,6 +88,12 @@ static inline struct folio *swap_cluster_readahead(swp_entry_t entry,
>  	return NULL;
>  }
>
> +struct folio *swapin_direct(swp_entry_t entry, gfp_t flag,
> +			    struct vm_fault *vmf)
> +{
> +	return NULL;
> +}
> +
>  static inline struct page *swapin_readahead(swp_entry_t swp, gfp_t gfp_mask,
>  			struct vm_fault *vmf)
>  {
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index e671266ad772..24cb93ed5081 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -861,6 +861,53 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
>  	return folio;
>  }
>
> +/**
> + * swapin_direct - swap in folios skipping swap cache and readahead

swap in a folio ... (singular: this helper reads exactly one folio).

> + * @entry: swap entry of this memory
> + * @gfp_mask: memory allocation flags
> + * @vmf: fault information
> + *
> + * Returns the struct folio for entry and addr after the swap entry is read
> + * in.
> + */
> +struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
> +			    struct vm_fault *vmf)
> +{
> +	struct vm_area_struct *vma = vmf->vma;
> +	struct folio *folio;
> +	void *shadow = NULL;
> +
> +	/* skip swapcache */
> +	folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
> +				vma, vmf->address, false);

You pass gfp_mask in, but don't use it.
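One way to address this (a minimal sketch; the sole caller currently passes
GFP_HIGHUSER_MOVABLE, so behavior would be unchanged, but the parameter is
otherwise dead):

	/* honor the caller's gfp_mask instead of hard-coding the flags */
	folio = vma_alloc_folio(gfp_mask, 0, vma, vmf->address, false);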
> +	if (folio) {
> +		__folio_set_locked(folio);
> +		__folio_set_swapbacked(folio);
> +
> +		if (mem_cgroup_swapin_charge_folio(folio,
> +					vma->vm_mm, GFP_KERNEL,
> +					entry)) {
> +			folio_unlock(folio);
> +			folio_put(folio);
> +			return NULL;
> +		}
> +		mem_cgroup_swapin_uncharge_swap(entry);
> +
> +		shadow = get_shadow_from_swap_cache(entry);
> +		if (shadow)
> +			workingset_refault(folio, shadow);
> +
> +		folio_add_lru(folio);
> +
> +		/* To provide entry to swap_read_folio() */
> +		folio->swap = entry;
> +		swap_read_folio(folio, true, NULL);
> +		folio->private = NULL;
> +	}
> +
> +	return folio;
> +}
> +
>  /**
>   * swapin_readahead - swap in pages in hope we need them soon
>   * @entry: swap entry of this memory

--
Best Regards,
Huang, Ying