From: Kairui Song <ryncsn@gmail.com>
Date: Sat, 16 Dec 2023 21:58:03 +0800
Subject: Re: [PATCH 13/13] mm: Convert swap_cluster_readahead and swap_vma_readahead to return a folio
To: Matthew Wilcox (Oracle)
Cc: Andrew Morton, linux-mm@kvack.org
In-Reply-To: <20231213215842.671461-14-willy@infradead.org>
References: <20231213215842.671461-1-willy@infradead.org> <20231213215842.671461-14-willy@infradead.org>

On Thu, 14 Dec 2023 at 05:59, Matthew Wilcox (Oracle) wrote:
>
> shmem_swapin_cluster() immediately converts the page back to a folio,
> and swapin_readahead() may as well call folio_file_page() once instead
> of having each function call it.
>
> Signed-off-by: Matthew Wilcox (Oracle)
> ---
>  mm/shmem.c      |  8 +++-----
>  mm/swap.h       |  6 +++---
>  mm/swap_state.c | 21 ++++++++++-----------
>  3 files changed, 16 insertions(+), 19 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index c62f904ba1ca..a4d388973021 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1570,15 +1570,13 @@ static struct folio *shmem_swapin_cluster(swp_entry_t swap, gfp_t gfp,
>  {
>  	struct mempolicy *mpol;
>  	pgoff_t ilx;
> -	struct page *page;
> +	struct folio *folio;
>
>  	mpol = shmem_get_pgoff_policy(info, index, 0, &ilx);
> -	page = swap_cluster_readahead(swap, gfp, mpol, ilx);
> +	folio = swap_cluster_readahead(swap, gfp, mpol, ilx);
>  	mpol_cond_put(mpol);
>
> -	if (!page)
> -		return NULL;
> -	return page_folio(page);
> +	return folio;
>  }
>
>  /*
> diff --git a/mm/swap.h b/mm/swap.h
> index 82c68ccb5ab1..758c46ca671e 100644
> --- a/mm/swap.h
> +++ b/mm/swap.h
> @@ -52,8 +52,8 @@ struct folio *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
>  struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_flags,
>  		struct mempolicy *mpol, pgoff_t ilx, bool *new_page_allocated,
>  		bool skip_if_exists);
> -struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
> -		struct mempolicy *mpol, pgoff_t ilx);
> +struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
> +		struct mempolicy *mpol, pgoff_t ilx);
>  struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
>  		struct vm_fault *vmf);
>
> @@ -80,7 +80,7 @@ static inline void show_swap_cache_info(void)
>  {
>  }
>
> -static inline struct page *swap_cluster_readahead(swp_entry_t entry,
> +static inline struct folio *swap_cluster_readahead(swp_entry_t entry,
>  		gfp_t gfp_mask, struct mempolicy *mpol, pgoff_t ilx)
>  {
>  	return NULL;
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 1cb1d5d0583e..793b5b9e4f96 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -629,7 +629,7 @@ static unsigned long swapin_nr_pages(unsigned long offset)
>   * @mpol: NUMA memory allocation policy to be applied
>   * @ilx: NUMA interleave index, for use only when MPOL_INTERLEAVE
>   *
> - * Returns the struct page for entry and addr, after queueing swapin.
> + * Returns the struct folio for entry and addr, after queueing swapin.
>   *
>   * Primitive swap readahead code. We simply read an aligned block of
>   * (1 << page_cluster) entries in the swap area. This method is chosen
> @@ -640,7 +640,7 @@ static unsigned long swapin_nr_pages(unsigned long offset)
>   * are used for every page of the readahead: neighbouring pages on swap
>   * are fairly likely to have been swapped out from the same node.
>   */
> -struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> +struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
>  		struct mempolicy *mpol, pgoff_t ilx)
>  {
>  	struct folio *folio;
> @@ -692,7 +692,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
>  	if (unlikely(page_allocated))
>  		swap_read_folio(folio, false, NULL);
>  	zswap_folio_swapin(folio);
> -	return folio_file_page(folio, swp_offset(entry));
> +	return folio;
>  }
>
>  int init_swap_address_space(unsigned int type, unsigned long nr_pages)
> @@ -796,7 +796,7 @@ static void swap_ra_info(struct vm_fault *vmf,
>   * @targ_ilx: NUMA interleave index, for use only when MPOL_INTERLEAVE
>   * @vmf: fault information
>   *
> - * Returns the struct page for entry and addr, after queueing swapin.
> + * Returns the struct folio for entry and addr, after queueing swapin.
>   *
>   * Primitive swap readahead code. We simply read in a few pages whose
>   * virtual addresses are around the fault address in the same vma.
> @@ -804,9 +804,8 @@ static void swap_ra_info(struct vm_fault *vmf,
>   * Caller must hold read mmap_lock if vmf->vma is not NULL.
>   *
>   */
> -static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
> -				       struct mempolicy *mpol, pgoff_t targ_ilx,
> -				       struct vm_fault *vmf)
> +static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
> +		struct mempolicy *mpol, pgoff_t targ_ilx, struct vm_fault *vmf)
>  {
>  	struct blk_plug plug;
>  	struct swap_iocb *splug = NULL;
> @@ -868,7 +867,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
>  	if (unlikely(page_allocated))
>  		swap_read_folio(folio, false, NULL);
>  	zswap_folio_swapin(folio);
> -	return folio_file_page(folio, swp_offset(entry));
> +	return folio;
>  }
>
>  /**
> @@ -888,14 +887,14 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
>  {
>  	struct mempolicy *mpol;
>  	pgoff_t ilx;
> -	struct page *page;
> +	struct folio *folio;
>
>  	mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
> -	page = swap_use_vma_readahead() ?
> +	folio = swap_use_vma_readahead() ?
>  		swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf) :
>  		swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
>  	mpol_cond_put(mpol);
> -	return page;
> +	return folio_file_page(folio, swp_offset(entry));

Hi Matthew,

There is a bug here: folio could be NULL, which would cause a NULL pointer dereference.
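To illustrate why: a minimal user-space sketch of the guarded pattern. The struct and function names below are hypothetical stand-ins for the readahead and folio_file_page() calls, not the actual mm code; it just shows the NULL check that has to happen before the folio-to-page conversion.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stubs only; not the kernel's definitions. */
struct folio { int id; };
struct page { int id; };

/* Stands in for swap_vma_readahead()/swap_cluster_readahead(),
 * either of which may return NULL (e.g. on allocation failure). */
static struct folio *readahead_may_fail(int fail)
{
	static struct folio f = { 42 };
	return fail ? NULL : &f;
}

/* Stands in for folio_file_page(): it dereferences the folio,
 * so passing NULL here is a NULL pointer dereference. */
static struct page *folio_page_stub(struct folio *folio)
{
	(void)folio->id;	/* dereference: crashes if folio == NULL */
	return (struct page *)folio;
}

/* Guarded tail of the conversion: check the folio before using it,
 * and let a failed readahead propagate NULL to the caller instead. */
static struct page *swapin_tail_fixed(int fail)
{
	struct folio *folio = readahead_may_fail(fail);

	if (!folio)
		return NULL;
	return folio_page_stub(folio);
}
```

With the guard in place, a failed readahead returns NULL to the fault handler rather than faulting inside the conversion helper.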