From: Chris Li <chrisl@kernel.org>
Date: Tue, 21 Nov 2023 08:06:19 -0800
Subject: Re: [PATCH 11/24] mm/swap: also handle swapcache lookup in swapin_readahead
To: Kairui Song
Cc: linux-mm@kvack.org, Andrew Morton, "Huang, Ying", David Hildenbrand,
        Hugh Dickins, Johannes Weiner, Matthew Wilcox, Michal Hocko,
        linux-kernel@vger.kernel.org
In-Reply-To: <20231119194740.94101-12-ryncsn@gmail.com>
References: <20231119194740.94101-1-ryncsn@gmail.com> <20231119194740.94101-12-ryncsn@gmail.com>

On Sun, Nov 19, 2023 at 11:48 AM Kairui Song <ryncsn@gmail.com> wrote:
>
> From: Kairui Song
>
> No feature change, just prepare for later commits.

You need a proper commit message explaining why this change needs to
happen. "Prepare for later commits" is too generic; it does not give
any real information. For example, it seems you want to remove one
swap cache lookup because swapin_readahead() already has one?

I am a bit puzzled by this patch. It shuffles a lot of sensitive code.
However, I do not get the value. It seems like this patch should be
merged with the later patch that depends on it so the two can be
judged together.
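If removing the duplicated lookup is the motivation, the commit message
should say so explicitly. For reference, this is roughly the
before/after shape I extract from the diff (my own sketch, not the
actual code):

        /* before: the fault path does its own swap cache lookup first */
        folio = swap_cache_get_folio(entry, vma, vmf->address);
        if (!folio)
                page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
                                        vmf, &swapcached);

        /* after: swapin_readahead() owns the lookup and reports the outcome */
        page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
                                vmf, &cache_result);
        /* cache_result is SWAP_CACHE_HIT, SWAP_CACHE_MISS or SWAP_CACHE_BYPASS */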
>
> Signed-off-by: Kairui Song
> ---
>  mm/memory.c     | 61 +++++++++++++++++++++++--------------------
>  mm/swap.h       | 10 ++++++--
>  mm/swap_state.c | 26 +++++++++++++--------
>  mm/swapfile.c   | 30 +++++++++++-------------
>  4 files changed, 66 insertions(+), 61 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index f4237a2e3b93..22af9f3e8c75 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3786,13 +3786,13 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf)
>  vm_fault_t do_swap_page(struct vm_fault *vmf)
>  {
>          struct vm_area_struct *vma = vmf->vma;
> -        struct folio *swapcache, *folio = NULL;
> +        struct folio *swapcache = NULL, *folio = NULL;
> +        enum swap_cache_result cache_result;
>          struct page *page;
>          struct swap_info_struct *si = NULL;
>          rmap_t rmap_flags = RMAP_NONE;
>          bool exclusive = false;
>          swp_entry_t entry;
> -        bool swapcached;
>          pte_t pte;
>          vm_fault_t ret = 0;
>
> @@ -3850,42 +3850,37 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>          if (unlikely(!si))
>                  goto out;
>
> -        folio = swap_cache_get_folio(entry, vma, vmf->address);
> -        if (folio)
> -                page = folio_file_page(folio, swp_offset(entry));
> -        swapcache = folio;

Is the motivation that swapin_readahead() already has a swap cache
lookup, so you remove this lookup here?

> -
> -        if (!folio) {
> -                page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
> -                                        vmf, &swapcached);
> -                if (page) {
> -                        folio = page_folio(page);
> -                        if (swapcached)
> -                                swapcache = folio;
> -                } else {
> +        page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
> +                                vmf, &cache_result);
> +        if (page) {
> +                folio = page_folio(page);
> +                if (cache_result != SWAP_CACHE_HIT) {
> +                        /* Had to read the page from swap area: Major fault */
> +                        ret = VM_FAULT_MAJOR;
> +                        count_vm_event(PGMAJFAULT);
> +                        count_memcg_event_mm(vma->vm_mm, PGMAJFAULT);
> +                }
> +                if (cache_result != SWAP_CACHE_BYPASS)
> +                        swapcache = folio;
> +                if (PageHWPoison(page)) {

There is a lot of code shuffle here. From the diff it is hard to tell
whether it is doing the same thing as before.
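For example, the major fault accounting used to be keyed off the
explicit cache lookup and is now keyed off cache_result. This is my
paraphrase of the two shapes a reviewer has to compare by hand (just
how I read the diff, please double check):

        /* old: a swap cache miss implied a major fault */
        if (!folio) {
                page = swapin_readahead(...);
                if (!page) {
                        /* back out: re-take the pte lock, maybe
                         * VM_FAULT_OOM, then goto unlock */
                }
                ret = VM_FAULT_MAJOR;   /* plus the vm/memcg event counters */
        } else if (PageHWPoison(page)) {
                ret = VM_FAULT_HWPOISON;
        }

        /* new: anything other than a cache hit implies a major fault */
        page = swapin_readahead(..., &cache_result);
        if (page) {
                if (cache_result != SWAP_CACHE_HIT)
                        ret = VM_FAULT_MAJOR;   /* plus the counters */
                if (cache_result != SWAP_CACHE_BYPASS)
                        swapcache = folio;
                if (PageHWPoison(page))
                        ret = VM_FAULT_HWPOISON;
        } else {
                /* back out: re-take the pte lock, maybe VM_FAULT_OOM */
        }

If these are equivalent, the commit message is the place to argue it.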
>                          /*
> -                         * Back out if somebody else faulted in this pte
> -                         * while we released the pte lock.
> +                         * hwpoisoned dirty swapcache pages are kept for killing
> +                         * owner processes (which may be unknown at hwpoison time)
>                           */
> -                        vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
> -                                        vmf->address, &vmf->ptl);
> -                        if (likely(vmf->pte &&
> -                                   pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
> -                                ret = VM_FAULT_OOM;
> -                        goto unlock;
> +                        ret = VM_FAULT_HWPOISON;
> +                        goto out_release;
>                  }
> -
> -                /* Had to read the page from swap area: Major fault */
> -                ret = VM_FAULT_MAJOR;
> -                count_vm_event(PGMAJFAULT);
> -                count_memcg_event_mm(vma->vm_mm, PGMAJFAULT);
> -        } else if (PageHWPoison(page)) {
> +        } else {
>                  /*
> -                 * hwpoisoned dirty swapcache pages are kept for killing
> -                 * owner processes (which may be unknown at hwpoison time)
> +                 * Back out if somebody else faulted in this pte
> +                 * while we released the pte lock.
>                   */
> -                ret = VM_FAULT_HWPOISON;
> -                goto out_release;
> +                vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
> +                                vmf->address, &vmf->ptl);
> +                if (likely(vmf->pte &&
> +                           pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
> +                        ret = VM_FAULT_OOM;
> +                goto unlock;
>          }
>
>          ret |= folio_lock_or_retry(folio, vmf);
> diff --git a/mm/swap.h b/mm/swap.h
> index a9a654af791e..ac9136eee690 100644
> --- a/mm/swap.h
> +++ b/mm/swap.h
> @@ -30,6 +30,12 @@ extern struct address_space *swapper_spaces[];
>          (&swapper_spaces[swp_type(entry)][swp_offset(entry) \
>                  >> SWAP_ADDRESS_SPACE_SHIFT])
>
> +enum swap_cache_result {
> +        SWAP_CACHE_HIT,
> +        SWAP_CACHE_MISS,
> +        SWAP_CACHE_BYPASS,
> +};

Does any function later care about SWAP_CACHE_BYPASS? Again, better to
introduce it with the function that uses it. Don't introduce it for
"just in case I might use it later".

> +
>  void show_swap_cache_info(void);
>  bool add_to_swap(struct folio *folio);
>  void *get_shadow_from_swap_cache(swp_entry_t entry);
> @@ -55,7 +61,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
>  struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
>                  struct mempolicy *mpol, pgoff_t ilx);
>  struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
> -                struct vm_fault *vmf, bool *swapcached);
> +                struct vm_fault *vmf, enum swap_cache_result *result);
>
>  static inline unsigned int folio_swap_flags(struct folio *folio)
>  {
> @@ -92,7 +98,7 @@ static inline struct page *swap_cluster_readahead(swp_entry_t entry,
>  }
>
>  static inline struct page *swapin_readahead(swp_entry_t swp, gfp_t gfp_mask,
> -                        struct vm_fault *vmf, bool *swapcached)
> +                        struct vm_fault *vmf, enum swap_cache_result *result)
>  {
>          return NULL;
>  }
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index d87c20f9f7ec..e96d63bf8a22 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -908,8 +908,7 @@ static struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,
>   * @entry: swap entry of this memory
>   * @gfp_mask: memory allocation flags
>   * @vmf: fault information
> - * @swapcached: pointer to a bool used as indicator if the
> - *              page is swapped in through swapcache.
> + * @result: a return value to indicate swap cache usage.
>   *
>   * Returns the struct page for entry and addr, after queueing swapin.
>   *
> @@ -918,30 +917,39 @@ static struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,
>   * or vma-based(ie, virtual address based on faulty address) readahead.
>   */
>  struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
> -                        struct vm_fault *vmf, bool *swapcached)
> +                        struct vm_fault *vmf, enum swap_cache_result *result)
>  {
> +        enum swap_cache_result cache_result;
>          struct swap_info_struct *si;
>          struct mempolicy *mpol;
> +        struct folio *folio;
>          struct page *page;
>          pgoff_t ilx;
> -        bool cached;
> +
> +        folio = swap_cache_get_folio(entry, vmf->vma, vmf->address);
> +        if (folio) {
> +                page = folio_file_page(folio, swp_offset(entry));
> +                cache_result = SWAP_CACHE_HIT;
> +                goto done;
> +        }
>
>          si = swp_swap_info(entry);
>          mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
>          if (swap_use_no_readahead(si, swp_offset(entry))) {
>                  page = swapin_no_readahead(entry, gfp_mask, mpol, ilx, vmf->vma->vm_mm);
> -                cached = false;
> +                cache_result = SWAP_CACHE_BYPASS;
>          } else if (swap_use_vma_readahead(si)) {
>                  page = swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf);
> -                cached = true;
> +                cache_result = SWAP_CACHE_MISS;
>          } else {
>                  page = swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
> -                cached = true;
> +                cache_result = SWAP_CACHE_MISS;
>          }
>          mpol_cond_put(mpol);
>
> -        if (swapcached)
> -                *swapcached = cached;
> +done:
> +        if (result)
> +                *result = cache_result;
>
>          return page;
>  }
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 01c3f53b6521..b6d57fff5e21 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1822,13 +1822,21 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
>
>          si = swap_info[type];
>          do {
> -                struct folio *folio;
> +                struct page *page;
>                  unsigned long offset;
>                  unsigned char swp_count;
> +                struct folio *folio = NULL;
>                  swp_entry_t entry;
>                  int ret;
>                  pte_t ptent;
>
> +                struct vm_fault vmf = {
> +                        .vma = vma,
> +                        .address = addr,
> +                        .real_address = addr,
> +                        .pmd = pmd,
> +                };

Is this code move caused by skipping the swap cache lookup here? This
is very sensitive code related to swap cache racing. It needs very
careful review. Better not shuffle it for no good reason.
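To make the concern concrete: the pte lock is already dropped at this
point, so everything between pte_unmap() and the later pte re-check
relies on the swap cache lookup to catch a page somebody else already
swapped in. Roughly the window, as I read the surrounding code (my
sketch, please correct me if I have it wrong):

        pte_unmap(pte);
        pte = NULL;
        /* old: explicit swap_cache_get_folio() here, readahead only on miss */
        /* new: the same lookup is supposed to happen inside swapin_readahead() */
        page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, &vmf, NULL);
        /* ... unuse_pte() then re-validates the pte under the lock */

Moving the lookup behind the swapin_readahead() call may well preserve
that window, but that is exactly the argument the commit message needs
to make.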
Chris

> +
>                  if (!pte++) {
>                          pte = pte_offset_map(pmd, addr);
>                          if (!pte)
> @@ -1847,22 +1855,10 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
>                  offset = swp_offset(entry);
>                  pte_unmap(pte);
>                  pte = NULL;
> -
> -                folio = swap_cache_get_folio(entry, vma, addr);
> -                if (!folio) {
> -                        struct page *page;
> -                        struct vm_fault vmf = {
> -                                .vma = vma,
> -                                .address = addr,
> -                                .real_address = addr,
> -                                .pmd = pmd,
> -                        };
> -
> -                        page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
> -                                                &vmf, NULL);
> -                        if (page)
> -                                folio = page_folio(page);
> -                }
> +                page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
> +                                        &vmf, NULL);
> +                if (page)
> +                        folio = page_folio(page);
>                  if (!folio) {
>                          /*
>                           * The entry could have been freed, and will not
> --
> 2.42.0
>