From: Kairui Song
Date: Thu, 23 Nov 2023 02:08:03 +0800
Subject: Re: [PATCH 15/24] mm/swap: avoid an duplicated swap cache lookup for SYNCHRONOUS_IO device
To: Chris Li
Cc: linux-mm@kvack.org, Andrew Morton, "Huang, Ying", David Hildenbrand, Hugh Dickins, Johannes Weiner, Matthew Wilcox, Michal Hocko, linux-kernel@vger.kernel.org
References: <20231119194740.94101-1-ryncsn@gmail.com> <20231119194740.94101-16-ryncsn@gmail.com>
Chris Li wrote on Wed, 22 Nov 2023 at 01:18:
>
> On Sun, Nov 19, 2023 at 11:48 AM Kairui Song wrote:
> >
> > From: Kairui Song
> >
> > When an xa_value is returned by the cache lookup, keep it to be used
> > later for the workingset refault check instead of doing the lookup
> > again in swapin_no_readahead.
> >
> > This does have the side effect of making swapoff also trigger the
> > workingset check, but that should be fine since swapoff already
> > affects the workload in many ways.
>
> I need to sleep on it a bit to see if this will create another problem or not.
>
> >
> > Signed-off-by: Kairui Song
> > ---
> >  mm/swap_state.c | 10 ++++------
> >  1 file changed, 4 insertions(+), 6 deletions(-)
> >
> > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > index e057c79fb06f..51de2a0412df 100644
> > --- a/mm/swap_state.c
> > +++ b/mm/swap_state.c
> > @@ -872,7 +872,6 @@ static struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,
> >  {
> >         struct folio *folio;
> >         struct page *page;
> > -       void *shadow = NULL;
> >
> >         page = alloc_pages_mpol(gfp_mask, 0, mpol, ilx, numa_node_id());
> >         folio = (struct folio *)page;
> > @@ -888,10 +887,6 @@ static struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,
> >
> >         mem_cgroup_swapin_uncharge_swap(entry);
> >
> > -       shadow = get_shadow_from_swap_cache(entry);
> > -       if (shadow)
> > -               workingset_refault(folio, shadow);
> >
> >         folio_add_lru(folio);
> >
> >         /* To provide entry to swap_readpage() */
> > @@ -922,11 +917,12 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
> >         enum swap_cache_result cache_result;
> >         struct swap_info_struct *si;
> >         struct mempolicy *mpol;
> > +       void *shadow = NULL;
> >         struct folio *folio;
> >         struct page *page;
> >         pgoff_t ilx;
> >
> > -       folio = swap_cache_get_folio(entry, vmf, NULL);
> > +       folio = swap_cache_get_folio(entry, vmf, &shadow);
> >         if (folio) {
> >                 page = folio_file_page(folio, swp_offset(entry));
> >                 cache_result = SWAP_CACHE_HIT;
> > @@ -938,6 +934,8 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
> >         if (swap_use_no_readahead(si, swp_offset(entry))) {
> >                 page = swapin_no_readahead(entry, gfp_mask, mpol, ilx, vmf->vma->vm_mm);
> >                 cache_result = SWAP_CACHE_BYPASS;
> > +               if (shadow)
> > +                       workingset_refault(page_folio(page), shadow);
>
> It is inconsistent that other flavors of readahead do not do the
> workingset_refault here.

Because of readahead and the swap cache. Every readahead page needs to
be checked by workingset_refault with a different shadow (and so a
different xarray entry search is needed for each). And since the other
swapin paths have to insert the page into the swap cache, they do an
extra xarray search/insert anyway, so this optimization would not help
them.

> I suggest keeping the workingset_refault in swapin_no_readahead() and
> passing the shadow argument in.

That sounds good to me.
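
Something like the following might work -- just a rough, untested sketch
of that suggestion, reusing the names from the hunks quoted above; the
extra "void *shadow" parameter and its position in the argument list are
illustrative, not part of the posted patch:

static struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,
                                        struct mempolicy *mpol, pgoff_t ilx,
                                        void *shadow, struct mm_struct *mm)
{
        struct folio *folio;
        struct page *page;

        page = alloc_pages_mpol(gfp_mask, 0, mpol, ilx, numa_node_id());
        folio = (struct folio *)page;
        ...
        mem_cgroup_swapin_uncharge_swap(entry);

        /* Reuse the shadow found by the caller's cache lookup, no second search */
        if (shadow)
                workingset_refault(folio, shadow);

        folio_add_lru(folio);
        ...
}

and the call site in swapin_readahead() would become:

        if (swap_use_no_readahead(si, swp_offset(entry))) {
                page = swapin_no_readahead(entry, gfp_mask, mpol, ilx,
                                           shadow, vmf->vma->vm_mm);
                cache_result = SWAP_CACHE_BYPASS;
        }

That way the refault accounting stays inside swapin_no_readahead() while
still avoiding the duplicated xarray lookup.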