From: Kairui Song <ryncsn@gmail.com>
Date: Mon, 1 Apr 2024 23:15:18 +0800
Subject: Re: [PATCH] mm: swap: prejudgement swap_has_cache to avoid page allocation
To: Zhaoyu Liu
Cc: akpm@linux-foundation.org, ying.huang@intel.com, songmuchun@bytedance.com,
    david@redhat.com, willy@infradead.org, chrisl@kernel.org, nphamcs@gmail.com,
    yosryahmed@google.com, guo.ziliang@zte.com.cn, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org

On Mon, Apr 1, 2024 at 10:15 PM Zhaoyu Liu wrote:

Hi Zhaoyu,

Not sure why, but I can't apply your patch; maybe you need to fix your
email client?
> Based on qemu arm64 - latest kernel + 100M memory + 1024M swapfile.
> Create a 1G anon mmap, set it to shared, and have two processes
> randomly access the shared memory. When they are racing on the swap
> cache, on average each "alloc_pages_mpol + swapcache_prepare +
> folio_put" took about 1475 us.
>
> So skip the page allocation if SWAP_HAS_CACHE was set, just
> schedule_timeout_uninterruptible() and continue to acquire the page
> via filemap_get_folio() from the swap cache, to speed up
> __read_swap_cache_async().
>
> Signed-off-by: Zhaoyu Liu
> ---
>  include/linux/swap.h |  6 ++++++
>  mm/swap_state.c      | 10 ++++++++++
>  mm/swapfile.c        | 15 +++++++++++++++
>  3 files changed, 31 insertions(+)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index a211a0383425..8a0013299f38 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -480,6 +480,7 @@ extern sector_t swapdev_block(int, pgoff_t);
>  extern int __swap_count(swp_entry_t entry);
>  extern int swap_swapcount(struct swap_info_struct *si, swp_entry_t entry);
>  extern int swp_swapcount(swp_entry_t entry);
> +extern bool swap_has_cache(struct swap_info_struct *si, swp_entry_t entry);
>  struct swap_info_struct *swp_swap_info(swp_entry_t entry);
>  struct backing_dev_info;
>  extern int init_swap_address_space(unsigned int type, unsigned long nr_pages);
> @@ -570,6 +571,11 @@ static inline int swp_swapcount(swp_entry_t entry)
>  	return 0;
>  }
>
> +static inline bool swap_has_cache(struct swap_info_struct *si, swp_entry_t entry)
> +{
> +	return false;
> +}
> +
>  static inline swp_entry_t folio_alloc_swap(struct folio *folio)
>  {
>  	swp_entry_t entry;
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index bfc7e8c58a6d..f130cfc669ce 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -462,6 +462,15 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
>  	if (!swap_swapcount(si, entry) && swap_slot_cache_enabled)
>  		goto fail_put_swap;
>
> +	/*
> +	 * Skip page allocation if SWAP_HAS_CACHE was set,
> +	 * just schedule_timeout_uninterruptible and continue to
> +	 * acquire the page via filemap_get_folio() from the swap
> +	 * cache, to speed up __read_swap_cache_async.
> +	 */
> +	if (swap_has_cache(si, entry))
> +		goto skip_alloc;
> +

But will this cause more lock contention? You need to take the cluster
lock for the has_cache check now.

>  	/*
>  	 * Get a new folio to read into from swap. Allocate it now,
>  	 * before marking swap_map SWAP_HAS_CACHE, when -EEXIST will
> @@ -483,6 +492,7 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
>  	if (err != -EEXIST)
>  		goto fail_put_swap;
>
> +skip_alloc:
>  	/*
>  	 * Protect against a recursive call to __read_swap_cache_async()
>  	 * on the same entry waiting forever here because SWAP_HAS_CACHE
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index cf900794f5ed..5388950c4ca6 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1513,6 +1513,21 @@ int swp_swapcount(swp_entry_t entry)
>  	return count;
>  }
>
> +/*
> + * Verify that a swap entry has been tagged with SWAP_HAS_CACHE
> + */
> +bool swap_has_cache(struct swap_info_struct *si, swp_entry_t entry)
> +{
> +	pgoff_t offset = swp_offset(entry);
> +	struct swap_cluster_info *ci;
> +	bool has_cache;
> +
> +	ci = lock_cluster_or_swap_info(si, offset);
> +	has_cache = !!(si->swap_map[offset] & SWAP_HAS_CACHE);

I think you also need to check swap_count here: if an entry was just
freed or loaded into the slot cache, it will also have SWAP_HAS_CACHE
set.
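Something like the sketch below is what I have in mind. This is
untested and only illustrative, reusing the swap_count() helper and
unlock_cluster_or_swap_info() that already exist in mm/swapfile.c:

bool swap_has_cache(struct swap_info_struct *si, swp_entry_t entry)
{
	pgoff_t offset = swp_offset(entry);
	struct swap_cluster_info *ci;
	unsigned char count;
	bool has_cache;

	ci = lock_cluster_or_swap_info(si, offset);
	count = si->swap_map[offset];
	/*
	 * Require a non-zero swap count in addition to SWAP_HAS_CACHE:
	 * a slot that was just freed into the slot cache keeps
	 * SWAP_HAS_CACHE set even though no folio will ever appear in
	 * the swap cache for it.
	 */
	has_cache = swap_count(count) && (count & SWAP_HAS_CACHE);
	unlock_cluster_or_swap_info(si, ci);

	return has_cache;
}

Even with that, the result is only a hint, see below.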
I have a very similar function in another series of mine (see
__swap_has_cache):

https://lore.kernel.org/all/20240326185032.72159-10-ryncsn@gmail.com/

The situation there is different from this patch, though.

But this check is not reliable in either patch: having SWAP_HAS_CACHE
set doesn't mean the folio is in the swap cache, and even if it is, it
might get freed very soon. So you need to make sure the later checks
keep the final result correct. e.g. if swap_has_cache() returns true,
then the swap cache entry is freed, and skip_if_exists is set,
__read_swap_cache_async() will return NULL for an entry that it should
have been able to allocate and cache. Could this be a problem (for
example, causing zswap writeback to fail with ENOMEM due to
readahead)? See the sketch at the end of this mail.

Also, the race window you are trying to avoid seems to be very short
and rare. I'm not sure the whole idea is worth it and actually affects
performance in a positive way; do you have any data on that?
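To spell out the interleaving I described above, here is the patched
loop condensed by me to the relevant lines (a sketch, not the exact
upstream code; the labels are the ones from your patch):

for (;;) {
	folio = filemap_get_folio(swap_address_space(entry),
				  swp_offset(entry));
	if (!IS_ERR(folio))
		return folio;	/* the cache hit your patch hopes for */

	/*
	 * Race window: SWAP_HAS_CACHE is observed here, but the owner
	 * can delete the folio from the swap cache right afterwards,
	 * so the lookup above will keep failing too.
	 */
	if (swap_has_cache(si, entry))
		goto skip_alloc;

	/* ... alloc_pages_mpol() + swapcache_prepare() ... */

skip_alloc:
	/*
	 * With skip_if_exists set (e.g. zswap writeback) we give up
	 * and return NULL here, even though the entry may no longer
	 * be cached and we could have allocated and cached it.
	 */
	if (skip_if_exists)
		goto fail_put_swap;

	schedule_timeout_uninterruptible(1);
}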