a="8289709" X-IronPort-AV: E=Sophos;i="6.07,191,1708416000"; d="scan'208";a="8289709" Received: from fmviesa005.fm.intel.com ([10.60.135.145]) by orvoesa108.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Apr 2024 11:55:46 -0700 X-CSE-ConnectionGUID: mqg2BM3xSUeGkesDmyF6sQ== X-CSE-MsgGUID: V8yCf90lSsqaGZINY2GqeQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.07,191,1708416000"; d="scan'208";a="25137609" Received: from sgollapu-mobl.amr.corp.intel.com (HELO [10.209.38.205]) ([10.209.38.205]) by fmviesa005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Apr 2024 11:55:44 -0700 Message-ID: <801a0a7015137a7ef573e2cdd69dca019f328da7.camel@linux.intel.com> Subject: Re: [PATCH v2] mm: swap: prejudgement swap_has_cache to avoid page allocation From: Tim Chen To: Zhaoyu Liu , akpm@linux-foundation.org, ryncsn@gmail.com, nphamcs@gmail.com Cc: ying.huang@intel.com, songmuchun@bytedance.com, david@redhat.com, chrisl@kernel.org, guo.ziliang@zte.com.cn, yosryahmed@google.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org Date: Wed, 10 Apr 2024 11:55:43 -0700 In-Reply-To: <20240408121439.GA252652@bytedance> References: <20240408121439.GA252652@bytedance> Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable User-Agent: Evolution 3.44.4 (3.44.4-2.fc36) MIME-Version: 1.0 X-Stat-Signature: b1cge3kmhguf8yyqm48wmhsipurphx8q X-Rspamd-Server: rspam10 X-Rspamd-Queue-Id: 5021E20009 X-Rspam-User: X-HE-Tag: 1712775348-486134 X-HE-Meta: U2FsdGVkX194arhl2ndPsSn/CQB+kWw42uIupCf4Bb+/bdqiCJOFrusvCAy5nvjwX2GATwbvwqXuMKOohi4nvOaUPUkQoM7TTXonIrJIqvocAY3qa/Z3SKKq1SzaliJd6W4oe0WmxaFfBGz4mGP0RnSjNbSPA5z90VGWPmWACbWwZRJ1v9PIuNdZZ6P+3SPH0ibnN3P0IjjWgdTD81Wjdj7T9KitN7L7sTWqBInQMvqskRXTm9q5oZdXhCRQj116DdUFUJxckl7cQqxMg2OrgvFZ5TqKceS+y5dalmhIY/9KA4p9qb9O+w9/A58qV7pPAogJJWxAle4gTMldDhvQSYfDmsinyabqqhgWCxWXsPpqI6zp9DAClBBVoqd7LYNzqWMHXpHEpsYDpVaaWoRm8lfd9oX3Lm7Dmkjrhq4fzR99gjLx2059JPzBxzVcE1NmKi75hxZ/40pOesaf6Oq9uJwMCX6etWuI7ddYne37yNCz/tRFAqehpQma3hh9jwOSJTfYEh3Ndwm2WUoWO0Y+QsfBYS7SITMF+YoDGSAbVC2orLAddjsnbe73KL4yeJeRdIe7kg1ACu4b4WFroT7XIy7H+1ohb6OZTmEOnB8esKNq/udO/dtCGIMcsaE8HoVW771Y7Eb8mJ1S6VoGoJVSYRKqeAhHz/r6wdZTYoU8cwNRb/coM4MWWJBh3bNok7EJ5ioNJGag+S9m/mB1KZfVAvgIvVkuErQCEUjx7eS8C1v/nsyZQ5Tf34r/rk0p9UszDx8cJ9I4OzKiGbHA1pGNq1e0fK7VOJ2YSTTgbZhpF6fwPtkgfOGf57EPT85CIEahq7fdgRxP35rm5QO9f2Z30lJwfYcl/mr3+fep4ufzvtO1cLMJMKIqdRDOtWlhvhWDSa9sAibPsktc/qouZAWDvRCkvHi8gyyHtuYMpRZX4CEDgOngWCLnfkm0kXKZyiQMZyJNy7wtxvFUNsKQc1A qFVUg0Vr 05IFNv7grhMqqnLgfI4fg6t/VqDAf1X0BLyepQcOHorQinv+SNiZFVdyQkIPMvgM3mdrMRvdbE/OlpEhbbSO7VpKkMGM5vRtPC+OEpV8Js/DueUtB6FpT5I/+F47PBMCJkLb99FY06BkBCllIDbpJxYouAYGL784G5GlxwvoI6VUQAcVWlJtPhl7hZyWSKf1/+KKZyPV6DPse0zPg+YTHbOQeMKLhyokANnvGRkLObrF/SNY= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: On Mon, 2024-04-08 at 20:14 +0800, Zhaoyu Liu wrote: > Based on qemu arm64 - latest kernel + 100M memory + 1024M swapfile. > Create 1G anon mmap and set it to shared, and has two processes > randomly access the shared memory. When they are racing on swap cache, > on average, each "alloc_pages_mpol + swapcache_prepare + folio_put" > took about 1475 us. >=20 > So skip page allocation if SWAP_HAS_CACHE was set, just > schedule_timeout_uninterruptible and continue to acquire page > via filemap_get_folio() from swap cache, to speedup > __read_swap_cache_async. 
>
> Signed-off-by: Zhaoyu Liu
> ---
> Changes in v2:
>   - Fix the patch format and rebase to latest linux-next.
> ---
>  include/linux/swap.h |  6 ++++++
>  mm/swap_state.c      | 10 ++++++++++
>  mm/swapfile.c        | 15 +++++++++++++++
>  3 files changed, 31 insertions(+)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 11c53692f65f..a374070e05a7 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -492,6 +492,7 @@ extern sector_t swapdev_block(int, pgoff_t);
>  extern int __swap_count(swp_entry_t entry);
>  extern int swap_swapcount(struct swap_info_struct *si, swp_entry_t entry);
>  extern int swp_swapcount(swp_entry_t entry);
> +extern bool swap_has_cache(struct swap_info_struct *si, swp_entry_t entry);
>  struct swap_info_struct *swp_swap_info(swp_entry_t entry);
>  struct backing_dev_info;
>  extern int init_swap_address_space(unsigned int type, unsigned long nr_pages);
> @@ -583,6 +584,11 @@ static inline int swp_swapcount(swp_entry_t entry)
>  	return 0;
>  }
>  
> +static inline bool swap_has_cache(struct swap_info_struct *si, swp_entry_t entry)
> +{
> +	return false;
> +}
> +
>  static inline swp_entry_t folio_alloc_swap(struct folio *folio)
>  {
>  	swp_entry_t entry;
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 642c30d8376c..f117fbf18b59 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -462,6 +462,15 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
>  		if (!swap_swapcount(si, entry) && swap_slot_cache_enabled)
>  			goto fail_put_swap;
>  
> +		/*
> +		 * Skip the page allocation if SWAP_HAS_CACHE is set:
> +		 * just schedule_timeout_uninterruptible() and continue
> +		 * to acquire the page via filemap_get_folio() from the
> +		 * swap cache, to speed up __read_swap_cache_async().
> +		 */
> +		if (swap_has_cache(si, entry))
> +			goto skip_alloc;
> +

I think most of the cases where a page already exists will be caught by
filemap_get_folio(). The cases caught by this extra check should be the
ones where we race between updating the swap cache and the async read,
which may not happen that often. So please verify with a benchmark that
this extra check, with its own overhead, actually buys us anything. (A
toy model of the race follows the patch below.)

Tim

>  		/*
>  		 * Get a new folio to read into from swap. Allocate it now,
>  		 * before marking swap_map SWAP_HAS_CACHE, when -EEXIST will
> @@ -483,6 +492,7 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
>  		if (err != -EEXIST)
>  			goto fail_put_swap;
>  
> +skip_alloc:
>  		/*
>  		 * Protect against a recursive call to __read_swap_cache_async()
>  		 * on the same entry waiting forever here because SWAP_HAS_CACHE
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 3ee8957a46e6..b016ebc43b0d 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1511,6 +1511,21 @@ int swp_swapcount(swp_entry_t entry)
>  	return count;
>  }
>  
> +/*
> + * Verify that a swap entry has been tagged with SWAP_HAS_CACHE.
> + */
> +bool swap_has_cache(struct swap_info_struct *si, swp_entry_t entry)
> +{
> +	pgoff_t offset = swp_offset(entry);
> +	struct swap_cluster_info *ci;
> +	bool has_cache;
> +
> +	ci = lock_cluster_or_swap_info(si, offset);
> +	has_cache = !!(si->swap_map[offset] & SWAP_HAS_CACHE);
> +	unlock_cluster_or_swap_info(si, ci);
> +	return has_cache;
> +}
> +
>  static bool swap_page_trans_huge_swapped(struct swap_info_struct *si,
>  					 swp_entry_t entry,
>  					 unsigned int nr_pages)
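[ To make the race concrete, below is a toy userspace model of the two
paths: the baseline "allocate, then try to claim the entry" versus the
proposed "peek at the flag before allocating". It is purely
illustrative, not kernel code: an atomic int stands in for the
SWAP_HAS_CACHE bit, malloc() stands in for alloc_pages_mpol(), and the
count it prints says nothing about the real kernel overheads that the
benchmark request above is about. Build with: cc -O2 -pthread model.c ]

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

static atomic_int has_cache;		/* models the SWAP_HAS_CACHE bit */
static atomic_long wasted_allocs;	/* models folio_put() after a lost race */
static int peek_first;			/* 1 = patched behavior: check flag before alloc */

static void *racer(void *unused)
{
	(void)unused;
	for (int i = 0; i < 1000000; i++) {
		/* Patched path: skip the allocation if the flag is already set. */
		if (peek_first && atomic_load(&has_cache))
			continue;	/* would retry filemap_get_folio() */

		void *page = malloc(4096);	/* models alloc_pages_mpol() */
		int expected = 0;
		/* models swapcache_prepare(): claim the entry or lose the race */
		if (!atomic_compare_exchange_strong(&has_cache, &expected, 1)) {
			atomic_fetch_add(&wasted_allocs, 1);
			free(page);	/* models folio_put(): allocation was wasted */
			continue;
		}
		free(page);			/* "use" the page, then release it */
		atomic_store(&has_cache, 0);	/* entry leaves the swap cache */
	}
	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t a, b;

	(void)argv;
	peek_first = argc > 1;	/* any argument selects the patched behavior */
	pthread_create(&a, NULL, racer, NULL);
	pthread_create(&b, NULL, racer, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("wasted allocations: %ld\n", atomic_load(&wasted_allocs));
	return 0;
}

[ Running it with and without an argument shows the shortcut trading
wasted allocations for an extra flag check on every pass, which is the
cost/benefit question the benchmark needs to answer. ]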