From: "Huang, Ying" <ying.huang@intel.com>
To: Kairui Song
Cc: linux-mm@kvack.org, Kairui Song, Andrew Morton, Matthew Wilcox,
 Chris Li, Barry Song, Ryan Roberts, Neil Brown, Minchan Kim,
 David Hildenbrand, Hugh Dickins, Yosry Ahmed,
 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v5 12/12] mm/swap: reduce swap cache search space
In-Reply-To: <20240510114747.21548-13-ryncsn@gmail.com> (Kairui Song's
 message of "Fri, 10 May 2024 19:47:47 +0800")
References: <20240510114747.21548-1-ryncsn@gmail.com>
 <20240510114747.21548-13-ryncsn@gmail.com>
Date: Sat, 11 May 2024 13:54:40 +0800
Message-ID: <87jzk0hp0f.fsf@yhuang6-desk2.ccr.corp.intel.com>

Kairui Song writes:

> From: Kairui Song
>
> Currently we use one swap_address_space for every 64M chunk to reduce
> lock contention; this is like having a set of smaller swap files inside
> one swap device. But when doing a swap cache lookup or insert, we are
> still using the offset within the whole large swap device. This is OK
> for correctness, as the offset (key) is unique.
>
> But the XArray is specially optimized for small indexes: it creates the
> radix tree levels lazily, just enough to fit the largest key stored in
> one XArray. So we are wasting tree nodes unnecessarily.
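
(For reference, the level arithmetic: one 64M space holds 64M / 4K = 2^14
page entries, and each XArray node consumes XA_CHUNK_SHIFT = 6 bits of the
index, so ceil(14 / 6) = 3 levels suffice -- assuming the default
CONFIG_BASE_SMALL=n node size of 64 slots.)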
>
> For a 64M chunk it should take at most 3 levels to contain everything.
> But if we are using the offset from the whole swap device, the offset
> (key) value will be way beyond 64M, and so will the tree level.
>
> Optimize this by using a new helper swap_cache_index to get a swap
> entry's unique offset in its own 64M swap_address_space.
>
> I see a ~1% performance gain in benchmarks and an actual workload under
> high memory pressure.
>
> Test with `time memhog 128G` inside an 8G memcg using 128G swap (ramdisk
> with SWP_SYNCHRONOUS_IO dropped, tested 3 times, results are stable. The
> test result is similar but the improvement is smaller if
> SWP_SYNCHRONOUS_IO is enabled, as the swap-out path can never skip the
> swap cache):
>
> Before:
> 6.07user 250.74system 4:17.26elapsed 99%CPU (0avgtext+0avgdata 8373376maxresident)k
> 0inputs+0outputs (55major+33555018minor)pagefaults 0swaps
>
> After (1.8% faster):
> 6.08user 246.09system 4:12.58elapsed 99%CPU (0avgtext+0avgdata 8373248maxresident)k
> 0inputs+0outputs (54major+33555027minor)pagefaults 0swaps
>
> Similar result with MySQL and sysbench using swap:
> Before:
> 94055.61 qps
>
> After (0.8% faster):
> 94834.91 qps
>
> Radix tree slab usage is also very slightly lower.
>
> Signed-off-by: Kairui Song

LGTM, Thanks!

Reviewed-by: "Huang, Ying"

> ---
>  mm/huge_memory.c |  2 +-
>  mm/memcontrol.c  |  2 +-
>  mm/mincore.c     |  2 +-
>  mm/shmem.c       |  2 +-
>  mm/swap.h        | 15 +++++++++++++++
>  mm/swap_state.c  | 17 +++++++++--------
>  mm/swapfile.c    |  6 +++---
>  7 files changed, 31 insertions(+), 15 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 317de2afd371..fcc0e86a2589 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2838,7 +2838,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
>  		split_page_memcg(head, order, new_order);
>
>  	if (folio_test_anon(folio) && folio_test_swapcache(folio)) {
> -		offset = swp_offset(folio->swap);
> +		offset = swap_cache_index(folio->swap);
>  		swap_cache = swap_address_space(folio->swap);
>  		xa_lock(&swap_cache->i_pages);
>  	}
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index d127c9c5fabf..024aeb64d0be 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -6153,7 +6153,7 @@ static struct page *mc_handle_swap_pte(struct vm_area_struct *vma,
>  	 * Because swap_cache_get_folio() updates some statistics counter,
>  	 * we call find_get_page() with swapper_space directly.
>  	 */
> -	page = find_get_page(swap_address_space(ent), swp_offset(ent));
> +	page = find_get_page(swap_address_space(ent), swap_cache_index(ent));
>  	entry->val = ent.val;
>
>  	return page;
> diff --git a/mm/mincore.c b/mm/mincore.c
> index dad3622cc963..e31cf1bde614 100644
> --- a/mm/mincore.c
> +++ b/mm/mincore.c
> @@ -139,7 +139,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>  		} else {
>  #ifdef CONFIG_SWAP
>  			*vec = mincore_page(swap_address_space(entry),
> -					swp_offset(entry));
> +					swap_cache_index(entry));
>  #else
>  			WARN_ON(1);
>  			*vec = 1;
> diff --git a/mm/shmem.c b/mm/shmem.c
> index fa2a0ed97507..326315c12feb 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1756,7 +1756,7 @@ static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
>
>  	old = *foliop;
>  	entry = old->swap;
> -	swap_index = swp_offset(entry);
> +	swap_index = swap_cache_index(entry);
>  	swap_mapping = swap_address_space(entry);
>
>  	/*
> diff --git a/mm/swap.h b/mm/swap.h
> index 82023ab93205..2c0e96272d49 100644
> --- a/mm/swap.h
> +++ b/mm/swap.h
> @@ -27,6 +27,7 @@ void __swap_writepage(struct folio *folio, struct writeback_control *wbc);
>  /* One swap address space for each 64M swap space */
>  #define SWAP_ADDRESS_SPACE_SHIFT	14
>  #define SWAP_ADDRESS_SPACE_PAGES	(1 << SWAP_ADDRESS_SPACE_SHIFT)
> +#define SWAP_ADDRESS_SPACE_MASK		(SWAP_ADDRESS_SPACE_PAGES - 1)
>  extern struct address_space *swapper_spaces[];
>  #define swap_address_space(entry)			    \
>  	(&swapper_spaces[swp_type(entry)][swp_offset(entry) \
> @@ -40,6 +41,15 @@ static inline loff_t swap_dev_pos(swp_entry_t entry)
>  	return ((loff_t)swp_offset(entry)) << PAGE_SHIFT;
>  }
>
> +/*
> + * Return the swap cache index of the swap entry.
> + */
> +static inline pgoff_t swap_cache_index(swp_entry_t entry)
> +{
> +	BUILD_BUG_ON((SWP_OFFSET_MASK | SWAP_ADDRESS_SPACE_MASK) != SWP_OFFSET_MASK);
> +	return swp_offset(entry) & SWAP_ADDRESS_SPACE_MASK;
> +}
> +
>  void show_swap_cache_info(void);
>  bool add_to_swap(struct folio *folio);
>  void *get_shadow_from_swap_cache(swp_entry_t entry);
> @@ -86,6 +96,11 @@ static inline struct address_space *swap_address_space(swp_entry_t entry)
>  	return NULL;
>  }
>
> +static inline pgoff_t swap_cache_index(swp_entry_t entry)
> +{
> +	return 0;
> +}
> +
>  static inline void show_swap_cache_info(void)
>  {
>  }
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 642c30d8376c..6e86c759dc1d 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -72,7 +72,7 @@ void show_swap_cache_info(void)
>  void *get_shadow_from_swap_cache(swp_entry_t entry)
>  {
>  	struct address_space *address_space = swap_address_space(entry);
> -	pgoff_t idx = swp_offset(entry);
> +	pgoff_t idx = swap_cache_index(entry);
>  	void *shadow;
>
>  	shadow = xa_load(&address_space->i_pages, idx);
> @@ -89,7 +89,7 @@ int add_to_swap_cache(struct folio *folio, swp_entry_t entry,
>  			gfp_t gfp, void **shadowp)
>  {
>  	struct address_space *address_space = swap_address_space(entry);
> -	pgoff_t idx = swp_offset(entry);
> +	pgoff_t idx = swap_cache_index(entry);
>  	XA_STATE_ORDER(xas, &address_space->i_pages, idx, folio_order(folio));
>  	unsigned long i, nr = folio_nr_pages(folio);
>  	void *old;
> @@ -144,7 +144,7 @@ void __delete_from_swap_cache(struct folio *folio,
>  	struct address_space *address_space = swap_address_space(entry);
>  	int i;
>  	long nr = folio_nr_pages(folio);
> -	pgoff_t idx = swp_offset(entry);
> +	pgoff_t idx = swap_cache_index(entry);
>  	XA_STATE(xas, &address_space->i_pages, idx);
>
>  	xas_set_update(&xas, workingset_update_node);
> @@ -253,13 +253,14 @@ void clear_shadow_from_swap_cache(int type, unsigned long begin,
>
>  	for (;;) {
>  		swp_entry_t entry = swp_entry(type, curr);
> +		unsigned long index = curr & SWAP_ADDRESS_SPACE_MASK;
>  		struct address_space *address_space = swap_address_space(entry);
> -		XA_STATE(xas, &address_space->i_pages, curr);
> +		XA_STATE(xas, &address_space->i_pages, index);
>
>  		xas_set_update(&xas, workingset_update_node);
>
>  		xa_lock_irq(&address_space->i_pages);
> -		xas_for_each(&xas, old, end) {
> +		xas_for_each(&xas, old, min(index + (end - curr), SWAP_ADDRESS_SPACE_PAGES)) {
>  			if (!xa_is_value(old))
>  				continue;
>  			xas_store(&xas, NULL);
> @@ -350,7 +351,7 @@ struct folio *swap_cache_get_folio(swp_entry_t entry,
>  {
>  	struct folio *folio;
>
> -	folio = filemap_get_folio(swap_address_space(entry), swp_offset(entry));
> +	folio = filemap_get_folio(swap_address_space(entry), swap_cache_index(entry));
>  	if (!IS_ERR(folio)) {
>  		bool vma_ra = swap_use_vma_readahead();
>  		bool readahead;
> @@ -420,7 +421,7 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
>  	si = get_swap_device(swp);
>  	if (!si)
>  		return ERR_PTR(-ENOENT);
> -	index = swp_offset(swp);
> +	index = swap_cache_index(swp);
>  	folio = filemap_get_folio(swap_address_space(swp), index);
>  	put_swap_device(si);
>  	return folio;
> @@ -447,7 +448,7 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
>  	 * that would confuse statistics.
>  	 */
>  	folio = filemap_get_folio(swap_address_space(entry),
> -				  swp_offset(entry));
> +				  swap_cache_index(entry));
>  	if (!IS_ERR(folio))
>  		goto got_folio;
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 0b0ae6e8c764..4f0e8b2ac8aa 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -142,7 +142,7 @@ static int __try_to_reclaim_swap(struct swap_info_struct *si,
>  	struct folio *folio;
>  	int ret = 0;
>
> -	folio = filemap_get_folio(swap_address_space(entry), offset);
> +	folio = filemap_get_folio(swap_address_space(entry), swap_cache_index(entry));
>  	if (IS_ERR(folio))
>  		return 0;
>  	/*
> @@ -2158,7 +2158,7 @@ static int try_to_unuse(unsigned int type)
>  	       (i = find_next_to_unuse(si, i)) != 0) {
>
>  		entry = swp_entry(type, i);
> -		folio = filemap_get_folio(swap_address_space(entry), i);
> +		folio = filemap_get_folio(swap_address_space(entry), swap_cache_index(entry));
>  		if (IS_ERR(folio))
>  			continue;
>
> @@ -3476,7 +3476,7 @@ EXPORT_SYMBOL_GPL(swapcache_mapping);
>
>  pgoff_t __folio_swap_cache_index(struct folio *folio)
>  {
> -	return swp_offset(folio->swap);
> +	return swap_cache_index(folio->swap);
>  }
>  EXPORT_SYMBOL_GPL(__folio_swap_cache_index);
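
For readers following along, here is a minimal userspace sketch of what the
new indexing computes (simplified stand-ins for the kernel definitions:
swp_entry_t and the BUILD_BUG_ON check are omitted, and a raw offset is
used in place of a real swap entry):

#include <assert.h>
#include <stdio.h>

/* Simplified stand-ins for the definitions added to mm/swap.h */
#define SWAP_ADDRESS_SPACE_SHIFT 14
#define SWAP_ADDRESS_SPACE_PAGES (1UL << SWAP_ADDRESS_SPACE_SHIFT)
#define SWAP_ADDRESS_SPACE_MASK  (SWAP_ADDRESS_SPACE_PAGES - 1)

/* Mirrors swap_cache_index(): keep only the low 14 bits, i.e. the
 * entry's position inside its own 64M swap_address_space. */
static unsigned long swap_cache_index(unsigned long offset)
{
	return offset & SWAP_ADDRESS_SPACE_MASK;
}

int main(void)
{
	unsigned long offset = 0x12345;	/* offset in the whole swap device */

	/* 0x12345 >> 14 = 4 (fifth address space);
	 * 0x12345 & 0x3fff = 0x2345 (index inside it) */
	printf("space %lu, index 0x%lx\n",
	       offset >> SWAP_ADDRESS_SPACE_SHIFT, swap_cache_index(offset));

	/* The cache key now never exceeds 2^14, so each XArray needs at
	 * most 3 levels regardless of the total swap device size. */
	assert(swap_cache_index(offset) < SWAP_ADDRESS_SPACE_PAGES);
	return 0;
}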