From: "Huang, Ying" <ying.huang@intel.com>
To: Ryan Roberts
Cc: Andrew Morton, David Hildenbrand, Matthew Wilcox, Gao Xiang, Yu Zhao,
	Yang Shi, Michal Hocko, Kefeng Wang, linux-mm@kvack.org
Subject: Re: [PATCH v2 2/2] mm: swap: Swap-out small-sized THP without splitting
In-Reply-To: <20231017161302.2518826-3-ryan.roberts@arm.com> (Ryan Roberts's
	message of "Tue, 17 Oct 2023 17:13:02 +0100")
References: <20231017161302.2518826-1-ryan.roberts@arm.com>
	<20231017161302.2518826-3-ryan.roberts@arm.com>
Date: Wed, 18 Oct 2023 14:55:06 +0800
Message-ID: <87r0ls773p.fsf@yhuang6-desk2.ccr.corp.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13)
MIME-Version: 1.0
Content-Type: text/plain; charset=ascii

Ryan Roberts writes:

> The upcoming anonymous small-sized THP feature enables performance
> improvements by allocating large folios for anonymous memory. However,
> I've observed that on an arm64 system running a parallel workload (e.g.
> kernel compilation) across many cores, under high memory pressure, the
> speed regresses. This is due to bottlenecking on the increased number
> of TLBIs added due to all the extra folio splitting.
>
> Therefore, solve this regression by adding support for swapping out
> small-sized THP without needing to split the folio, just like is
> already done for PMD-sized THP. This change only applies when
> CONFIG_THP_SWAP is enabled, and when the swap backing store is a
> non-rotating block device. These are the same constraints as for the
> existing PMD-sized THP swap-out support.
>
> Note that no attempt is made to swap-in THP here - this is still done
> page-by-page, like for PMD-sized THP.
>
> The main change here is to improve the swap entry allocator so that it
> can allocate any power-of-2 number of contiguous entries between
> [4, (1 << PMD_ORDER)] (THP cannot support order-1 folios). This is done
> by allocating a cluster for each distinct order and allocating
> sequentially from it until the cluster is full. This ensures that we
> don't need to search the map and we get no fragmentation due to
> alignment padding for different orders in the cluster. If there is no
> current cluster for a given order, we attempt to allocate a free
> cluster from the list. If there are no free clusters, we fail the
> allocation and the caller falls back to splitting the folio and
> allocating individual entries (as per the existing PMD-sized THP
> fallback).
>
> The per-order current clusters are maintained per-cpu using the
> existing percpu_cluster infrastructure. This is done to avoid
> interleaving pages from different tasks, which would prevent IO being
> batched. This is already done for the order-0 allocations, so we
> follow the same pattern.
>
> As far as I can tell, this should not cause any extra fragmentation
> concerns, given how similar it is to the existing PMD-sized THP
> allocation mechanism. There could be up to (PMD_ORDER-2) * nr_cpus
> clusters in concurrent use though, which in a pathological case
> (cluster set aside for every order for every cpu and only one huge
> entry allocated from it) would tie up ~12MiB of unused swap entries
> for these high orders (assuming PMD_ORDER=9). In practice, the number
> of orders in use will be small and the amount of swap space reserved
> is very small compared to a typical swap file.
>
> Note that PMD_ORDER is not a compile-time constant on powerpc, so we
> have to allocate the large_next[] array at runtime.
>
> I've run the tests on Ampere Altra (arm64), set up with a 35G block
> ram device as the swap device and from inside a memcg limited to 40G
> memory. I've then run `usemem` from vm-scalability with 70 processes
> (each has its own core), each allocating and writing 1G of memory.
> I've repeated everything 5 times and taken the mean and stdev:
>
> Mean Performance Improvement vs 4K/baseline
>
> | alloc size | baseline           | + this series |
> |            | v6.6-rc4+anonfolio |               |
> |:-----------|-------------------:|--------------:|
> | 4K Page    |               0.0% |          1.1% |
> | 64K THP    |             -44.1% |          0.9% |
> | 2M THP     |              56.0% |         56.4% |
>
> So with this change, the regression for 64K swap performance goes
> away. Both 4K and 64K benchmarks are now bottlenecked on TLBI
> performance from try_to_unmap_flush_dirty(), on arm64 at least. When
> using fewer cpus in the test, I see up to 2x performance of 64K THP
> swapping compared to 4K.
>
> Signed-off-by: Ryan Roberts
> ---
>  include/linux/swap.h |  6 ++++
>  mm/swapfile.c        | 74 +++++++++++++++++++++++++++++++++++---------
>  mm/vmscan.c          | 10 +++---
>  3 files changed, 71 insertions(+), 19 deletions(-)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index a073366a227c..35cbbe6509a9 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -268,6 +268,12 @@ struct swap_cluster_info {
>  struct percpu_cluster {
>  	struct swap_cluster_info index; /* Current cluster index */
>  	unsigned int next; /* Likely next allocation offset */
> +	unsigned int large_next[];	/*
> +					 * next free offset within current
> +					 * allocation cluster for large folios,
> +					 * or UINT_MAX if no current cluster.
> +					 * Index is (order - 1).
> +					 */
>  };
>
>  struct swap_cluster_list {
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index b83ad77e04c0..625964e53c22 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -987,35 +987,70 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
>  	return n_ret;
>  }
>
> -static int swap_alloc_cluster(struct swap_info_struct *si, swp_entry_t *slot)
> +static int swap_alloc_large(struct swap_info_struct *si, swp_entry_t *slot,
> +			    unsigned int nr_pages)

This looks hacky. IMO, we should put the allocation logic inside the
percpu_cluster framework. If the percpu_cluster framework doesn't work
for you, refactor it first.
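For example, something along the following lines (a rough, untested
sketch only to show the direction; the helper name and how it would be
hooked into scan_swap_map_slots() are made up, and the swap_map marking
is left to the caller, as for the existing order-0 per-cpu cluster
path):

static bool swap_try_alloc_large_cluster(struct swap_info_struct *si,
					 unsigned long *offset,
					 unsigned int nr_pages)
{
	/* Per-order cursor, same indexing as large_next[] in the patch. */
	struct percpu_cluster *cluster = this_cpu_ptr(si->percpu_cluster);
	unsigned int next = cluster->large_next[ilog2(nr_pages) - 2];
	struct swap_cluster_info *ci;
	unsigned long idx;

	if (next == UINT_MAX) {
		/* No current cluster for this order, take a free one. */
		if (cluster_list_empty(&si->free_clusters))
			return false;
		idx = cluster_list_first(&si->free_clusters);
		next = idx * SWAPFILE_CLUSTER;
		ci = lock_cluster(si, next);
		alloc_cluster(si, idx);
		cluster_set_count(ci, SWAPFILE_CLUSTER);
		unlock_cluster(ci);
	}

	*offset = next;

	/* Advance the cursor; drop the cluster once it is exhausted. */
	next += nr_pages;
	cluster->large_next[ilog2(nr_pages) - 2] =
		next % SWAPFILE_CLUSTER ? next : UINT_MAX;
	return true;
}

The point is just that the per-order cursor would live next to the
existing order-0 per-cpu cluster logic instead of in a separate path.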
>  {
> +	int order_idx;
>  	unsigned long idx;
>  	struct swap_cluster_info *ci;
> +	struct percpu_cluster *cluster;
>  	unsigned long offset;
>
>  	/*
>  	 * Should not even be attempting cluster allocations when huge
>  	 * page swap is disabled. Warn and fail the allocation.
>  	 */
> -	if (!IS_ENABLED(CONFIG_THP_SWAP)) {
> +	if (!IS_ENABLED(CONFIG_THP_SWAP) ||
> +	    nr_pages < 4 || nr_pages > SWAPFILE_CLUSTER ||
> +	    !is_power_of_2(nr_pages)) {
>  		VM_WARN_ON_ONCE(1);
>  		return 0;
>  	}
>
> -	if (cluster_list_empty(&si->free_clusters))
> +	/*
> +	 * Not using clusters so unable to allocate large entries.
> +	 */
> +	if (!si->cluster_info)
>  		return 0;
>
> -	idx = cluster_list_first(&si->free_clusters);
> -	offset = idx * SWAPFILE_CLUSTER;
> -	ci = lock_cluster(si, offset);
> -	alloc_cluster(si, idx);
> -	cluster_set_count(ci, SWAPFILE_CLUSTER);
> +	order_idx = ilog2(nr_pages) - 2;
> +	cluster = this_cpu_ptr(si->percpu_cluster);
> +	offset = cluster->large_next[order_idx];
> +
> +	if (offset == UINT_MAX) {
> +		if (cluster_list_empty(&si->free_clusters))
> +			return 0;
> +
> +		idx = cluster_list_first(&si->free_clusters);
> +		offset = idx * SWAPFILE_CLUSTER;
>
> -	memset(si->swap_map + offset, SWAP_HAS_CACHE, SWAPFILE_CLUSTER);
> +		ci = lock_cluster(si, offset);
> +		alloc_cluster(si, idx);
> +		cluster_set_count(ci, SWAPFILE_CLUSTER);
> +
> +		/*
> +		 * If scan_swap_map_slots() can't find a free cluster, it will
> +		 * check si->swap_map directly. To make sure this standby
> +		 * cluster isn't taken by scan_swap_map_slots(), mark the swap
> +		 * entries bad (occupied). (same approach as discard).
> +		 */
> +		memset(si->swap_map + offset + nr_pages, SWAP_MAP_BAD,
> +			SWAPFILE_CLUSTER - nr_pages);

There's an issue with this solution. If the free space of the swap
device runs low, it's possible that

- some clusters are put in the percpu_cluster of some CPUs and the
  swap entries there are marked as used,
- there are no free swap entries elsewhere,
- nr_swap_pages isn't 0.

So we will still scan the LRU, but swap allocation fails, although
there is still free swap space. I think that we should follow the
method we used for the original percpu_cluster: if all free swap
entries are in a percpu_cluster, we start to allocate from the
percpu_cluster.

> +	} else {
> +		idx = offset / SWAPFILE_CLUSTER;
> +		ci = lock_cluster(si, offset);
> +	}
> +
> +	memset(si->swap_map + offset, SWAP_HAS_CACHE, nr_pages);
>  	unlock_cluster(ci);
> -	swap_range_alloc(si, offset, SWAPFILE_CLUSTER);
> +	swap_range_alloc(si, offset, nr_pages);
>  	*slot = swp_entry(si->type, offset);
>
> +	offset += nr_pages;
> +	if (idx != offset / SWAPFILE_CLUSTER)
> +		offset = UINT_MAX;
> +	cluster->large_next[order_idx] = offset;
> +
>  	return 1;
>  }
>

[snip]

--
Best Regards,
Huang, Ying