Message-ID: <2fbc83bf-2e51-40fa-8865-499911ba8102@arm.com>
Date: Tue, 12 Mar 2024 13:56:58 +0000
Subject: Re: [PATCH v4 0/6] Swap-out mTHP without splitting
From: Ryan Roberts <ryan.roberts@arm.com>
To: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton, David Hildenbrand, Matthew Wilcox, Gao Xiang, Yu Zhao,
 Yang Shi, Michal Hocko, Kefeng Wang, Barry Song <21cnbao@gmail.com>,
 Chris Li, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20240311150058.1122862-1-ryan.roberts@arm.com>
 <878r2n516c.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <28914585-80bd-4308-b3aa-dd0dbb2cb201@arm.com>
In-Reply-To: <28914585-80bd-4308-b3aa-dd0dbb2cb201@arm.com>

On 12/03/2024 08:49, Ryan Roberts wrote:
> On 12/03/2024 08:01, Huang, Ying wrote:
>> Ryan Roberts <ryan.roberts@arm.com> writes:
>>
>>> Hi All,
>>>
>>> This series adds support for swapping out multi-size THP (mTHP) without
>>> needing to first split the large folio via
>>> split_huge_page_to_list_to_order(). It closely follows the approach
>>> already used to swap out PMD-sized THP.
>>>
>>> There are a few reasons for swapping out mTHP without splitting:
>>>
>>> - Performance: It is expensive to split a large folio, and under extreme
>>>   memory pressure some workloads regressed in performance when using 64K
>>>   mTHP vs 4K small folios because of this extra cost in the swap-out
>>>   path. This series not only eliminates the regression but makes it
>>>   faster to swap out 64K mTHP than 4K small folios.
>>>
>>> - Memory fragmentation avoidance: If we can avoid splitting a large
>>>   folio, memory is less likely to become fragmented, making it easier
>>>   to re-allocate a large folio in future.
>>>
>>> - Performance: Enables a separate series [4] to swap in whole mTHPs,
>>>   which means we won't lose the TLB-efficiency benefits of mTHP once
>>>   the memory has been through a swap cycle.
>>>
>>> I've made what I thought was the smallest change possible, and as a
>>> result this approach is only employed when the swap is backed by a
>>> non-rotating block device (just as PMD-sized THP is supported today).
>>> Discussion against the RFC concluded that this is sufficient.
>>>
>>>
>>> Performance Testing
>>> ===================
>>>
>>> I've run some swap performance tests on an Ampere Altra VM (arm64) with
>>> 8 CPUs. The VM is set up with a 35G block ram device as the swap device
>>> and the test is run from inside a memcg limited to 40G memory. I've then
>>> run `usemem` from vm-scalability with 70 processes, each allocating and
>>> writing 1G of memory.
>>> I've repeated everything 6 times and taken the mean performance
>>> improvement relative to the 4K page baseline:
>>>
>>> | alloc size |           baseline | + this series |
>>> |            | v6.6-rc4+anonfolio |               |
>>> |:-----------|-------------------:|--------------:|
>>> | 4K Page    |               0.0% |          1.4% |
>>> | 64K THP    |             -14.6% |         44.2% |
>>> | 2M THP     |              87.4% |         97.7% |
>>>
>>> So with this change, the 64K swap performance goes from a 15% regression
>>> to a 44% improvement. 4K and 2M swap performance improves slightly too.
>>
>> I don't understand why the performance of 2M THP improves. The swap
>> entry allocation becomes a little slower. Can you provide some
>> perf-profile to root cause it?
>
> I didn't post the stdev, which is quite large (~10%), so that may explain
> some of it:
>
> | kernel   | mean_rel | std_rel |
> |:---------|---------:|--------:|
> | base-4K  |     0.0% |    5.5% |
> | base-64K |   -14.6% |    3.8% |
> | base-2M  |    87.4% |   10.6% |
> | v4-4K    |     1.4% |    3.7% |
> | v4-64K   |    44.2% |   11.8% |
> | v4-2M    |    97.7% |   13.3% |
>
> Regardless, I'll do some perf profiling and post results shortly.

I did a lot more runs (24 for each config) and averaged them to try to
remove the noise from the measurements. The 2M case is now only showing a
4% improvement, so I don't think the 2M improvement is real:

| kernel   | mean_rel | std_rel |
|:---------|---------:|--------:|
| base-4K  |     0.0% |    3.2% |
| base-64K |    -9.1% |   10.1% |
| base-2M  |    88.9% |    6.8% |
| v4-4K    |     0.5% |    3.1% |
| v4-64K   |    44.7% |    8.3% |
| v4-2M    |    93.3% |    7.8% |

Looking at the perf data, the only thing that sticks out is that a big
chunk of time is spent in contpte_convert(), called as a result of
try_to_unmap_one(). This is present in both the before and after configs.
contpte_convert() is an arm64 function that "unfolds" contpte mappings.
Essentially, the PMD is being split during shrink_folio_list() with
TTU_SPLIT_HUGE_PMD, meaning the THPs are PTE-mapped in contpte blocks.
Then we unmap each PTE one-by-one, which means the contpte block needs to
be unfolded. I think try_to_unmap_one() could potentially be optimized to
batch-unmap a contiguously mapped folio and avoid this unfold (see the
rough sketch below). But that would be an independent and separate piece
of work.
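
As an illustration only (nothing like this is in the series): if
try_to_unmap_one() knew that the next nr PTEs all map consecutive pages
of the same folio, it could clear them as one batch instead of one at a
time. unmap_folio_ptes_batched() is a made-up name, and this ignores
access/dirty bit collection, TLB flushing and partially mapped folios:

/*
 * Hypothetical helper for try_to_unmap_one(): clear @nr PTEs that map
 * consecutive pages of the same folio in one call, so the architecture
 * can see that the whole range is going away.
 */
static void unmap_folio_ptes_batched(struct mm_struct *mm,
				     unsigned long addr, pte_t *ptep,
				     unsigned int nr)
{
	unsigned int i;

	/*
	 * Generic version: still clears PTE by PTE, so no faster than
	 * the status quo. The win would come from an arm64 override
	 * that notices the whole contpte block is being torn down and
	 * skips the contpte_convert() unfold entirely.
	 */
	for (i = 0; i < nr; i++)
		ptep_get_and_clear(mm, addr + i * PAGE_SIZE, ptep + i);
}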
>
>>
>> --
>> Best Regards,
>> Huang, Ying
>>
>>> This test also acts as a good stress test for swap and, more generally,
>>> mm. A couple of existing bugs were found as a result [5] [6].
>>>
>>>
>>> ---
>>> The series applies against mm-unstable (d7182786dd0a), although I've
>>> additionally been running with a couple of extra fixes to avoid the
>>> issues at [6].
>>>
>>>
>>> Changes since v3 [3]
>>> ====================
>>>
>>> - Renamed SWAP_NEXT_NULL -> SWAP_NEXT_INVALID (per Huang, Ying)
>>> - Simplified max offset calculation (per Huang, Ying)
>>> - Reinstated struct percpu_cluster to contain per-cluster, per-order
>>>   `next` offset (per Huang, Ying)
>>> - Removed swap_alloc_large() and merged its functionality into
>>>   scan_swap_map_slots() (per Huang, Ying)
>>> - Avoid extra cost of folio ref and lock due to removal of
>>>   CLUSTER_FLAG_HUGE by freeing swap entries in batches (see patch 2)
>>>   (per DavidH)
>>> - vmscan splits folio if it's partially mapped (per Barry Song, DavidH)
>>> - Avoid splitting in MADV_PAGEOUT path (per Barry Song)
>>> - Dropped "mm: swap: Simplify ssd behavior when scanner steals entry"
>>>   patch since it's not actually a problem for THP as I first thought.
>>>
>>>
>>> Changes since v2 [2]
>>> ====================
>>>
>>> - Reuse scan_swap_map_try_ssd_cluster() between order-0 and order > 0
>>>   allocation. This required some refactoring to make everything work
>>>   nicely (new patches 2 and 3).
>>> - Fix bug where nr_swap_pages would say there are pages available but
>>>   the scanner would not be able to allocate them because they were
>>>   reserved for the per-cpu allocator. We now allow stealing of order-0
>>>   entries from the high order per-cpu clusters (in addition to existing
>>>   stealing from order-0 per-cpu clusters).
>>>
>>>
>>> Changes since v1 [1]
>>> ====================
>>>
>>> - patch 1:
>>>   - Use cluster_set_count() instead of cluster_set_count_flag() in
>>>     swap_alloc_cluster() since we no longer have any flag to set. I was
>>>     unable to kill cluster_set_count_flag() as proposed against v1 as
>>>     other call sites depend on explicitly setting flags to 0.
>>> - patch 2:
>>>   - Moved large_next[] array into percpu_cluster to make it per-cpu
>>>     (recommended by Huang, Ying).
>>>   - large_next[] array is dynamically allocated because PMD_ORDER is
>>>     not a compile-time constant on powerpc (fixes build error).
>>>
>>>
>>> [1] https://lore.kernel.org/linux-mm/20231010142111.3997780-1-ryan.roberts@arm.com/
>>> [2] https://lore.kernel.org/linux-mm/20231017161302.2518826-1-ryan.roberts@arm.com/
>>> [3] https://lore.kernel.org/linux-mm/20231025144546.577640-1-ryan.roberts@arm.com/
>>> [4] https://lore.kernel.org/linux-mm/20240304081348.197341-1-21cnbao@gmail.com/
>>> [5] https://lore.kernel.org/linux-mm/20240311084426.447164-1-ying.huang@intel.com/
>>> [6] https://lore.kernel.org/linux-mm/79dad067-1d26-4867-8eb1-941277b9a77b@arm.com/
>>>
>>> Thanks,
>>> Ryan
>>>
>>>
>>> Ryan Roberts (6):
>>>   mm: swap: Remove CLUSTER_FLAG_HUGE from swap_cluster_info:flags
>>>   mm: swap: free_swap_and_cache_nr() as batched free_swap_and_cache()
>>>   mm: swap: Simplify struct percpu_cluster
>>>   mm: swap: Allow storage of all mTHP orders
>>>   mm: vmscan: Avoid split during shrink_folio_list()
>>>   mm: madvise: Avoid split during MADV_PAGEOUT and MADV_COLD
>>>
>>>  include/linux/pgtable.h |  28 ++++
>>>  include/linux/swap.h    |  33 +++--
>>>  mm/huge_memory.c        |   3 -
>>>  mm/internal.h           |  48 +++++++
>>>  mm/madvise.c            | 101 ++++++++------
>>>  mm/memory.c             |  13 +-
>>>  mm/swapfile.c           | 298 ++++++++++++++++++++++------------
>>>  mm/vmscan.c             |   9 +-
>>>  8 files changed, 332 insertions(+), 201 deletions(-)
>>>
>>> --
>>> 2.25.1
>
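
P.S. For anyone skimming the changelog above: the "per-cluster, per-order
`next` offset" item conceptually means each CPU remembers a likely next
free offset per allocation order, so that order-0 scanning and large-order
scanning don't trample each other's position. Roughly (a conceptual sketch
only; the names and layout here are illustrative, not necessarily the
series' exact code):

#define SWAP_NR_ORDERS	(PMD_ORDER + 1)	/* orders 0 .. PMD_ORDER */

struct percpu_cluster {
	/* Likely next free offset in the current cluster, per order. */
	unsigned int next[SWAP_NR_ORDERS];
};

See patches 3 and 4 for the real data structure and the allocation path
that uses it.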