From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <5ebac77a-5c61-481f-8ac1-03bc4f4e2b1d@arm.com>
Date: Fri, 1 Mar 2024 16:27:32 +0000
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v3 1/4] mm: swap: Remove CLUSTER_FLAG_HUGE from swap_cluster_info:flags
Content-Language: en-GB
To: David Hildenbrand, Andrew Morton, Matthew Wilcox, Huang Ying, Gao Xiang,
 Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
References: <20231025144546.577640-1-ryan.roberts@arm.com>
 <20231025144546.577640-2-ryan.roberts@arm.com>
 <6541e29b-f25a-48b8-a553-fd8febe85e5a@redhat.com>
 <2934125a-f2e2-417c-a9f9-3cb1e074a44f@redhat.com>
 <049818ca-e656-44e4-b336-934992c16028@arm.com>
 <4a73b16e-9317-477a-ac23-8033004b0637@arm.com>
 <1195531c-d985-47e2-b7a2-8895fbb49129@redhat.com>
From: Ryan Roberts <ryan.roberts@arm.com>
In-Reply-To: <1195531c-d985-47e2-b7a2-8895fbb49129@redhat.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 28/02/2024 15:12, David Hildenbrand wrote:
> On 28.02.24 15:57, Ryan Roberts wrote:
>> On 28/02/2024 12:12, David Hildenbrand wrote:
>>>>> How relevant is it? Relevant enough that someone decided to put that
>>>>> optimization in? I don't know :)
>>>>
>>>> I'll have one last go at convincing you: Huang Ying (original author)
>>>> commented "I believe this should be OK.  Better to compare the
>>>> performance too." at [1]. That implies to me that perhaps the
>>>> optimization wasn't in response to a specific problem after all.
>>>> Do you have any thoughts, Huang?
>>>
>>> Might make sense to include that in the patch description!
>>>
>>>> OK, so if we really do need to keep this optimization, here are some ideas:
>>>>
>>>> Fundamentally, we would like to be able to figure out the size of the
>>>> swap slot from the swap entry. Today swap supports 2 sizes: PAGE_SIZE
>>>> and PMD_SIZE. For PMD_SIZE, it always uses a full cluster, so we can
>>>> easily add a flag to the cluster to mark it as PMD_SIZE.
>>>>
>>>> Going forwards, we want to support all sizes (power-of-2). Most of the
>>>> time, a cluster will contain only one size of THPs, but this is not the
>>>> case when a THP in the swapcache gets split or when an order-0 slot gets
>>>> stolen. We expect these cases to be rare.
>>>>
>>>> 1) Keep the size of the smallest swap entry in the cluster header. Most
>>>> of the time it will be the full size of the swap entry, but sometimes it
>>>> will cover only a portion. In the latter case you may see a false
>>>> negative for swap_page_trans_huge_swapped(), meaning we take the slow
>>>> path, but that is rare. There is one wrinkle: currently the HUGE flag is
>>>> cleared in put_swap_folio(). We wouldn't want to do the equivalent in
>>>> the new scheme (i.e. set the whole cluster to order-0).
>>>> I think that is safe, but I haven't completely convinced myself yet.
>>>>
>>>> 2) Allocate 4 bits per (small) swap slot to hold the order. This will
>>>> give precise information and is conceptually simpler to understand, but
>>>> will cost more memory (half as much as the initial swap_map[] again).
>>>>
>>>> I still prefer to avoid this at all if we can (and would like to hear
>>>> Huang's thoughts). But if it's a choice between 1 and 2, I prefer 1 -
>>>> I'll do some prototyping.
>>>
>>> Taking a step back: what about we simply batch unmapping of swap entries?
>>>
>>> That is, if we're unmapping a PTE range, we'll collect swap entries
>>> (under PT lock) that reference consecutive swap offsets in the same swap
>>> file.
>>
>> Yes in principle, but there are 4 places where free_swap_and_cache() is
>> called, and only 2 of those are really amenable to batching
>> (zap_pte_range() and madvise_free_pte_range()). So the other two users
>> will still take the "slow" path. Maybe those 2 callsites are the only
>> ones that really matter? I can certainly have a stab at this approach.
>
> We can ignore the s390x one. That s390x code should only apply to KVM
> guest memory where ordinary THP are not even supported (and nobody uses
> mTHP there yet).
>
> Long story short: the VM can hint that some memory pages are now unused
> and the hypervisor can reclaim them. That's what that callback does (zap
> guest-provided guest memory). No need to worry about any batching for now.
>
> Then, there is the shmem one in shmem_free_swap(). I really don't know
> how shmem handles THP+swapout.
>
> But looking at shmem_writepage(), we split any large folios before moving
> them to the swapcache, so likely we don't care at all, because THP don't
> apply.
>
>>
>>>
>>> There, we can then first decrement all the swap counts, and then try
>>> minimizing how often we actually have to try reclaiming swap space
>>> (lookup folio, see it's a large folio that we cannot reclaim or could
>>> reclaim, ...).
>>>
>>> Might need some fine-tuning in swap code to "advance" to the next entry
>>> to try freeing up, but we certainly can do better than what we would do
>>> right now.
>>
>> I'm not sure I've understood this. Isn't advancing just a matter of:
>>
>> entry = swp_entry(swp_type(entry), swp_offset(entry) + 1);
>
> I was talking about advancing the swapslot processing after decrementing
> the swapcounts.
>
> Assume you decremented 512 swapcounts and some of them went to 0. AFAIU,
> you'd have to start with the first swapslot that now has a swapcount of 0
> and try to reclaim swap.
>
> Assume you get a small folio, then you'll have to proceed with the next
> swap slot and try to reclaim swap.
>
> Assume you get a large folio, then you can skip more swapslots (depending
> on the offset into the folio etc).
>
> If you get what I mean. :)
>

I've implemented the batching as David suggested, and I'm pretty confident
it's correct. The only problem is that during testing I can't provoke the
code to take that path. I've been poring through the code but struggling to
figure out under what situation you would expect the swap entry passed to
free_swap_and_cache() to still have a cached folio. Does anyone have any
idea?
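For context, the shape of the batching David describes above would be roughly
as follows. This is a sketch only, not my actual patch: the function name is a
placeholder, the caller is assumed to have already collected (under the PT
lock) nr consecutive offsets in the same swap file, and it assumes it lives in
mm/swapfile.c where __swap_entry_free() and __try_to_reclaim_swap() are
visible:

static void free_swap_and_cache_nr(swp_entry_t entry, int nr)
{
	unsigned long offset = swp_offset(entry);
	unsigned int type = swp_type(entry);
	struct swap_info_struct *p;
	bool any_only_cache = false;
	int i;

	if (non_swap_entry(entry))
		return;

	p = _swap_info_get(entry);
	if (!p)
		return;

	/* Pass 1: drop the swap count for every entry in the batch. */
	for (i = 0; i < nr; i++) {
		if (__swap_entry_free(p, swp_entry(type, offset + i)) ==
		    SWAP_HAS_CACHE)
			any_only_cache = true;
	}

	/* Nothing dropped to "swap cache only", so nothing to reclaim. */
	if (!any_only_cache)
		return;

	/*
	 * Pass 2: best-effort reclaim of the swap cache, but only for slots
	 * whose count fell to SWAP_HAS_CACHE. A smarter version would skip
	 * ahead by the folio size whenever a large folio is found, per
	 * David's description above.
	 */
	for (i = 0; i < nr; i++) {
		if (READ_ONCE(p->swap_map[offset + i]) == SWAP_HAS_CACHE)
			__try_to_reclaim_swap(p, offset + i,
					      TTRS_UNMAPPED | TTRS_FULL);
	}
}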
This is the original (unbatched) function, after my change, which caused
David's concern that we would end up calling __try_to_reclaim_swap() far too
much:

int free_swap_and_cache(swp_entry_t entry)
{
	struct swap_info_struct *p;
	unsigned char count;

	if (non_swap_entry(entry))
		return 1;

	p = _swap_info_get(entry);
	if (p) {
		count = __swap_entry_free(p, entry);
		if (count == SWAP_HAS_CACHE)
			__try_to_reclaim_swap(p, swp_offset(entry),
					      TTRS_UNMAPPED | TTRS_FULL);
	}

	return p != NULL;
}

The trouble is, whenever it's called, count is always 0, so
__try_to_reclaim_swap() never gets called. My test case allocates 1G of anon
memory, does madvise(MADV_PAGEOUT) over it, and then does either a munmap()
or madvise(MADV_FREE) (rough test program appended below). Both of those
cause this function to be called for every PTE, but count is always 0 after
__swap_entry_free(), so __try_to_reclaim_swap() is never called. I've tried
for order-0 as well as PTE- and PMD-mapped 2M THP.

I'm guessing the swapcache was already reclaimed as part of MADV_PAGEOUT?
I'm using a block ram device as my backing store - I think this does
synchronous IO, so perhaps if I had a real block device with async IO I
might have more luck? Just a guess...

Or perhaps this code path is a corner case? In which case, perhaps it's not
worth adding the batching optimization after all?

Thanks,
Ryan
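For completeness, the test sequence is roughly the following (userspace
sketch only: 1G hard-coded, swap must be enabled, MADV_PAGEOUT needs
reasonably recent kernel/libc headers, and error handling is minimal):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 1UL << 30;		/* 1G of anonymous memory */
	char *p;

	p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Fault the range in so there is something to swap out. */
	memset(p, 1, len);

	/* Push the whole range out to swap. */
	if (madvise(p, len, MADV_PAGEOUT)) {
		perror("madvise(MADV_PAGEOUT)");
		return 1;
	}

	/*
	 * Either of these ends up calling free_swap_and_cache() for every
	 * swap PTE in the range.
	 */
	madvise(p, len, MADV_FREE);	/* or: munmap(p, len); */

	return 0;
}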