From: Ryan Roberts <ryan.roberts@arm.com>
To: Chris Li <chrisl@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@redhat.com>,
Matthew Wilcox <willy@infradead.org>,
Huang Ying <ying.huang@intel.com>, Gao Xiang <xiang@kernel.org>,
Yu Zhao <yuzhao@google.com>, Yang Shi <shy828301@gmail.com>,
Michal Hocko <mhocko@suse.com>,
Kefeng Wang <wangkefeng.wang@huawei.com>,
Barry Song <21cnbao@gmail.com>, Lance Yang <ioworker0@gmail.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v6 1/6] mm: swap: Remove CLUSTER_FLAG_HUGE from swap_cluster_info:flags
Date: Thu, 4 Apr 2024 08:06:10 +0100
Message-ID: <2acd461f-1d75-434c-a2f2-a3a8e1daad8f@arm.com>
In-Reply-To: <CANeU7QnYOx-=xoeoLWotdQWOs2KMvw0E7LuRq27LO4RDA_ManQ@mail.gmail.com>
On 03/04/2024 23:12, Chris Li wrote:
> Hi Ryan,
>
> Sorry for the late reply. I wanted to review this series but didn't have
> the chance to do it sooner.
No problem. This series is now in mm-unstable, so if you want to request any
changes in the other patches, I'd prefer it sooner rather than later, if possible.
>
> On Wed, Apr 3, 2024 at 4:40 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>>
>> As preparation for supporting small-sized THP in the swap-out path,
>> without first needing to split to order-0, remove the CLUSTER_FLAG_HUGE flag,
>> which, when present, always implies PMD-sized THP, which is the same as
>> the cluster size.
>>
>> The only use of the flag was to determine whether a swap entry refers to
>> a single page or a PMD-sized THP in swap_page_trans_huge_swapped().
>> Instead of relying on the flag, we now pass in nr_pages, which
>> originates from the folio's number of pages. This allows the logic to
>> work for folios of any order.
>>
>> The one snag is that one of the swap_page_trans_huge_swapped() call
>> sites does not have the folio. But it was only being called there to
>> shortcut a call to __try_to_reclaim_swap() in some cases.
>> __try_to_reclaim_swap() gets the folio and (via some other functions)
>> calls swap_page_trans_huge_swapped(). So I've removed the problematic
>> call site and believe the new logic should be functionally equivalent.
>>
>> That said, removing the fast path means that we will take a reference
>> and trylock a large folio much more often, which we would like to avoid.
>> The next patch will solve this.
>>
>> Removing CLUSTER_FLAG_HUGE also means we can remove split_swap_cluster()
>> which used to be called during folio splitting, since
>> split_swap_cluster()'s only job was to remove the flag.
>
> Seems necessary to remove the assumption that a large folio is PMD-sized.
>
> Acked-by: Chris Li <chrisl@kernel.org>
Thanks!
>
>>
>> Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>> ---
>> include/linux/swap.h | 10 ----------
>> mm/huge_memory.c | 3 ---
>> mm/swapfile.c | 47 ++++++++------------------------------------
>> 3 files changed, 8 insertions(+), 52 deletions(-)
>>
>> diff --git a/include/linux/swap.h b/include/linux/swap.h
>> index a211a0383425..f6f78198f000 100644
>> --- a/include/linux/swap.h
>> +++ b/include/linux/swap.h
>> @@ -259,7 +259,6 @@ struct swap_cluster_info {
>> };
>> #define CLUSTER_FLAG_FREE 1 /* This cluster is free */
>> #define CLUSTER_FLAG_NEXT_NULL 2 /* This cluster has no next cluster */
>> -#define CLUSTER_FLAG_HUGE 4 /* This cluster is backing a transparent huge page */
>>
>> /*
>> * We assign a cluster to each CPU, so each CPU can allocate swap entry from
>> @@ -590,15 +589,6 @@ static inline int add_swap_extent(struct swap_info_struct *sis,
>> }
>> #endif /* CONFIG_SWAP */
>>
>> -#ifdef CONFIG_THP_SWAP
>> -extern int split_swap_cluster(swp_entry_t entry);
>> -#else
>> -static inline int split_swap_cluster(swp_entry_t entry)
>> -{
>> - return 0;
>> -}
>> -#endif
>> -
>> #ifdef CONFIG_MEMCG
>> static inline int mem_cgroup_swappiness(struct mem_cgroup *memcg)
>> {
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index ea6d1f09a0b9..3ca9282a0dc9 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -2844,9 +2844,6 @@ static void __split_huge_page(struct page *page, struct list_head *list,
>> shmem_uncharge(folio->mapping->host, nr_dropped);
>> remap_page(folio, nr);
>>
>> - if (folio_test_swapcache(folio))
>> - split_swap_cluster(folio->swap);
>> -
>> /*
>> * set page to its compound_head when split to non order-0 pages, so
>> * we can skip unlocking it below, since PG_locked is transferred to
>> diff --git a/mm/swapfile.c b/mm/swapfile.c
>> index 5e6d2304a2a4..0d44ee2b4f9c 100644
>> --- a/mm/swapfile.c
>> +++ b/mm/swapfile.c
>> @@ -343,18 +343,6 @@ static inline void cluster_set_null(struct swap_cluster_info *info)
>> info->data = 0;
>> }
>>
>> -static inline bool cluster_is_huge(struct swap_cluster_info *info)
>> -{
>> - if (IS_ENABLED(CONFIG_THP_SWAP))
>> - return info->flags & CLUSTER_FLAG_HUGE;
>> - return false;
>> -}
>> -
>> -static inline void cluster_clear_huge(struct swap_cluster_info *info)
>> -{
>> - info->flags &= ~CLUSTER_FLAG_HUGE;
>> -}
>> -
>> static inline struct swap_cluster_info *lock_cluster(struct swap_info_struct *si,
>> unsigned long offset)
>> {
>> @@ -1027,7 +1015,7 @@ static int swap_alloc_cluster(struct swap_info_struct *si, swp_entry_t *slot)
>> offset = idx * SWAPFILE_CLUSTER;
>> ci = lock_cluster(si, offset);
>> alloc_cluster(si, idx);
>> - cluster_set_count_flag(ci, SWAPFILE_CLUSTER, CLUSTER_FLAG_HUGE);
>> + cluster_set_count(ci, SWAPFILE_CLUSTER);
>>
>> memset(si->swap_map + offset, SWAP_HAS_CACHE, SWAPFILE_CLUSTER);
>> unlock_cluster(ci);
>> @@ -1365,7 +1353,6 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry)
>>
>> ci = lock_cluster_or_swap_info(si, offset);
>> if (size == SWAPFILE_CLUSTER) {
>> - VM_BUG_ON(!cluster_is_huge(ci));
>> map = si->swap_map + offset;
>> for (i = 0; i < SWAPFILE_CLUSTER; i++) {
>> val = map[i];
>> @@ -1373,7 +1360,6 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry)
>> if (val == SWAP_HAS_CACHE)
>> free_entries++;
>> }
>> - cluster_clear_huge(ci);
>> if (free_entries == SWAPFILE_CLUSTER) {
>> unlock_cluster_or_swap_info(si, ci);
>> spin_lock(&si->lock);
>> @@ -1395,23 +1381,6 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry)
>> unlock_cluster_or_swap_info(si, ci);
>> }
>>
>> -#ifdef CONFIG_THP_SWAP
>> -int split_swap_cluster(swp_entry_t entry)
>> -{
>> - struct swap_info_struct *si;
>> - struct swap_cluster_info *ci;
>> - unsigned long offset = swp_offset(entry);
>> -
>> - si = _swap_info_get(entry);
>> - if (!si)
>> - return -EBUSY;
>> - ci = lock_cluster(si, offset);
>> - cluster_clear_huge(ci);
>> - unlock_cluster(ci);
>> - return 0;
>> -}
>> -#endif
>> -
>> static int swp_entry_cmp(const void *ent1, const void *ent2)
>> {
>> const swp_entry_t *e1 = ent1, *e2 = ent2;
>> @@ -1519,22 +1488,23 @@ int swp_swapcount(swp_entry_t entry)
>> }
>>
>> static bool swap_page_trans_huge_swapped(struct swap_info_struct *si,
>> - swp_entry_t entry)
>> + swp_entry_t entry,
>> + unsigned int nr_pages)
>> {
>> struct swap_cluster_info *ci;
>> unsigned char *map = si->swap_map;
>> unsigned long roffset = swp_offset(entry);
>> - unsigned long offset = round_down(roffset, SWAPFILE_CLUSTER);
>> + unsigned long offset = round_down(roffset, nr_pages);
>
> It is obvious this code only works for power-of-two nr_pages.
> SWAPFILE_CLUSTER is a power of two. If we switch to an API that takes
> nr_pages, we might want to warn about/ban passing in a non-power-of-two
> nr_pages.
Indeed. I could change the prototype to pass an order instead of nr_pages, then
generate nr_pages (= 1 << order) inside the function. But given that the
function is static and only called from a single call site, I don't see it as
hugely important. I'd prefer to leave it as is at this stage, unless you have a
strong objection.
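
For the record, a rough (untested) sketch of what that alternative might look
like; nr_pages would be a power of 2 by construction and the single caller
would pass folio_order() instead:

  static bool swap_page_trans_huge_swapped(struct swap_info_struct *si,
                                           swp_entry_t entry, int order)
  {
          unsigned int nr_pages = 1 << order;  /* power of 2 by construction */
          unsigned long roffset = swp_offset(entry);
          unsigned long offset = round_down(roffset, nr_pages);
          /* ... rest of the body unchanged ... */
  }

  /* and in folio_swapped(): */
  return swap_page_trans_huge_swapped(si, entry, folio_order(folio));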
>
>> int i;
>> bool ret = false;
>>
>> ci = lock_cluster_or_swap_info(si, offset);
>> - if (!ci || !cluster_is_huge(ci)) {
>> + if (!ci || nr_pages == 1) {
>> if (swap_count(map[roffset]))
>> ret = true;
>> goto unlock_out;
>> }
>> - for (i = 0; i < SWAPFILE_CLUSTER; i++) {
>> + for (i = 0; i < nr_pages; i++) {
>
> Here we assume the swap entry offsets are contiguous. That is beyond
> your patch's scope. If in the future we want to use non-contiguous
> swap entries to swap out large pages, we will need to find and change
> all the places that assume contiguous swap entries.
Yes, there are tonnes of places that make this assumption :)
>
> Chris
>
>> if (swap_count(map[offset + i])) {
>> ret = true;
>> break;
>> @@ -1556,7 +1526,7 @@ static bool folio_swapped(struct folio *folio)
>> if (!IS_ENABLED(CONFIG_THP_SWAP) || likely(!folio_test_large(folio)))
>> return swap_swapcount(si, entry) != 0;
>>
>> - return swap_page_trans_huge_swapped(si, entry);
>> + return swap_page_trans_huge_swapped(si, entry, folio_nr_pages(folio));
>> }
>>
>> /**
>> @@ -1622,8 +1592,7 @@ int free_swap_and_cache(swp_entry_t entry)
>> }
>>
>> count = __swap_entry_free(p, entry);
>> - if (count == SWAP_HAS_CACHE &&
>> - !swap_page_trans_huge_swapped(p, entry))
>> + if (count == SWAP_HAS_CACHE)
>> __try_to_reclaim_swap(p, swp_offset(entry),
>> TTRS_UNMAPPED | TTRS_FULL);
>> put_swap_device(p);
>> --
>> 2.25.1
>>