From: David Hildenbrand <david@redhat.com>
To: Ryan Roberts <ryan.roberts@arm.com>,
Andrew Morton <akpm@linux-foundation.org>,
Matthew Wilcox <willy@infradead.org>,
Huang Ying <ying.huang@intel.com>, Gao Xiang <xiang@kernel.org>,
Yu Zhao <yuzhao@google.com>, Yang Shi <shy828301@gmail.com>,
Michal Hocko <mhocko@suse.com>,
Kefeng Wang <wangkefeng.wang@huawei.com>,
Barry Song <21cnbao@gmail.com>, Chris Li <chrisl@kernel.org>,
Lance Yang <ioworker0@gmail.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v7 2/7] mm: swap: free_swap_and_cache_nr() as batched free_swap_and_cache()
Date: Tue, 9 Apr 2024 09:34:28 +0200
Message-ID: <52173d5b-672d-4ef6-ad06-ec350c44d739@redhat.com>
In-Reply-To: <20240408183946.2991168-3-ryan.roberts@arm.com>

On 08.04.24 20:39, Ryan Roberts wrote:
> Now that we no longer have a convenient flag in the cluster to determine
> if a folio is large, free_swap_and_cache() will take a reference and
> lock a large folio much more often, which could lead to contention and,
> for example, failure to split large folios.
>
> Let's solve that problem by batch freeing swap and cache with a new
> function, free_swap_and_cache_nr(), to free a contiguous range of swap
> entries together. This allows us to first drop a reference to each swap
> slot before we try to release the cache folio. This means we only try to
> release the folio once, only taking the reference and lock once - much
> better than the previous 512 times for the 2M THP case.
>
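Just to spell out the ordering for other readers: IIUC we first drop one
swap count per entry across the whole range, and only then attempt to
reclaim the swapcache folio, a single time. A rough sketch of that shape
(the body below is mine and purely illustrative, not the actual
swapfile.c implementation):

	void free_swap_and_cache_nr(swp_entry_t entry, int nr)
	{
		int i;

		/* Phase 1: drop one swap count for each entry in the range. */
		for (i = 0; i < nr; i++)
			; /* decrement the swap count of (entry + i) */

		/*
		 * Phase 2: with the swap counts dropped, try to reclaim the
		 * swapcache folio(s) covering the range a single time, taking
		 * the folio reference and lock once instead of once per pte.
		 */
	}
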
> Contiguous swap entries are gathered in zap_pte_range() and
> madvise_free_pte_range() in a similar way to how present ptes are
> already gathered in zap_pte_range().
>
> While we are at it, let's simplify by converting the return type of both
> functions to void. The return value was used only by zap_pte_range() to
> print a bad pte, and was ignored by everyone else, so the extra
> reporting wasn't exactly guaranteed. We will still get the warning with
> most of the information from get_swap_device(). With the batch version,
> we wouldn't know which pte was bad anyway, so we could print the wrong one.
>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
>  include/linux/pgtable.h | 29 ++++++++++++
>  include/linux/swap.h    | 12 +++--
>  mm/internal.h           | 63 ++++++++++++++++++++++++++
>  mm/madvise.c            | 12 +++--
>  mm/memory.c             | 13 +++---
>  mm/swapfile.c           | 97 +++++++++++++++++++++++++++++++++--------
>  6 files changed, 195 insertions(+), 31 deletions(-)
>
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index a3fc8150b047..75096025fe52 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -708,6 +708,35 @@ static inline void pte_clear_not_present_full(struct mm_struct *mm,
> }
> #endif
>
> +#ifndef clear_not_present_full_ptes
> +/**
> + * clear_not_present_full_ptes - Clear multiple not present PTEs which are
> + * consecutive in the pgtable.
> + * @mm: Address space the ptes represent.
> + * @addr: Address of the first pte.
> + * @ptep: Page table pointer for the first entry.
> + * @nr: Number of entries to clear.
> + * @full: Whether we are clearing a full mm.
> + *
> + * May be overridden by the architecture; otherwise, implemented as a simple
> + * loop over pte_clear_not_present_full().
> + *
> + * Context: The caller holds the page table lock. The PTEs are all not present.
> + * The PTEs are all in the same PMD.
> + */
> +static inline void clear_not_present_full_ptes(struct mm_struct *mm,
> +		unsigned long addr, pte_t *ptep, unsigned int nr, int full)
> +{
> +	for (;;) {
> +		pte_clear_not_present_full(mm, addr, ptep, full);
> +		if (--nr == 0)
> +			break;
> +		ptep++;
> +		addr += PAGE_SIZE;
> +	}
> +}
> +#endif
> +
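
The loop shape matches the other *_ptes() helpers, LGTM. For context,
I'd expect the zap path to use this roughly as follows under the PTL
(an illustrative sketch assuming the swap_pte_batch() helper this patch
adds further down; not verbatim from the memory.c hunk):

	/* batch all consecutive swap ptes with matching entries */
	nr = swap_pte_batch(pte, max_nr, ptent);
	/* drop the swap counts first, reclaim the swapcache folio once */
	free_swap_and_cache_nr(entry, nr);
	/* then clear all nr ptes in one go */
	clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm);
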
> #ifndef __HAVE_ARCH_PTEP_CLEAR_FLUSH
> extern pte_t ptep_clear_flush(struct vm_area_struct *vma,
> unsigned long address,
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index f6f78198f000..5737236dc3ce 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -471,7 +471,7 @@ extern int swap_duplicate(swp_entry_t);
> extern int swapcache_prepare(swp_entry_t);
> extern void swap_free(swp_entry_t);
> extern void swapcache_free_entries(swp_entry_t *entries, int n);
> -extern int free_swap_and_cache(swp_entry_t);
> +extern void free_swap_and_cache_nr(swp_entry_t entry, int nr);
> int swap_type_of(dev_t device, sector_t offset);
> int find_first_swap(dev_t *device);
> extern unsigned int count_swap_pages(int, int);
> @@ -520,8 +520,9 @@ static inline void put_swap_device(struct swap_info_struct *si)
>  #define free_pages_and_swap_cache(pages, nr)	\
>  	release_pages((pages), (nr));
>
> -/* used to sanity check ptes in zap_pte_range when CONFIG_SWAP=0 */
> -#define free_swap_and_cache(e) is_pfn_swap_entry(e)
> +static inline void free_swap_and_cache_nr(swp_entry_t entry, int nr)
> +{
> +}
>
> static inline void free_swap_cache(struct folio *folio)
> {
> @@ -589,6 +590,11 @@ static inline int add_swap_extent(struct swap_info_struct *sis,
> }
> #endif /* CONFIG_SWAP */
>
> +static inline void free_swap_and_cache(swp_entry_t entry)
> +{
> +	free_swap_and_cache_nr(entry, 1);
> +}
> +
> #ifdef CONFIG_MEMCG
> static inline int mem_cgroup_swappiness(struct mem_cgroup *memcg)
> {
> diff --git a/mm/internal.h b/mm/internal.h
> index 3bdc8693b54f..de68705624b0 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -11,6 +11,8 @@
> #include <linux/mm.h>
> #include <linux/pagemap.h>
> #include <linux/rmap.h>
> +#include <linux/swap.h>
> +#include <linux/swapops.h>
> #include <linux/tracepoint-defs.h>
>
> struct folio_batch;
> @@ -189,6 +191,67 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
>
>  	return min(ptep - start_ptep, max_nr);
>  }
> +
> +/**
> + * pte_next_swp_offset - Increment the swap entry offset field of a swap pte.
> + * @pte: The initial pte state; is_swap_pte(pte) must be true.

Likely we also want non_swap_entry() to be false, i.e., the pte must hold
a genuine swap entry rather than a non-swap entry such as a migration or
device-private entry.

> + *
> + * Increments the swap offset, while maintaining all other fields, including
> + * swap type, and any swp pte bits. The resulting pte is returned.
> + */
> +static inline pte_t pte_next_swp_offset(pte_t pte)
> +{
> +	swp_entry_t entry = pte_to_swp_entry(pte);
> +	pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry),
> +						   swp_offset(entry) + 1));
> +
> +	if (pte_swp_soft_dirty(pte))
> +		new = pte_swp_mksoft_dirty(new);
> +	if (pte_swp_exclusive(pte))
> +		new = pte_swp_mkexclusive(new);
> +	if (pte_swp_uffd_wp(pte))
> +		new = pte_swp_mkuffd_wp(new);
> +
> +	return new;
> +}
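
A minimal sketch of the scanner I'd expect to sit on top of this
(assuming a swap_pte_batch()-style loop; illustrative, not necessarily
the exact code from the snipped remainder of this patch):

	pte_t expected = pte_next_swp_offset(pte);

	while (++ptep < end_ptep) {
		pte = ptep_get(ptep);
		/* stop on a type change, an offset gap, or differing swp pte bits */
		if (!pte_same(pte, expected))
			break;
		expected = pte_next_swp_offset(expected);
	}
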
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb