From: "Huang, Ying" <ying.huang@intel.com>
To: Kairui Song <ryncsn@gmail.com>
Cc: linux-mm@kvack.org, Kairui Song <kasong@tencent.com>,
Andrew Morton <akpm@linux-foundation.org>,
Chris Li <chrisl@kernel.org>, Hugh Dickins <hughd@google.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Matthew Wilcox <willy@infradead.org>,
Michal Hocko <mhocko@suse.com>,
Yosry Ahmed <yosryahmed@google.com>,
David Hildenbrand <david@redhat.com>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 8/9] mm/swap: introduce a helper for swapin without vmfault
Date: Tue, 09 Jan 2024 09:08:59 +0800
Message-ID: <875y039utw.fsf@yhuang6-desk2.ccr.corp.intel.com>
In-Reply-To: <20240102175338.62012-9-ryncsn@gmail.com> (Kairui Song's message of "Wed, 3 Jan 2024 01:53:37 +0800")
Kairui Song <ryncsn@gmail.com> writes:
> From: Kairui Song <kasong@tencent.com>
>
> There are two places where swapin is not caused by a direct anon page fault:
> - shmem swapin, invoked indirectly through the shmem mapping
> - swapoff
>
> Both used to construct a pseudo vmfault struct for the swapin function.
> Shmem recently dropped the pseudo vmfault in commit ddc1a5cbc05d
> ("mempolicy: alloc_pages_mpol() for NUMA policy without vma"), but the
> swapoff path still uses one.
>
> Introduce a helper for both of them; this saves stack usage for the
> swapoff path and helps apply a unified swapin cache and readahead
> policy check.
>
> Because the vmfault info is missing, the caller has to pass in the
> mempolicy explicitly; to distinguish it from swapin_entry, name it
> swapin_entry_mpol.
>
> This commit converts swapoff to use this helper; follow-up commits
> will convert shmem to use it too.
>
> Signed-off-by: Kairui Song <kasong@tencent.com>
> ---
> mm/swap.h | 9 +++++++++
> mm/swap_state.c | 40 ++++++++++++++++++++++++++++++++--------
> mm/swapfile.c | 15 ++++++---------
> 3 files changed, 47 insertions(+), 17 deletions(-)
>
> diff --git a/mm/swap.h b/mm/swap.h
> index 9180411afcfe..8f790a67b948 100644
> --- a/mm/swap.h
> +++ b/mm/swap.h
> @@ -73,6 +73,9 @@ struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
> struct mempolicy *mpol, pgoff_t ilx);
> struct folio *swapin_entry(swp_entry_t entry, gfp_t flag,
> struct vm_fault *vmf, enum swap_cache_result *result);
> +struct folio *swapin_entry_mpol(swp_entry_t entry, gfp_t gfp_mask,
> + struct mempolicy *mpol, pgoff_t ilx,
> + enum swap_cache_result *result);
>
> static inline unsigned int folio_swap_flags(struct folio *folio)
> {
> @@ -109,6 +112,12 @@ static inline struct folio *swapin_entry(swp_entry_t swp, gfp_t gfp_mask,
> return NULL;
> }
>
> +static inline struct folio *swapin_entry_mpol(swp_entry_t entry, gfp_t gfp_mask,
> + struct mempolicy *mpol, pgoff_t ilx, enum swap_cache_result *result)
> +{
> + return NULL;
> +}
> +
> static inline int swap_writepage(struct page *p, struct writeback_control *wbc)
> {
> return 0;
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 21badd4f0fc7..3edf4b63158d 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -880,14 +880,13 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
> * in.
> */
> static struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
> - struct vm_fault *vmf, void *shadow)
> + struct mempolicy *mpol, pgoff_t ilx,
> + void *shadow)
> {
> - struct vm_area_struct *vma = vmf->vma;
> struct folio *folio;
>
> - /* skip swapcache */
> - folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
> - vma, vmf->address, false);
> + folio = (struct folio *)alloc_pages_mpol(gfp_mask, 0,
> + mpol, ilx, numa_node_id());
> if (folio) {
> if (mem_cgroup_swapin_charge_folio(folio, NULL,
> GFP_KERNEL, entry)) {
> @@ -943,18 +942,18 @@ struct folio *swapin_entry(swp_entry_t entry, gfp_t gfp_mask,
> goto done;
> }
>
> + mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
> if (swap_use_no_readahead(swp_swap_info(entry), entry)) {
> - folio = swapin_direct(entry, gfp_mask, vmf, shadow);
> + folio = swapin_direct(entry, gfp_mask, mpol, ilx, shadow);
> cache_result = SWAP_CACHE_BYPASS;
> } else {
> - mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
> if (swap_use_vma_readahead())
> folio = swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf);
> else
> folio = swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
> - mpol_cond_put(mpol);
> cache_result = SWAP_CACHE_MISS;
> }
> + mpol_cond_put(mpol);
> done:
> if (result)
> *result = cache_result;
> @@ -962,6 +961,31 @@ struct folio *swapin_entry(swp_entry_t entry, gfp_t gfp_mask,
> return folio;
> }
>
> +struct folio *swapin_entry_mpol(swp_entry_t entry, gfp_t gfp_mask,
> + struct mempolicy *mpol, pgoff_t ilx,
> + enum swap_cache_result *result)
> +{
> + enum swap_cache_result cache_result;
> + void *shadow = NULL;
> + struct folio *folio;
> +
> + folio = swap_cache_get_folio(entry, NULL, 0, &shadow);
> + if (folio) {
> + cache_result = SWAP_CACHE_HIT;
> + } else if (swap_use_no_readahead(swp_swap_info(entry), entry)) {
> + folio = swapin_direct(entry, gfp_mask, mpol, ilx, shadow);
> + cache_result = SWAP_CACHE_BYPASS;
> + } else {
> + folio = swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
> + cache_result = SWAP_CACHE_MISS;
> + }
> +
> + if (result)
> + *result = cache_result;
> +
> + return folio;
> +}
> +
> #ifdef CONFIG_SYSFS
> static ssize_t vma_ra_enabled_show(struct kobject *kobj,
> struct kobj_attribute *attr, char *buf)
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 5aa44de11edc..2f77bf143af8 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1840,18 +1840,13 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
> do {
> struct folio *folio;
> unsigned long offset;
> + struct mempolicy *mpol;
> unsigned char swp_count;
> swp_entry_t entry;
> + pgoff_t ilx;
> int ret;
> pte_t ptent;
>
> - struct vm_fault vmf = {
> - .vma = vma,
> - .address = addr,
> - .real_address = addr,
> - .pmd = pmd,
> - };
> -
> if (!pte++) {
> pte = pte_offset_map(pmd, addr);
> if (!pte)
> @@ -1871,8 +1866,10 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
> pte_unmap(pte);
> pte = NULL;
>
> - folio = swapin_entry(entry, GFP_HIGHUSER_MOVABLE,
> - &vmf, NULL);
> + mpol = get_vma_policy(vma, addr, 0, &ilx);
> + folio = swapin_entry_mpol(entry, GFP_HIGHUSER_MOVABLE,
> + mpol, ilx, NULL);
> + mpol_cond_put(mpol);
> if (!folio) {
> /*
> * The entry could have been freed, and will not
IIUC, after the change, we will always use cluster readahead for
swapoff. This may be OK, but at least we need some test results showing
that this behavior change does not cause any issues. And the behavior
change should be described explicitly in the patch description.
And I don't think it's a good abstraction to make swapin_entry_mpol()
always use cluster swapin while swapin_entry() tries to use vma swapin.
I think we can add "struct mempolicy *mpol" and "pgoff_t ilx" parameters
to swapin_entry() and use them when vmf == NULL, roughly as in the
sketch below. If we want to enforce cluster swapin in the swapoff path,
it would be better to add a comment describing why.
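
Something like the following untested sketch, just to illustrate the
idea (the vmf == NULL handling of swap_cache_get_folio() is my
assumption, and error paths are kept as in your current code):

struct folio *swapin_entry(swp_entry_t entry, gfp_t gfp_mask,
			   struct vm_fault *vmf, struct mempolicy *mpol,
			   pgoff_t ilx, enum swap_cache_result *result)
{
	enum swap_cache_result cache_result;
	void *shadow = NULL;
	struct folio *folio;

	folio = swap_cache_get_folio(entry, vmf ? vmf->vma : NULL,
				     vmf ? vmf->address : 0, &shadow);
	if (folio) {
		cache_result = SWAP_CACHE_HIT;
		goto done;
	}

	/* Without a vmf, the caller must supply mpol/ilx. */
	if (vmf)
		mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);

	if (swap_use_no_readahead(swp_swap_info(entry), entry)) {
		folio = swapin_direct(entry, gfp_mask, mpol, ilx, shadow);
		cache_result = SWAP_CACHE_BYPASS;
	} else if (vmf && swap_use_vma_readahead()) {
		folio = swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf);
		cache_result = SWAP_CACHE_MISS;
	} else {
		/* No vmf: fall back to cluster readahead. */
		folio = swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
		cache_result = SWAP_CACHE_MISS;
	}

	/* Only put the policy if we took the reference here. */
	if (vmf)
		mpol_cond_put(mpol);
done:
	if (result)
		*result = cache_result;
	return folio;
}

Then swapin_entry_mpol() isn't needed, and unuse_pte_range() can call
swapin_entry(entry, GFP_HIGHUSER_MOVABLE, NULL, mpol, ilx, NULL)
directly.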
--
Best Regards,
Huang, Ying