From: Kairui Song <ryncsn@gmail.com>
To: linux-mm@kvack.org
Cc: Andrew Morton <akpm@linux-foundation.org>,
Chris Li <chrisl@kernel.org>,
"Huang, Ying" <ying.huang@intel.com>,
Hugh Dickins <hughd@google.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Matthew Wilcox <willy@infradead.org>,
Michal Hocko <mhocko@suse.com>,
Yosry Ahmed <yosryahmed@google.com>,
David Hildenbrand <david@redhat.com>,
linux-kernel@vger.kernel.org, Kairui Song <kasong@tencent.com>
Subject: [PATCH v2 7/9] mm/swap: avoid a duplicated swap cache lookup for SWP_SYNCHRONOUS_IO
Date: Wed, 3 Jan 2024 01:53:36 +0800
Message-ID: <20240102175338.62012-8-ryncsn@gmail.com>
In-Reply-To: <20240102175338.62012-1-ryncsn@gmail.com>
From: Kairui Song <kasong@tencent.com>
When an xa_value is returned by the swap cache lookup, keep it to be
used later for the workingset refault check instead of doing the lookup
again in swapin_direct.

This does have the side effect of making swapoff also trigger the
workingset check, but that should be fine, since swapoff already
affects the workload in many ways.
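In caller terms, the change looks roughly like this (a simplified
sketch distilled from the hunks below, with locking and error handling
omitted):

    void *shadow = NULL;
    struct folio *folio;

    /* One lookup: returns the folio on a cache hit, or reports the
     * shadow entry (an xa_value left behind by reclaim) through
     * @shadowp on a miss. */
    folio = swap_cache_get_folio(entry, vmf->vma, vmf->address, &shadow);
    if (!folio && swap_use_no_readahead(swp_swap_info(entry), entry))
            /* The direct swapin path reuses the shadow instead of
             * calling get_shadow_from_swap_cache() a second time. */
            folio = swapin_direct(entry, gfp_mask, vmf, shadow);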
After this commit, swapin is about 4% faster for ZRAM. Microbenchmark
result, using madvise to swap out 10G of zero-filled data to ZRAM and
then reading it back in:

Before: 11143285 us
After:  10692644 us (+4.1%)
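For reference, a hypothetical reconstruction of that microbenchmark
(names and sizes here are assumptions, not the exact test program
used):

    /* Fill 10G of anonymous memory with zeros, push it to swap
     * (a ZRAM device) with MADV_PAGEOUT, then time faulting it
     * all back in. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/time.h>

    #define SIZE (10UL << 30) /* 10 GiB */

    int main(void)
    {
            struct timeval t0, t1;
            unsigned long i, sum = 0;
            char *buf = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            if (buf == MAP_FAILED)
                    return 1;
            memset(buf, 0, SIZE);             /* populate the pages */
            madvise(buf, SIZE, MADV_PAGEOUT); /* swap them out */

            gettimeofday(&t0, NULL);
            for (i = 0; i < SIZE; i += 4096)  /* fault each page back */
                    sum += buf[i];
            gettimeofday(&t1, NULL);

            printf("swapin: %ld us (sum=%lu)\n",
                   (t1.tv_sec - t0.tv_sec) * 1000000L +
                   (t1.tv_usec - t0.tv_usec), sum);
            return 0;
    }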
Signed-off-by: Kairui Song <kasong@tencent.com>
---
mm/shmem.c | 2 +-
mm/swap.h | 3 ++-
mm/swap_state.c | 24 +++++++++++++-----------
3 files changed, 16 insertions(+), 13 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 928aa2304932..9da9f7a0e620 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1872,7 +1872,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
}
/* Look it up and read it in.. */
- folio = swap_cache_get_folio(swap, NULL, 0);
+ folio = swap_cache_get_folio(swap, NULL, 0, NULL);
if (!folio) {
/* Or update major stats only when swapin succeeds?? */
if (fault_type) {
diff --git a/mm/swap.h b/mm/swap.h
index 1f4cdb324bf0..9180411afcfe 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -58,7 +58,8 @@ void delete_from_swap_cache(struct folio *folio);
void clear_shadow_from_swap_cache(int type, unsigned long begin,
unsigned long end);
struct folio *swap_cache_get_folio(swp_entry_t entry,
- struct vm_area_struct *vma, unsigned long addr);
+ struct vm_area_struct *vma, unsigned long addr,
+ void **shadowp);
struct folio *filemap_get_incore_folio(struct address_space *mapping,
pgoff_t index);
diff --git a/mm/swap_state.c b/mm/swap_state.c
index f6f1e6f5d782..21badd4f0fc7 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -335,12 +335,18 @@ static inline bool swap_use_vma_readahead(void)
* Caller must lock the swap device or hold a reference to keep it valid.
*/
struct folio *swap_cache_get_folio(swp_entry_t entry,
- struct vm_area_struct *vma, unsigned long addr)
+ struct vm_area_struct *vma, unsigned long addr, void **shadowp)
{
struct folio *folio;
- folio = filemap_get_folio(swap_address_space(entry), swp_offset(entry));
- if (!IS_ERR(folio)) {
+ folio = filemap_get_entry(swap_address_space(entry), swp_offset(entry));
+ if (xa_is_value(folio)) {
+ if (shadowp)
+ *shadowp = folio;
+ return NULL;
+ }
+
+ if (folio) {
bool vma_ra = swap_use_vma_readahead();
bool readahead;
@@ -370,8 +376,6 @@ struct folio *swap_cache_get_folio(swp_entry_t entry,
if (!vma || !vma_ra)
atomic_inc(&swapin_readahead_hits);
}
- } else {
- folio = NULL;
}
return folio;
@@ -876,11 +880,10 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
* in.
*/
static struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
- struct vm_fault *vmf)
+ struct vm_fault *vmf, void *shadow)
{
struct vm_area_struct *vma = vmf->vma;
struct folio *folio;
- void *shadow = NULL;
/* skip swapcache */
folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
@@ -897,7 +900,6 @@ static struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
mem_cgroup_swapin_uncharge_swap(entry);
- shadow = get_shadow_from_swap_cache(entry);
if (shadow)
workingset_refault(folio, shadow);
@@ -931,17 +933,18 @@ struct folio *swapin_entry(swp_entry_t entry, gfp_t gfp_mask,
{
enum swap_cache_result cache_result;
struct mempolicy *mpol;
+ void *shadow = NULL;
struct folio *folio;
pgoff_t ilx;
- folio = swap_cache_get_folio(entry, vmf->vma, vmf->address);
+ folio = swap_cache_get_folio(entry, vmf->vma, vmf->address, &shadow);
if (folio) {
cache_result = SWAP_CACHE_HIT;
goto done;
}
if (swap_use_no_readahead(swp_swap_info(entry), entry)) {
- folio = swapin_direct(entry, gfp_mask, vmf);
+ folio = swapin_direct(entry, gfp_mask, vmf, shadow);
cache_result = SWAP_CACHE_BYPASS;
} else {
mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
@@ -952,7 +955,6 @@ struct folio *swapin_entry(swp_entry_t entry, gfp_t gfp_mask,
mpol_cond_put(mpol);
cache_result = SWAP_CACHE_MISS;
}
-
done:
if (result)
*result = cache_result;
--
2.43.0