From: Nhat Pham <nphamcs@gmail.com>
To: kasong@tencent.com
Cc: Liam.Howlett@oracle.com, akpm@linux-foundation.org,
apopple@nvidia.com, axelrasmussen@google.com, baohua@kernel.org,
baolin.wang@linux.alibaba.com, bhe@redhat.com, byungchul@sk.com,
cgroups@vger.kernel.org, chengming.zhou@linux.dev,
chrisl@kernel.org, corbet@lwn.net, david@kernel.org,
dev.jain@arm.com, gourry@gourry.net, hannes@cmpxchg.org,
hughd@google.com, jannh@google.com, joshua.hahnjy@gmail.com,
lance.yang@linux.dev, lenb@kernel.org, linux-doc@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
linux-pm@vger.kernel.org, lorenzo.stoakes@oracle.com,
matthew.brost@intel.com, mhocko@suse.com, muchun.song@linux.dev,
npache@redhat.com, nphamcs@gmail.com, pavel@kernel.org,
peterx@redhat.com, peterz@infradead.org, pfalcato@suse.de,
rafael@kernel.org, rakie.kim@sk.com, roman.gushchin@linux.dev,
rppt@kernel.org, ryan.roberts@arm.com, shakeel.butt@linux.dev,
shikemeng@huaweicloud.com, surenb@google.com, tglx@kernel.org,
vbabka@suse.cz, weixugc@google.com, ying.huang@linux.alibaba.com,
yosry.ahmed@linux.dev, yuanchu@google.com,
zhengqi.arch@bytedance.com, ziy@nvidia.com, kernel-team@meta.com,
riel@surriel.com
Subject: [PATCH] vswap: fix poor batching behavior of vswap free path
Date: Fri, 20 Feb 2026 13:05:39 -0800
Message-ID: <20260220210539.989603-1-nphamcs@gmail.com>
In-Reply-To: <CAMgjq7AQNGK-a=AOgvn4-V+zGO21QMbMTVbrYSW_R2oDSLoC+A@mail.gmail.com>
Kairui, could you apply this patch on top of the vswap series and run it
through your test suite? It runs fairly well on my system (I also reran
the benchmark on a different host to double-check), but I'd love to get
some data from your end as well.
If there are serious discrepancies, could you also include your build
config etc.? There might be differences in our setups, but since I
managed to reproduce the free-time regression on my first try, I figured
I should just fix it first :)
---------------
Fix two issues that make the swap free path inefficient:
1. At the PTE zapping step, we unnecessarily resolve the backends and
   fall back to a batch size of 1, even though the virtual swap
   infrastructure already supports freeing ranges with mixed backends
   (as long as the PTEs contain virtually contiguous swap slots).
2. vswap_free() frees entries one at a time and drops the cluster lock
   more often than necessary (most notably when releasing non-disk-swap
   backends). Batch consecutive free operations and avoid the
   unnecessary lock releases (a simplified sketch of the batching
   pattern follows this list).
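To illustrate the second point, here is a minimal userspace sketch of
the batching pattern (not the kernel code itself): walk a range of
entries, accumulate consecutive entries that become freeable, and flush
the accumulated batch when crossing a cluster boundary or hitting a
non-freeable entry, then flush any remainder at the end. All names
(sketch_*, SKETCH_CLUSTER_SIZE) and the 512-entry cluster size are
illustrative assumptions, not the kernel's actual identifiers.

/*
 * Simplified sketch of the batched free pattern used in
 * vswap_free_nr_any_cache_only()/swapcache_clear() below.
 */
#include <stdio.h>

#define SKETCH_CLUSTER_SIZE 512

/* Stand-in for the per-entry descriptor state we care about here. */
struct sketch_desc {
	int swap_count;
	int in_swapcache;
};

/* Stand-in for vswap_free_nr(): free 'nr' consecutive entries at once. */
static void sketch_free_nr(unsigned long start, int nr)
{
	printf("freeing %d entries starting at %lu\n", nr, start);
}

static void sketch_put_range(struct sketch_desc *descs, unsigned long entry,
			     int nr)
{
	unsigned long free_start = 0;
	int i, free_nr = 0;

	for (i = 0; i < nr; i++, entry++) {
		/* Flush the pending batch when we cross a cluster boundary. */
		if (free_nr && (entry % SKETCH_CLUSTER_SIZE) == 0) {
			sketch_free_nr(free_start, free_nr);
			free_nr = 0;
		}

		descs[i].swap_count--;
		if (!descs[i].swap_count && !descs[i].in_swapcache) {
			/* Entry became freeable: start or extend the batch. */
			if (!free_nr++)
				free_start = entry;
		} else if (free_nr) {
			/* Non-freeable entry breaks contiguity: flush. */
			sketch_free_nr(free_start, free_nr);
			free_nr = 0;
		}
	}

	/* Flush whatever is left at the end of the range. */
	if (free_nr)
		sketch_free_nr(free_start, free_nr);
}

int main(void)
{
	struct sketch_desc descs[6] = {
		{1, 0}, {1, 0}, {2, 0}, {1, 0}, {1, 0}, {1, 0},
	};

	/* Range straddling a cluster boundary at entry 512. */
	sketch_put_range(descs, 510, 6);
	return 0;
}

With the sample input above, the sketch emits one batched free of 2
entries (510-511, flushed at the boundary) and one of 3 entries
(513-515, flushed at the end), instead of five single-entry frees.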
Per a report from Kairui Song [1], I ran a benchmark on the following system:
free -m
total used free shared buff/cache available
Mem: 31596 5094 11667 19 15302 26502
Swap: 65535 33 65502
I ran the usemem benchmark with n = 1 and 56G, 5 times, and averaged
the results:
Baseline (6.19):
real: mean: 190.93s, stdev: 5.09s
user: mean: 46.62s, stdev: 0.27s
sys: mean: 128.51s, stdev: 5.17s
throughput: mean: 382093 KB/s, stdev: 11173.6 KB/s
free time: mean: 7916690.2 usecs, stdev: 88923.0 usecs
VSS without this patch:
real: mean: 194.59s, stdev: 7.61s
user: mean: 46.71s, stdev: 0.46s
sys: mean: 131.97s, stdev: 7.93s
throughput: mean: 379236.4 KB/s, stdev: 15912.26 KB/s
free time: mean: 10115572.2 usecs, stdev: 108318.35 usecs
VSS with this patch:
real: mean: 187.66s, stdev: 5.67s
user: mean: 46.5s, stdev: 0.16s
sys: mean: 125.3s, stdev: 5.58s
throughput: mean: 387506.4 KB/s, stdev: 12556.56 KB/s
free time: mean: 7029733.8 usecs, stdev: 124661.34 usecs
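In other words, with this patch the mean free time drops by roughly 30%
relative to the unpatched VSS series (10.12s -> 7.03s) and by about 11%
relative to the 6.19 baseline (7.92s -> 7.03s).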
[1]: https://lore.kernel.org/linux-mm/CAMgjq7AQNGK-a=AOgvn4-V+zGO21QMbMTVbrYSW_R2oDSLoC+A@mail.gmail.com/
Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
include/linux/memcontrol.h | 6 +
mm/internal.h | 18 ++-
mm/madvise.c | 2 +-
mm/memcontrol.c | 2 +-
mm/memory.c | 8 +-
mm/vswap.c | 294 ++++++++++++++++++-------------------
6 files changed, 165 insertions(+), 165 deletions(-)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 0651865a4564f..0f7f5489e1675 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -827,6 +827,7 @@ static inline unsigned short mem_cgroup_id(struct mem_cgroup *memcg)
return memcg->id.id;
}
struct mem_cgroup *mem_cgroup_from_id(unsigned short id);
+void mem_cgroup_id_put_many(struct mem_cgroup *memcg, unsigned int n);
#ifdef CONFIG_SHRINKER_DEBUG
static inline unsigned long mem_cgroup_ino(struct mem_cgroup *memcg)
@@ -1289,6 +1290,11 @@ static inline struct mem_cgroup *mem_cgroup_from_id(unsigned short id)
return NULL;
}
+static inline void mem_cgroup_id_put_many(struct mem_cgroup *memcg,
+ unsigned int n)
+{
+}
+
#ifdef CONFIG_SHRINKER_DEBUG
static inline unsigned long mem_cgroup_ino(struct mem_cgroup *memcg)
{
diff --git a/mm/internal.h b/mm/internal.h
index cfe97501e4885..df991f601702c 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -327,8 +327,6 @@ static inline swp_entry_t swap_nth(swp_entry_t entry, long n)
return (swp_entry_t) { entry.val + n };
}
-swp_entry_t swap_move(swp_entry_t entry, long delta);
-
/**
* pte_move_swp_offset - Move the swap entry offset field of a swap pte
* forward or backward by delta
@@ -342,7 +340,7 @@ swp_entry_t swap_move(swp_entry_t entry, long delta);
static inline pte_t pte_move_swp_offset(pte_t pte, long delta)
{
softleaf_t entry = softleaf_from_pte(pte);
- pte_t new = swp_entry_to_pte(swap_move(entry, delta));
+ pte_t new = swp_entry_to_pte(swap_nth(entry, delta));
if (pte_swp_soft_dirty(pte))
new = pte_swp_mksoft_dirty(new);
@@ -372,6 +370,7 @@ static inline pte_t pte_next_swp_offset(pte_t pte)
* @start_ptep: Page table pointer for the first entry.
* @max_nr: The maximum number of table entries to consider.
* @pte: Page table entry for the first entry.
+ * @free_batch: Whether the batch will be passed to free_swap_and_cache_nr().
*
* Detect a batch of contiguous swap entries: consecutive (non-present) PTEs
* containing swap entries all with consecutive offsets and targeting the same
@@ -382,13 +381,15 @@ static inline pte_t pte_next_swp_offset(pte_t pte)
*
* Return: the number of table entries in the batch.
*/
-static inline int swap_pte_batch(pte_t *start_ptep, int max_nr, pte_t pte)
+static inline int swap_pte_batch(pte_t *start_ptep, int max_nr, pte_t pte,
+ bool free_batch)
{
pte_t expected_pte = pte_next_swp_offset(pte);
const pte_t *end_ptep = start_ptep + max_nr;
const softleaf_t entry = softleaf_from_pte(pte);
pte_t *ptep = start_ptep + 1;
unsigned short cgroup_id;
+ int nr;
VM_WARN_ON(max_nr < 1);
VM_WARN_ON(!softleaf_is_swap(entry));
@@ -408,7 +409,14 @@ static inline int swap_pte_batch(pte_t *start_ptep, int max_nr, pte_t pte)
ptep++;
}
- return ptep - start_ptep;
+ nr = ptep - start_ptep;
+ /*
+ * free_swap_and_cache_nr can handle mixed backends, as long as virtual
+ * swap entries backing these PTEs are contiguous.
+ */
+ if (!free_batch && !vswap_can_swapin_thp(entry, nr))
+ return 1;
+ return nr;
}
#endif /* CONFIG_MMU */
diff --git a/mm/madvise.c b/mm/madvise.c
index b617b1be0f535..441da03c5d2b9 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -692,7 +692,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
if (softleaf_is_swap(entry)) {
max_nr = (end - addr) / PAGE_SIZE;
- nr = swap_pte_batch(pte, max_nr, ptent);
+ nr = swap_pte_batch(pte, max_nr, ptent, true);
nr_swap -= nr;
free_swap_and_cache_nr(entry, nr);
clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 50be8066bebec..bfa25eaffa12a 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3597,7 +3597,7 @@ void __maybe_unused mem_cgroup_id_get_many(struct mem_cgroup *memcg,
refcount_add(n, &memcg->id.ref);
}
-static void mem_cgroup_id_put_many(struct mem_cgroup *memcg, unsigned int n)
+void mem_cgroup_id_put_many(struct mem_cgroup *memcg, unsigned int n)
{
if (refcount_sub_and_test(n, &memcg->id.ref)) {
mem_cgroup_id_remove(memcg);
diff --git a/mm/memory.c b/mm/memory.c
index a16bf84ebaaf9..59645ad238e22 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1742,7 +1742,7 @@ static inline int zap_nonpresent_ptes(struct mmu_gather *tlb,
if (!should_zap_cows(details))
return 1;
- nr = swap_pte_batch(pte, max_nr, ptent);
+ nr = swap_pte_batch(pte, max_nr, ptent, true);
rss[MM_SWAPENTS] -= nr;
free_swap_and_cache_nr(entry, nr);
} else if (softleaf_is_migration(entry)) {
@@ -4491,7 +4491,7 @@ static bool can_swapin_thp(struct vm_fault *vmf, pte_t *ptep, int nr_pages)
if (!pte_same(pte, pte_move_swp_offset(vmf->orig_pte, -idx)))
return false;
entry = softleaf_from_pte(pte);
- if (swap_pte_batch(ptep, nr_pages, pte) != nr_pages)
+ if (swap_pte_batch(ptep, nr_pages, pte, false) != nr_pages)
return false;
/*
@@ -4877,7 +4877,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
pte_t folio_pte = ptep_get(folio_ptep);
if (!pte_same(folio_pte, pte_move_swp_offset(vmf->orig_pte, -idx)) ||
- swap_pte_batch(folio_ptep, nr, folio_pte) != nr)
+ swap_pte_batch(folio_ptep, nr, folio_pte, false) != nr)
goto out_nomap;
page_idx = idx;
@@ -4906,7 +4906,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
folio_ptep = vmf->pte - idx;
folio_pte = ptep_get(folio_ptep);
if (!pte_same(folio_pte, pte_move_swp_offset(vmf->orig_pte, -idx)) ||
- swap_pte_batch(folio_ptep, nr, folio_pte) != nr)
+ swap_pte_batch(folio_ptep, nr, folio_pte, false) != nr)
goto check_folio;
page_idx = idx;
diff --git a/mm/vswap.c b/mm/vswap.c
index 2a071d5ae173c..047c6476ef23c 100644
--- a/mm/vswap.c
+++ b/mm/vswap.c
@@ -481,18 +481,18 @@ static void vswap_cluster_free(struct vswap_cluster *cluster)
kvfree_rcu(cluster, rcu);
}
-static inline void release_vswap_slot(struct vswap_cluster *cluster,
- unsigned long index)
+static inline void release_vswap_slot_nr(struct vswap_cluster *cluster,
+ unsigned long index, int nr)
{
unsigned long slot_index = VSWAP_IDX_WITHIN_CLUSTER_VAL(index);
VM_WARN_ON(!spin_is_locked(&cluster->lock));
- cluster->count--;
+ cluster->count -= nr;
- bitmap_clear(cluster->bitmap, slot_index, 1);
+ bitmap_clear(cluster->bitmap, slot_index, nr);
/* we only free uncached empty clusters */
- if (refcount_dec_and_test(&cluster->refcnt))
+ if (refcount_sub_and_test(nr, &cluster->refcnt))
vswap_cluster_free(cluster);
else if (cluster->full && cluster_is_alloc_candidate(cluster)) {
cluster->full = false;
@@ -505,7 +505,7 @@ static inline void release_vswap_slot(struct vswap_cluster *cluster,
}
}
- atomic_dec(&vswap_used);
+ atomic_sub(nr, &vswap_used);
}
/*
@@ -527,23 +527,29 @@ void vswap_rmap_set(struct swap_cluster_info *ci, swp_slot_t slot,
}
/*
- * Caller needs to handle races with other operations themselves.
+ * release_backing - release the backend storage for a given range of virtual
+ * swap slots.
+ *
+ * Entered with the cluster locked, but might drop the lock in between.
+ * This is because several operations, such as releasing physical swap slots
+ * (i.e swap_slot_free_nr()) require the cluster to be unlocked to avoid
+ * deadlocks.
*
- * Specifically, this function is safe to be called in contexts where the swap
- * entry has been added to the swap cache and the associated folio is locked.
- * We cannot race with other accessors, and the swap entry is guaranteed to be
- * valid the whole time (since swap cache implies one refcount).
+ * This is safe, because:
+ *
+ * 1. The swap entry to be freed has refcnt (swap count and swapcache pin)
+ * down to 0, so no one can change its internal state
*
- * We cannot assume that the backends will be of the same type,
- * contiguous, etc. We might have a large folio coalesced from subpages with
- * mixed backend, which is only rectified when it is reclaimed.
+ * 2. The swap entry to be freed still holds a refcnt to the cluster, keeping
+ * the cluster itself valid.
+ *
+ * We will exit the function with the cluster re-locked.
*/
- static void release_backing(swp_entry_t entry, int nr)
+static void release_backing(struct vswap_cluster *cluster, swp_entry_t entry,
+ int nr)
{
- struct vswap_cluster *cluster = NULL;
struct swp_desc *desc;
unsigned long flush_nr, phys_swap_start = 0, phys_swap_end = 0;
- unsigned long phys_swap_released = 0;
unsigned int phys_swap_type = 0;
bool need_flushing_phys_swap = false;
swp_slot_t flush_slot;
@@ -551,9 +557,8 @@ void vswap_rmap_set(struct swap_cluster_info *ci, swp_slot_t slot,
VM_WARN_ON(!entry.val);
- rcu_read_lock();
for (i = 0; i < nr; i++) {
- desc = vswap_iter(&cluster, entry.val + i);
+ desc = __vswap_iter(cluster, entry.val + i);
VM_WARN_ON(!desc);
/*
@@ -573,7 +578,6 @@ void vswap_rmap_set(struct swap_cluster_info *ci, swp_slot_t slot,
if (desc->type == VSWAP_ZSWAP && desc->zswap_entry) {
zswap_entry_free(desc->zswap_entry);
} else if (desc->type == VSWAP_SWAPFILE) {
- phys_swap_released++;
if (!phys_swap_start) {
/* start a new contiguous range of phys swap */
phys_swap_start = swp_slot_offset(desc->slot);
@@ -589,56 +593,49 @@ void vswap_rmap_set(struct swap_cluster_info *ci, swp_slot_t slot,
if (need_flushing_phys_swap) {
spin_unlock(&cluster->lock);
- cluster = NULL;
swap_slot_free_nr(flush_slot, flush_nr);
+ mem_cgroup_uncharge_swap(entry, flush_nr);
+ spin_lock(&cluster->lock);
need_flushing_phys_swap = false;
}
}
- if (cluster)
- spin_unlock(&cluster->lock);
- rcu_read_unlock();
/* Flush any remaining physical swap range */
if (phys_swap_start) {
flush_slot = swp_slot(phys_swap_type, phys_swap_start);
flush_nr = phys_swap_end - phys_swap_start;
+ spin_unlock(&cluster->lock);
swap_slot_free_nr(flush_slot, flush_nr);
+ mem_cgroup_uncharge_swap(entry, flush_nr);
+ spin_lock(&cluster->lock);
}
+}
- if (phys_swap_released)
- mem_cgroup_uncharge_swap(entry, phys_swap_released);
- }
+static void __vswap_swap_cgroup_clear(struct vswap_cluster *cluster,
+ swp_entry_t entry, unsigned int nr_ents);
/*
- * Entered with the cluster locked, but might unlock the cluster.
- * This is because several operations, such as releasing physical swap slots
- * (i.e swap_slot_free_nr()) require the cluster to be unlocked to avoid
- * deadlocks.
- *
- * This is safe, because:
- *
- * 1. The swap entry to be freed has refcnt (swap count and swapcache pin)
- * down to 0, so no one can change its internal state
- *
- * 2. The swap entry to be freed still holds a refcnt to the cluster, keeping
- * the cluster itself valid.
- *
- * We will exit the function with the cluster re-locked.
+ * Entered with the cluster locked. We will exit the function with the cluster
+ * still locked.
*/
-static void vswap_free(struct vswap_cluster *cluster, struct swp_desc *desc,
- swp_entry_t entry)
+static void vswap_free_nr(struct vswap_cluster *cluster, swp_entry_t entry,
+ int nr)
{
- /* Clear shadow if present */
- if (xa_is_value(desc->shadow))
- desc->shadow = NULL;
- spin_unlock(&cluster->lock);
+ struct swp_desc *desc;
+ int i;
- release_backing(entry, 1);
- mem_cgroup_clear_swap(entry, 1);
+ for (i = 0; i < nr; i++) {
+ desc = __vswap_iter(cluster, entry.val + i);
+ /* Clear shadow if present */
+ if (xa_is_value(desc->shadow))
+ desc->shadow = NULL;
+ }
- /* erase forward mapping and release the virtual slot for reallocation */
- spin_lock(&cluster->lock);
- release_vswap_slot(cluster, entry.val);
+ release_backing(cluster, entry, nr);
+ __vswap_swap_cgroup_clear(cluster, entry, nr);
+
+ /* erase forward mapping and release the virtual slots for reallocation */
+ release_vswap_slot_nr(cluster, entry.val, nr);
}
/**
@@ -820,18 +817,32 @@ static bool vswap_free_nr_any_cache_only(swp_entry_t entry, int nr)
struct vswap_cluster *cluster = NULL;
struct swp_desc *desc;
bool ret = false;
- int i;
+ swp_entry_t free_start;
+ int i, free_nr = 0;
+ free_start.val = 0;
rcu_read_lock();
for (i = 0; i < nr; i++) {
+ /* flush pending free batch at cluster boundary */
+ if (free_nr && !VSWAP_IDX_WITHIN_CLUSTER_VAL(entry.val)) {
+ vswap_free_nr(cluster, free_start, free_nr);
+ free_nr = 0;
+ }
desc = vswap_iter(&cluster, entry.val);
VM_WARN_ON(!desc);
ret |= (desc->swap_count == 1 && desc->in_swapcache);
desc->swap_count--;
- if (!desc->swap_count && !desc->in_swapcache)
- vswap_free(cluster, desc, entry);
+ if (!desc->swap_count && !desc->in_swapcache) {
+ if (!free_nr++)
+ free_start = entry;
+ } else if (free_nr) {
+ vswap_free_nr(cluster, free_start, free_nr);
+ free_nr = 0;
+ }
entry.val++;
}
+ if (free_nr)
+ vswap_free_nr(cluster, free_start, free_nr);
if (cluster)
spin_unlock(&cluster->lock);
rcu_read_unlock();
@@ -954,19 +965,33 @@ void swapcache_clear(swp_entry_t entry, int nr)
{
struct vswap_cluster *cluster = NULL;
struct swp_desc *desc;
- int i;
+ swp_entry_t free_start;
+ int i, free_nr = 0;
if (!nr)
return;
+ free_start.val = 0;
rcu_read_lock();
for (i = 0; i < nr; i++) {
+ /* flush pending free batch at cluster boundary */
+ if (free_nr && !VSWAP_IDX_WITHIN_CLUSTER_VAL(entry.val)) {
+ vswap_free_nr(cluster, free_start, free_nr);
+ free_nr = 0;
+ }
desc = vswap_iter(&cluster, entry.val);
desc->in_swapcache = false;
- if (!desc->swap_count)
- vswap_free(cluster, desc, entry);
+ if (!desc->swap_count) {
+ if (!free_nr++)
+ free_start = entry;
+ } else if (free_nr) {
+ vswap_free_nr(cluster, free_start, free_nr);
+ free_nr = 0;
+ }
entry.val++;
}
+ if (free_nr)
+ vswap_free_nr(cluster, free_start, free_nr);
if (cluster)
spin_unlock(&cluster->lock);
rcu_read_unlock();
@@ -1107,11 +1132,13 @@ void vswap_store_folio(swp_entry_t entry, struct folio *folio)
VM_BUG_ON(!folio_test_locked(folio));
VM_BUG_ON(folio->swap.val != entry.val);
- release_backing(entry, nr);
-
rcu_read_lock();
+ desc = vswap_iter(&cluster, entry.val);
+ VM_WARN_ON(!desc);
+ release_backing(cluster, entry, nr);
+
for (i = 0; i < nr; i++) {
- desc = vswap_iter(&cluster, entry.val + i);
+ desc = __vswap_iter(cluster, entry.val + i);
VM_WARN_ON(!desc);
desc->type = VSWAP_FOLIO;
desc->swap_cache = folio;
@@ -1136,11 +1163,13 @@ void swap_zeromap_folio_set(struct folio *folio)
VM_BUG_ON(!folio_test_locked(folio));
VM_BUG_ON(!entry.val);
- release_backing(entry, nr);
-
rcu_read_lock();
+ desc = vswap_iter(&cluster, entry.val);
+ VM_WARN_ON(!desc);
+ release_backing(cluster, entry, nr);
+
for (i = 0; i < nr; i++) {
- desc = vswap_iter(&cluster, entry.val + i);
+ desc = __vswap_iter(cluster, entry.val + i);
VM_WARN_ON(!desc);
desc->type = VSWAP_ZERO;
}
@@ -1261,89 +1290,6 @@ bool vswap_can_swapin_thp(swp_entry_t entry, int nr)
(type == VSWAP_ZERO || type == VSWAP_SWAPFILE);
}
-/**
- * swap_move - increment the swap slot by delta, checking the backing state and
- * return 0 if the backing state does not match (i.e wrong backing
- * state type, or wrong offset on the backing stores).
- * @entry: the original virtual swap slot.
- * @delta: the offset to increment the original slot.
- *
- * Note that this function is racy unless we can pin the backing state of these
- * swap slots down with swapcache_prepare().
- *
- * Caller should only rely on this function as a best-effort hint otherwise,
- * and should double-check after ensuring the whole range is pinned down.
- *
- * Return: the incremented virtual swap slot if the backing state matches, or
- * 0 if the backing state does not match.
- */
-swp_entry_t swap_move(swp_entry_t entry, long delta)
-{
- struct vswap_cluster *cluster = NULL;
- struct swp_desc *desc, *next_desc;
- swp_entry_t next_entry;
- struct folio *folio = NULL, *next_folio = NULL;
- enum swap_type type, next_type;
- swp_slot_t slot = {0}, next_slot = {0};
-
- next_entry.val = entry.val + delta;
-
- rcu_read_lock();
-
- /* Look up first descriptor and get its type and backing store */
- desc = vswap_iter(&cluster, entry.val);
- if (!desc) {
- rcu_read_unlock();
- return (swp_entry_t){0};
- }
-
- type = desc->type;
- if (type == VSWAP_ZSWAP) {
- /* zswap not supported for move */
- spin_unlock(&cluster->lock);
- rcu_read_unlock();
- return (swp_entry_t){0};
- }
- if (type == VSWAP_FOLIO)
- folio = desc->swap_cache;
- else if (type == VSWAP_SWAPFILE)
- slot = desc->slot;
-
- /* Look up second descriptor and get its type and backing store */
- next_desc = vswap_iter(&cluster, next_entry.val);
- if (!next_desc) {
- rcu_read_unlock();
- return (swp_entry_t){0};
- }
-
- next_type = next_desc->type;
- if (next_type == VSWAP_FOLIO)
- next_folio = next_desc->swap_cache;
- else if (next_type == VSWAP_SWAPFILE)
- next_slot = next_desc->slot;
-
- if (cluster)
- spin_unlock(&cluster->lock);
-
- rcu_read_unlock();
-
- /* Check if types match */
- if (next_type != type)
- return (swp_entry_t){0};
-
- /* Check backing state consistency */
- if (type == VSWAP_SWAPFILE &&
- (swp_slot_type(next_slot) != swp_slot_type(slot) ||
- swp_slot_offset(next_slot) !=
- swp_slot_offset(slot) + delta))
- return (swp_entry_t){0};
-
- if (type == VSWAP_FOLIO && next_folio != folio)
- return (swp_entry_t){0};
-
- return next_entry;
-}
-
/*
* Return the count of contiguous swap entries that share the same
* VSWAP_ZERO status as the starting entry. If is_zeromap is not NULL,
@@ -1863,11 +1809,10 @@ void zswap_entry_store(swp_entry_t swpentry, struct zswap_entry *entry)
struct vswap_cluster *cluster = NULL;
struct swp_desc *desc;
- release_backing(swpentry, 1);
-
rcu_read_lock();
desc = vswap_iter(&cluster, swpentry.val);
VM_WARN_ON(!desc);
+ release_backing(cluster, swpentry, 1);
desc->zswap_entry = entry;
desc->type = VSWAP_ZSWAP;
spin_unlock(&cluster->lock);
@@ -1914,17 +1859,22 @@ bool zswap_empty(swp_entry_t swpentry)
#endif /* CONFIG_ZSWAP */
#ifdef CONFIG_MEMCG
-static unsigned short vswap_cgroup_record(swp_entry_t entry,
- unsigned short memcgid, unsigned int nr_ents)
+/*
+ * __vswap_cgroup_record - record mem_cgroup for a set of swap entries
+ *
+ * Entered with the cluster locked. We will exit the function with the cluster
+ * still locked.
+ */
+static unsigned short __vswap_cgroup_record(struct vswap_cluster *cluster,
+ swp_entry_t entry, unsigned short memcgid,
+ unsigned int nr_ents)
{
- struct vswap_cluster *cluster = NULL;
struct swp_desc *desc;
unsigned short oldid, iter = 0;
int i;
- rcu_read_lock();
for (i = 0; i < nr_ents; i++) {
- desc = vswap_iter(&cluster, entry.val + i);
+ desc = __vswap_iter(cluster, entry.val + i);
VM_WARN_ON(!desc);
oldid = desc->memcgid;
desc->memcgid = memcgid;
@@ -1932,6 +1882,37 @@ static unsigned short vswap_cgroup_record(swp_entry_t entry,
iter = oldid;
VM_WARN_ON(iter != oldid);
}
+
+ return oldid;
+}
+
+/*
+ * Clear swap cgroup for a range of swap entries.
+ * Entered with the cluster locked. Caller must be under rcu_read_lock().
+ */
+static void __vswap_swap_cgroup_clear(struct vswap_cluster *cluster,
+ swp_entry_t entry, unsigned int nr_ents)
+{
+ unsigned short id;
+ struct mem_cgroup *memcg;
+
+ id = __vswap_cgroup_record(cluster, entry, 0, nr_ents);
+ memcg = mem_cgroup_from_id(id);
+ if (memcg)
+ mem_cgroup_id_put_many(memcg, nr_ents);
+}
+
+static unsigned short vswap_cgroup_record(swp_entry_t entry,
+ unsigned short memcgid, unsigned int nr_ents)
+{
+ struct vswap_cluster *cluster = NULL;
+ struct swp_desc *desc;
+ unsigned short oldid;
+
+ rcu_read_lock();
+ desc = vswap_iter(&cluster, entry.val);
+ VM_WARN_ON(!desc);
+ oldid = __vswap_cgroup_record(cluster, entry, memcgid, nr_ents);
spin_unlock(&cluster->lock);
rcu_read_unlock();
@@ -1999,6 +1980,11 @@ unsigned short lookup_swap_cgroup_id(swp_entry_t entry)
rcu_read_unlock();
return ret;
}
+#else /* !CONFIG_MEMCG */
+static void __vswap_swap_cgroup_clear(struct vswap_cluster *cluster,
+ swp_entry_t entry, unsigned int nr_ents)
+{
+}
#endif /* CONFIG_MEMCG */
int vswap_init(void)
--
2.47.3