* [PATCH v3 mm-hotfixes] mm/zswap: fix inconsistency when zswap_store_page() fails
@ 2025-01-29 10:08 Hyeonggon Yoo
2025-01-29 15:52 ` Yosry Ahmed
2025-01-31 23:44 ` Nhat Pham
0 siblings, 2 replies; 3+ messages in thread
From: Hyeonggon Yoo @ 2025-01-29 10:08 UTC (permalink / raw)
To: Kanchana P Sridhar, Johannes Weiner, Yosry Ahmed, Nhat Pham,
Chengming Zhou, Andrew Morton
Cc: linux-mm, Hyeonggon Yoo, stable
Commit b7c0ccdfbafd ("mm: zswap: support large folios in zswap_store()")
skips charging any zswap entries when it fails to zswap the entire
folio.
However, when some base pages have already been zswapped and storing
the rest of the folio fails, the operation is rolled back. When freeing
the zswap entries for those pages, zswap_entry_free() uncharges entries
that were never charged, leaving zswap charging inconsistent.
This inconsistency triggers two warnings with the following steps:
# On a machine with 64GiB of RAM and 36GiB of zswap
$ stress-ng --bigheap 2 # wait until the OOM-killer kills stress-ng
$ sudo reboot
The two warnings are:
in mm/memcontrol.c:163, function obj_cgroup_release():
WARN_ON_ONCE(nr_bytes & (PAGE_SIZE - 1));
in mm/page_counter.c:60, function page_counter_cancel():
if (WARN_ONCE(new < 0, "page_counter underflow: %ld nr_pages=%lu\n",
new, nr_pages))
zswap_stored_pages also becomes inconsistent in the same way.
As suggested by Kanchana, increment zswap_stored_pages and charge zswap
entries within zswap_store_page() when it succeeds. This way, when
storing the entire folio fails, zswap_entry_free() decrements the
counter and uncharges the entries during rollback.
While this could potentially be optimized by batching objcg charging
and incrementing the counter, let's focus on fixing the bug this time
and leave the optimization for later after some evaluation.
After resolving the inconsistency, the warnings disappear.
Fixes: b7c0ccdfbafd ("mm: zswap: support large folios in zswap_store()")
Cc: stable@vger.kernel.org
Co-developed-by: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
v2 -> v3:
- Addressed Kanchana's feedback:
  - Fixed the inconsistency in zswap_stored_pages as well
  - objcg charging and incrementing zswap_stored_pages are now done
    within zswap_store_page(), one page at a time
mm/zswap.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/mm/zswap.c b/mm/zswap.c
index 6504174fbc6a..f0bd962bffd5 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1504,11 +1504,14 @@ static ssize_t zswap_store_page(struct page *page,
entry->pool = pool;
entry->swpentry = page_swpentry;
entry->objcg = objcg;
+ if (objcg)
+ obj_cgroup_charge_zswap(objcg, entry->length);
entry->referenced = true;
if (entry->length) {
INIT_LIST_HEAD(&entry->lru);
zswap_lru_add(&zswap_list_lru, entry);
}
+ atomic_long_inc(&zswap_stored_pages);
return entry->length;
@@ -1526,7 +1529,6 @@ bool zswap_store(struct folio *folio)
struct obj_cgroup *objcg = NULL;
struct mem_cgroup *memcg = NULL;
struct zswap_pool *pool;
- size_t compressed_bytes = 0;
bool ret = false;
long index;
@@ -1569,15 +1571,11 @@ bool zswap_store(struct folio *folio)
bytes = zswap_store_page(page, objcg, pool);
if (bytes < 0)
goto put_pool;
- compressed_bytes += bytes;
}
- if (objcg) {
- obj_cgroup_charge_zswap(objcg, compressed_bytes);
+ if (objcg)
count_objcg_events(objcg, ZSWPOUT, nr_pages);
- }
- atomic_long_add(nr_pages, &zswap_stored_pages);
count_vm_events(ZSWPOUT, nr_pages);
ret = true;
--
2.47.1
* Re: [PATCH v3 mm-hotfixes] mm/zswap: fix inconsistency when zswap_store_page() fails
From: Yosry Ahmed @ 2025-01-29 15:52 UTC (permalink / raw)
To: Hyeonggon Yoo
Cc: Kanchana P Sridhar, Johannes Weiner, Nhat Pham, Chengming Zhou,
Andrew Morton, linux-mm, stable
On Wed, Jan 29, 2025 at 07:08:44PM +0900, Hyeonggon Yoo wrote:
> Commit b7c0ccdfbafd ("mm: zswap: support large folios in zswap_store()")
> skips charging any zswap entries when it fails to zswap the entire
> folio.
>
> However, when some base pages have already been zswapped and storing
> the rest of the folio fails, the operation is rolled back. When freeing
> the zswap entries for those pages, zswap_entry_free() uncharges entries
> that were never charged, leaving zswap charging inconsistent.
>
> This inconsistency triggers two warnings with the following steps:
> # On a machine with 64GiB of RAM and 36GiB of zswap
> $ stress-ng --bigheap 2 # wait until the OOM-killer kills stress-ng
> $ sudo reboot
>
> The two warnings are:
> in mm/memcontrol.c:163, function obj_cgroup_release():
> WARN_ON_ONCE(nr_bytes & (PAGE_SIZE - 1));
>
> in mm/page_counter.c:60, function page_counter_cancel():
> if (WARN_ONCE(new < 0, "page_counter underflow: %ld nr_pages=%lu\n",
> new, nr_pages))
>
> zswap_stored_pages also becomes inconsistent in the same way.
>
> As suggested by Kanchana, increment zswap_stored_pages and charge zswap
> entries within zswap_store_page() when it succeeds. This way, when
> storing the entire folio fails, zswap_entry_free() decrements the
> counter and uncharges the entries during rollback.
>
> While this could potentially be optimized by batching objcg charging
> and incrementing the counter, let's focus on fixing the bug this time
> and leave the optimization for later after some evaluation.
>
> After resolving the inconsistency, the warnings disappear.
>
> Fixes: b7c0ccdfbafd ("mm: zswap: support large folios in zswap_store()")
> Cc: stable@vger.kernel.org
> Co-developed-by: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
> Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
> Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
I have a few nits, but generally:
Acked-by: Yosry Ahmed <yosry.ahmed@linux.dev>
> ---
>
> v2 -> v3:
> - Addressed Kanchana's feedback:
>   - Fixed the inconsistency in zswap_stored_pages as well
>   - objcg charging and incrementing zswap_stored_pages are now done
>     within zswap_store_page(), one page at a time
>
> mm/zswap.c | 10 ++++------
> 1 file changed, 4 insertions(+), 6 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 6504174fbc6a..f0bd962bffd5 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -1504,11 +1504,14 @@ static ssize_t zswap_store_page(struct page *page,
> entry->pool = pool;
> entry->swpentry = page_swpentry;
> entry->objcg = objcg;
> + if (objcg)
> + obj_cgroup_charge_zswap(objcg, entry->length);
nit: This can be moved to the existing if (objcg) check where we call
obj_cgroup_get(). At that point there shouldn't be a possibility of
failure. If you want to keep it here to make it obvious that we only
charge when we set entry->objcg that's fine, but we can probably move
obj_cgroup_get() here as well in this case.
> entry->referenced = true;
> if (entry->length) {
> INIT_LIST_HEAD(&entry->lru);
> zswap_lru_add(&zswap_list_lru, entry);
> }
> + atomic_long_inc(&zswap_stored_pages);
nit: If you keep the charging after setting entry->objcg because that's
when the freeing path will uncharge, then perhaps you want to move this
after the tree store is successful, because at that point the freeing
path will decrement the counter.
> return entry->length;
>
> @@ -1526,7 +1529,6 @@ bool zswap_store(struct folio *folio)
> struct obj_cgroup *objcg = NULL;
> struct mem_cgroup *memcg = NULL;
> struct zswap_pool *pool;
> - size_t compressed_bytes = 0;
> bool ret = false;
> long index;
>
> @@ -1569,15 +1571,11 @@ bool zswap_store(struct folio *folio)
> bytes = zswap_store_page(page, objcg, pool);
> if (bytes < 0)
> goto put_pool;
Do we need 'bytes' anymore? I think we don't even need
zswap_store_page() to return the compressed size anymore; a boolean
would suffice.
> - compressed_bytes += bytes;
> }
>
> - if (objcg) {
> - obj_cgroup_charge_zswap(objcg, compressed_bytes);
> + if (objcg)
> count_objcg_events(objcg, ZSWPOUT, nr_pages);
> - }
>
> - atomic_long_add(nr_pages, &zswap_stored_pages);
> count_vm_events(ZSWPOUT, nr_pages);
>
> ret = true;
> --
> 2.47.1
>
>
* Re: [PATCH v3 mm-hotfixes] mm/zswap: fix inconsistency when zswap_store_page() fails
From: Nhat Pham @ 2025-01-31 23:44 UTC (permalink / raw)
To: Hyeonggon Yoo
Cc: Kanchana P Sridhar, Johannes Weiner, Yosry Ahmed, Chengming Zhou,
Andrew Morton, linux-mm, stable
On Wed, Jan 29, 2025 at 2:08 AM Hyeonggon Yoo <42.hyeyoo@gmail.com> wrote:
>
> Commit b7c0ccdfbafd ("mm: zswap: support large folios in zswap_store()")
> skips charging any zswap entries when it fails to zswap the entire
> folio.
>
> However, when some base pages have already been zswapped and storing
> the rest of the folio fails, the operation is rolled back. When freeing
> the zswap entries for those pages, zswap_entry_free() uncharges entries
> that were never charged, leaving zswap charging inconsistent.
>
> This inconsistency triggers two warnings with the following steps:
> # On a machine with 64GiB of RAM and 36GiB of zswap
> $ stress-ng --bigheap 2 # wait until the OOM-killer kills stress-ng
> $ sudo reboot
>
> The two warnings are:
> in mm/memcontrol.c:163, function obj_cgroup_release():
> WARN_ON_ONCE(nr_bytes & (PAGE_SIZE - 1));
>
> in mm/page_counter.c:60, function page_counter_cancel():
> if (WARN_ONCE(new < 0, "page_counter underflow: %ld nr_pages=%lu\n",
> new, nr_pages))
>
> zswap_stored_pages also becomes inconsistent in the same way.
Nice catch haha.
>
> As suggested by Kanchana, increment zswap_stored_pages and charge zswap
> entries within zswap_store_page() when it succeeds. This way, when
> storing the entire folio fails, zswap_entry_free() decrements the
> counter and uncharges the entries during rollback.
>
> While this could potentially be optimized by batching objcg charging
> and incrementing the counter, let's focus on fixing the bug this time
> and leave the optimization for later after some evaluation.
>
> After resolving the inconsistency, the warnings disappear.
>
> Fixes: b7c0ccdfbafd ("mm: zswap: support large folios in zswap_store()")
> Cc: stable@vger.kernel.org
> Co-developed-by: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
> Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
> Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
With your fixlet applied:
Acked-by: Nhat Pham <nphamcs@gmail.com>