* [PATCH 1/2] mm/zsmalloc: fix class per-fullness zspage counts
From: Chengming Zhou @ 2024-06-27 7:59 UTC
To: minchan, senozhatsky, akpm; +Cc: linux-mm, linux-kernel, chengming.zhou
We always use insert_zspage() and remove_zspage() to update a zspage's
fullness location, and those helpers keep the per-fullness zspage counts
accurate.

But the special async free path uses "splice" instead of remove_zspage(),
so the per-fullness zspage count for ZS_INUSE_RATIO_0 is never decreased.

Fix it by decrementing the count while iterating over the zspage free list.
Signed-off-by: Chengming Zhou <chengming.zhou@linux.dev>
---
mm/zsmalloc.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index fec1a39e5bbe..7fc25fa4e6b3 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1883,6 +1883,7 @@ static void async_free_zspage(struct work_struct *work)
 
 		class = zspage_class(pool, zspage);
 		spin_lock(&class->lock);
+		class_stat_dec(class, ZS_INUSE_RATIO_0, 1);
 		__free_zspage(pool, class, zspage);
 		spin_unlock(&class->lock);
 	}
--
2.20.1
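The invariant behind the fix: insert_zspage() and remove_zspage() adjust a per-class, per-fullness counter in lockstep with the fullness list they maintain, while async_free_zspage() splices the whole ZS_INUSE_RATIO_0 list away without going through remove_zspage(). A minimal userspace model of that drift and of the one-line fix follows; it is a hypothetical simplification (the real helpers in mm/zsmalloc.c take locks and manage real list heads), but it compiles and runs as-is:

#include <stdio.h>

/* Stand-ins for a size class and its ZS_INUSE_RATIO_* fullness buckets. */
enum { RATIO_0, RATIO_10, RATIO_100, NR_FULLNESS };

struct class_model {
	long stats[NR_FULLNESS];    /* per-fullness zspage counts */
	long list_len[NR_FULLNESS]; /* length of each fullness list */
};

/* Normal paths keep list and counter in lockstep, as the kernel's
 * insert_zspage()/remove_zspage() helpers do. */
static void insert_zspage(struct class_model *c, int fg)
{
	c->list_len[fg]++;
	c->stats[fg]++;
}

/* async_free_zspage() splices the RATIO_0 list onto a private list and
 * frees each zspage; without the patch nothing decrements the counter. */
static void async_free_all(struct class_model *c, int patched)
{
	while (c->list_len[RATIO_0] > 0) {
		c->list_len[RATIO_0]--;      /* the zspage really goes away */
		if (patched)
			c->stats[RATIO_0]--; /* the added class_stat_dec() */
	}
}

int main(void)
{
	struct class_model c = { { 0 }, { 0 } };

	for (int i = 0; i < 3; i++)
		insert_zspage(&c, RATIO_0);
	async_free_all(&c, 0);
	printf("unpatched: RATIO_0 count = %ld (expected 0)\n", c.stats[RATIO_0]);

	c = (struct class_model){ { 0 }, { 0 } };
	for (int i = 0; i < 3; i++)
		insert_zspage(&c, RATIO_0);
	async_free_all(&c, 1);
	printf("patched:   RATIO_0 count = %ld\n", c.stats[RATIO_0]);
	return 0;
}

The unpatched run leaks a count of 3; the kernel counter drifts upward the same way, once per async-freed zspage.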
* [PATCH 2/2] mm/zsmalloc: move record_obj() into obj_malloc()
From: Chengming Zhou @ 2024-06-27 7:59 UTC
To: minchan, senozhatsky, akpm; +Cc: linux-mm, linux-kernel, chengming.zhou
We always call record_obj() to make the handle point to the object after
obj_malloc(), so simplify the code by moving record_obj() into
obj_malloc(). No functional change intended.
Signed-off-by: Chengming Zhou <chengming.zhou@linux.dev>
---
mm/zsmalloc.c | 15 ++++++---------
1 file changed, 6 insertions(+), 9 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 7fc25fa4e6b3..c2f4e62ffb46 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1306,7 +1306,6 @@ static unsigned long obj_malloc(struct zs_pool *pool,
 	void *vaddr;
 
 	class = pool->size_class[zspage->class];
-	handle |= OBJ_ALLOCATED_TAG;
 	obj = get_freeobj(zspage);
 
 	offset = obj * class->size;
@@ -1322,15 +1321,16 @@ static unsigned long obj_malloc(struct zs_pool *pool,
 	set_freeobj(zspage, link->next >> OBJ_TAG_BITS);
 	if (likely(!ZsHugePage(zspage)))
 		/* record handle in the header of allocated chunk */
-		link->handle = handle;
+		link->handle = handle | OBJ_ALLOCATED_TAG;
 	else
 		/* record handle to page->index */
-		zspage->first_page->index = handle;
+		zspage->first_page->index = handle | OBJ_ALLOCATED_TAG;
 
 	kunmap_atomic(vaddr);
 	mod_zspage_inuse(zspage, 1);
 
 	obj = location_to_obj(m_page, obj);
+	record_obj(handle, obj);
 
 	return obj;
 }
@@ -1348,7 +1348,7 @@ static unsigned long obj_malloc(struct zs_pool *pool,
  */
 unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 {
-	unsigned long handle, obj;
+	unsigned long handle;
 	struct size_class *class;
 	int newfg;
 	struct zspage *zspage;
@@ -1371,10 +1371,9 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 	spin_lock(&class->lock);
 	zspage = find_get_zspage(class);
 	if (likely(zspage)) {
-		obj = obj_malloc(pool, zspage, handle);
+		obj_malloc(pool, zspage, handle);
 		/* Now move the zspage to another fullness group, if required */
 		fix_fullness_group(class, zspage);
-		record_obj(handle, obj);
 		class_stat_inc(class, ZS_OBJS_INUSE, 1);
 
 		goto out;
@@ -1389,10 +1388,9 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 	}
 
 	spin_lock(&class->lock);
-	obj = obj_malloc(pool, zspage, handle);
+	obj_malloc(pool, zspage, handle);
 	newfg = get_fullness_group(class, zspage);
 	insert_zspage(class, zspage, newfg);
-	record_obj(handle, obj);
 	atomic_long_add(class->pages_per_zspage, &pool->pages_allocated);
 	class_stat_inc(class, ZS_OBJS_ALLOCATED, class->objs_per_zspage);
 	class_stat_inc(class, ZS_OBJS_INUSE, 1);
@@ -1591,7 +1589,6 @@ static void migrate_zspage(struct zs_pool *pool, struct zspage *src_zspage,
 		free_obj = obj_malloc(pool, dst_zspage, handle);
 		zs_object_copy(class, free_obj, used_obj);
 		obj_idx++;
-		record_obj(handle, free_obj);
 		obj_free(class->size, used_obj);
 
 		/* Stop if there is no more space */
--
2.20.1
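In caller terms the refactor collapses a two-step idiom (allocate, then record) into one call. The toy program below models only the handle indirection that record_obj() maintains; handle_t and the simplified signatures are stand-ins, not the kernel's (in zsmalloc a handle is an allocated slot holding the object's current location, so migration can repoint it later):

#include <stdio.h>

typedef unsigned long handle_t;

/* Models record_obj(): store the object's location through the handle. */
static void record_obj(handle_t handle, unsigned long obj)
{
	*(unsigned long *)handle = obj;
}

/* Models the patched obj_malloc(): the allocator records the object
 * itself, so callers no longer need the return value just to pass it
 * to record_obj() afterwards. */
static unsigned long obj_malloc(handle_t handle)
{
	static unsigned long next_obj = 0x1000; /* fake object locations */
	unsigned long obj = next_obj;

	next_obj += 0x100;
	record_obj(handle, obj); /* previously every caller's job */
	return obj;
}

int main(void)
{
	unsigned long slot = 0;            /* backing store for one handle */
	handle_t handle = (handle_t)&slot;

	obj_malloc(handle);                /* return value can be ignored */
	printf("handle now points at object 0x%lx\n", slot);
	return 0;
}

One subtlety the diff itself handles: OBJ_ALLOCATED_TAG used to be OR-ed into handle up front, so after the move the tag is applied only where the handle is stashed inside the zspage (link->handle or first_page->index), leaving the handle value untagged for record_obj().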
* Re: [PATCH 1/2] mm/zsmalloc: fix class per-fullness zspage counts
From: Andrew Morton @ 2024-06-27 20:33 UTC
To: Chengming Zhou; +Cc: minchan, senozhatsky, linux-mm, linux-kernel
On Thu, 27 Jun 2024 15:59:58 +0800 Chengming Zhou <chengming.zhou@linux.dev> wrote:
> We always use insert_zspage() and remove_zspage() to update a zspage's
> fullness location, and those helpers keep the per-fullness zspage counts
> accurate.
>
> But the special async free path uses "splice" instead of remove_zspage(),
> so the per-fullness zspage count for ZS_INUSE_RATIO_0 is never decreased.
>
> Fix it by decrementing the count while iterating over the zspage free list.
>
> ...
>
> Signed-off-by: Chengming Zhou <chengming.zhou@linux.dev>
> +++ b/mm/zsmalloc.c
> @@ -1883,6 +1883,7 @@ static void async_free_zspage(struct work_struct *work)
>
>  		class = zspage_class(pool, zspage);
>  		spin_lock(&class->lock);
> +		class_stat_dec(class, ZS_INUSE_RATIO_0, 1);
>  		__free_zspage(pool, class, zspage);
>  		spin_unlock(&class->lock);
>  	}
What are the runtime effects of this bug? Should we backport the fix
into earlier kernels? And are we able to identify the appropriate
Fixes: target?
Thanks.
* Re: [PATCH 1/2] mm/zsmalloc: fix class per-fullness zspage counts
From: Sergey Senozhatsky @ 2024-06-28 0:51 UTC
To: Andrew Morton
Cc: Chengming Zhou, minchan, senozhatsky, linux-mm, linux-kernel
On (24/06/27 13:33), Andrew Morton wrote:
> On Thu, 27 Jun 2024 15:59:58 +0800 Chengming Zhou <chengming.zhou@linux.dev> wrote:
> > We always use insert_zspage() and remove_zspage() to update a zspage's
> > fullness location, and those helpers keep the per-fullness zspage counts
> > accurate.
> >
> > But the special async free path uses "splice" instead of remove_zspage(),
> > so the per-fullness zspage count for ZS_INUSE_RATIO_0 is never decreased.
> >
> > Fix it by decrementing the count while iterating over the zspage free list.
> >
> > ...
> >
> > Signed-off-by: Chengming Zhou <chengming.zhou@linux.dev>
> > +++ b/mm/zsmalloc.c
> > @@ -1883,6 +1883,7 @@ static void async_free_zspage(struct work_struct *work)
> >
> >  		class = zspage_class(pool, zspage);
> >  		spin_lock(&class->lock);
> > +		class_stat_dec(class, ZS_INUSE_RATIO_0, 1);
> >  		__free_zspage(pool, class, zspage);
> >  		spin_unlock(&class->lock);
> >  	}
>
> What are the runtime effects of this bug? Should we backport the fix
> into earlier kernels? And are we able to identify the appropriate
> Fixes: target?
I don't think this has any run-time visible effects. The class stats
ZS_OBJS_ALLOCATED and ZS_OBJS_INUSE play their role during compaction
(defragmentation), but ZS_INUSE_RATIO_0 is a zspage fullness bucket, and
moreover the bucket for empty zspages, which we don't look at during
compaction.

With CONFIG_ZSMALLOC_STAT enabled we show pool/class stats to users via
zs_stats_size_show(), but ZS_INUSE_RATIO_0 is ignored there. So nobody
external can see what value it holds, and ZS_INUSE_RATIO_0 is never of
any importance to zsmalloc internally.

The code in question (async_free_zspage()) was introduced by 48b4800a1c6af
in 2016-07-26, so it's been like this for a long time.
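For reference, the compaction heuristic mentioned above sizes its work purely from the two object counters. The sketch below paraphrases zs_can_compact() from mm/zsmalloc.c of that period; treat the accessor name zs_stat_get() and the exact details as approximate:

/*
 * Compaction potential is computed from ZS_OBJS_ALLOCATED and
 * ZS_OBJS_INUSE only; the ZS_INUSE_RATIO_* fullness counters,
 * including the miscounted ZS_INUSE_RATIO_0, are never consulted.
 */
static unsigned long zs_can_compact(struct size_class *class)
{
	unsigned long obj_wasted;
	unsigned long obj_allocated = zs_stat_get(class, ZS_OBJS_ALLOCATED);
	unsigned long obj_used = zs_stat_get(class, ZS_OBJS_INUSE);

	if (obj_allocated <= obj_used)
		return 0;

	/* Wasted objects, expressed in whole zspages' worth of pages. */
	obj_wasted = obj_allocated - obj_used;
	obj_wasted /= class->objs_per_zspage;

	return obj_wasted * class->pages_per_zspage;
}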
* Re: [PATCH 1/2] mm/zsmalloc: fix class per-fullness zspage counts
From: Sergey Senozhatsky @ 2024-06-28 0:55 UTC
To: Chengming Zhou; +Cc: minchan, senozhatsky, akpm, linux-mm, linux-kernel
On (24/06/27 15:59), Chengming Zhou wrote:
> We always use insert_zspage() and remove_zspage() to update a zspage's
> fullness location, and those helpers keep the per-fullness zspage counts
> accurate.
>
> But the special async free path uses "splice" instead of remove_zspage(),
> so the per-fullness zspage count for ZS_INUSE_RATIO_0 is never decreased.
>
> Fix it by decrementing the count while iterating over the zspage free list.
So this doesn't fix anything. ZS_INUSE_RATIO_0 is just a "placeholder"
which is never used anywhere.

We can land this patch if you insist/prefer, but it's essentially a NOOP
(in the sense that it has no visible effects).
* Re: [PATCH 1/2] mm/zsmalloc: fix class per-fullness zspage counts
From: Sergey Senozhatsky @ 2024-06-28 1:08 UTC
To: Chengming Zhou; +Cc: minchan, akpm, linux-mm, linux-kernel, Sergey Senozhatsky
On (24/06/28 09:55), Sergey Senozhatsky wrote:
> We can land this patch if you insist/prefer, but it's essentially a NOOP
> (in the sense that it has no visible effects).
Forgot to mention, sorry. If we land this then I'll ask for a different
commit message. "Fix" sounds alarming. It would be better to say something
like "keep that stat counter accurate, but RATIO_0 is never used anywhere".
* Re: [PATCH 2/2] mm/zsmalloc: move record_obj() into obj_malloc()
From: Sergey Senozhatsky @ 2024-06-28 2:25 UTC
To: Chengming Zhou; +Cc: minchan, senozhatsky, akpm, linux-mm, linux-kernel
On (24/06/27 15:59), Chengming Zhou wrote:
> We always call record_obj() to make the handle point to the object after
> obj_malloc(), so simplify the code by moving record_obj() into
> obj_malloc(). No functional change intended.
>
> Signed-off-by: Chengming Zhou <chengming.zhou@linux.dev>
I guess I don't have a strong opinion on that.
FWIW
Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
[..]
> @@ -1591,7 +1589,6 @@ static void migrate_zspage(struct zs_pool *pool, struct zspage *src_zspage,
>  		free_obj = obj_malloc(pool, dst_zspage, handle);
>  		zs_object_copy(class, free_obj, used_obj);
>  		obj_idx++;
> -		record_obj(handle, free_obj);
>  		obj_free(class->size, used_obj);
I sort of like how, in the old code, we would copy the object first and
only then record it, a sort of "commit" stage. But I don't see any issues
with the new code.
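Condensed from the hunk above, the two orderings being compared (not compilable on its own; comments added):

/* Old migrate_zspage() inner loop: the handle was repointed only after
 * the payload copy completed, the "commit" step described above. */
free_obj = obj_malloc(pool, dst_zspage, handle); /* reserve dst slot */
zs_object_copy(class, free_obj, used_obj);       /* copy the payload */
record_obj(handle, free_obj);                    /* commit: repoint  */
obj_free(class->size, used_obj);                 /* release src slot */

/* New code: record_obj() runs inside obj_malloc(), i.e. before the
 * copy; per the review above, no issue was seen with that ordering. */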
* Re: [PATCH 1/2] mm/zsmalloc: fix class per-fullness zspage counts
From: Chengming Zhou @ 2024-06-28 3:19 UTC
To: Sergey Senozhatsky, akpm; +Cc: minchan, linux-mm, linux-kernel
On 2024/6/28 09:08, Sergey Senozhatsky wrote:
> On (24/06/28 09:55), Sergey Senozhatsky wrote:
>> We can land this patch if you insist/prefer, but it's essentially a NOOP
>> (in the sense that it has no visible effects).
>
> Forgot to mention, sorry. If we land this then I'll ask for a different
> commit message. "Fix" sounds alarming. It would be better to say something
> like "keep that stat counter accurate, but RATIO_O is never used anywhere".
You're right, "fix" sounds alarming although it's a fix, but it has no
any virtual effect. I just found it when I last time changed to the
per-class locking but forgot to send it.
Andrew, could you please help to change the subject as Sergey asked?
Sorry, I should have noted these details in the changelog when I wrote
this subject.
Thanks.
* Re: [PATCH 1/2] mm/zsmalloc: fix class per-fullness zspage counts
From: Sergey Senozhatsky @ 2024-07-01 1:37 UTC
To: Chengming Zhou; +Cc: Sergey Senozhatsky, akpm, minchan, linux-mm, linux-kernel
On (24/06/28 11:19), Chengming Zhou wrote:
> Andrew, could you please help to change the subject as Sergey asked?
> Sorry, I should have noted these details in the changelog when I wrote
> this subject.
Chengming, can I ask you to resend these patches with a proper commit
message?
* Re: [PATCH 1/2] mm/zsmalloc: fix class per-fullness zspage counts
From: Chengming Zhou @ 2024-07-01 2:20 UTC
To: Sergey Senozhatsky; +Cc: akpm, minchan, linux-mm, linux-kernel
On 2024/7/1 09:37, Sergey Senozhatsky wrote:
> On (24/06/28 11:19), Chengming Zhou wrote:
>> Andrew, could you please help to change the subject as Sergey asked?
>> Sorry, I should have noted these details in the changelog when I wrote
>> this subject.
>
> Chengming, can I ask you to resend these patches with a proper commit
> message?
Of course, will update and send later.
* Re: [PATCH 1/2] mm/zsmalloc: fix class per-fullness zspage counts
From: Chengming Zhou @ 2024-07-01 2:49 UTC
To: Sergey Senozhatsky; +Cc: akpm, minchan, linux-mm, linux-kernel
On 2024/7/1 10:20, Chengming Zhou wrote:
> On 2024/7/1 09:37, Sergey Senozhatsky wrote:
>> On (24/06/28 11:19), Chengming Zhou wrote:
>>> Andrew, could you please help to change the subject as Sergey asked?
>>> Sorry, I should have noted these details in the changelog when I wrote
>>> this subject.
>>
>> Chengming, can I ask you to resend these patches with a proper commit
>> message?
>
> Of course, will update and send later.
I just pulled mm/mm-unstable to get ready to update, only to find that
Andrew has already changed the subject and commit message as below, which
is great! Thanks!
commit 84d0abc5905bbdf29dc7ff8083d21145d78a3ffe
Author: Chengming Zhou <chengming.zhou@linux.dev>
Date: Thu Jun 27 15:59:58 2024 +0800
    mm/zsmalloc: clarify class per-fullness zspage counts

    We always use insert_zspage() and remove_zspage() to update a zspage's
    fullness location, and those helpers keep the per-fullness zspage
    counts accurate.

    But the special async free path uses "splice" instead of
    remove_zspage(), so the per-fullness zspage count for ZS_INUSE_RATIO_0
    is never decreased.

    Clean things up by decrementing the count while iterating over the
    zspage free list.

    This doesn't actually fix anything. ZS_INUSE_RATIO_0 is just a
    "placeholder" which is never used anywhere.

    Link: https://lkml.kernel.org/r/20240627075959.611783-1-chengming.zhou@linux.dev
    Signed-off-by: Chengming Zhou <chengming.zhou@linux.dev>
    Cc: Minchan Kim <minchan@kernel.org>
    Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
* Re: [PATCH 1/2] mm/zsmalloc: fix class per-fullness zspage counts
From: Sergey Senozhatsky @ 2024-07-01 3:13 UTC
To: Chengming Zhou; +Cc: Sergey Senozhatsky, akpm, minchan, linux-mm, linux-kernel
On (24/07/01 10:49), Chengming Zhou wrote:
> On 2024/7/1 10:20, Chengming Zhou wrote:
> > On 2024/7/1 09:37, Sergey Senozhatsky wrote:
> > > On (24/06/28 11:19), Chengming Zhou wrote:
> > > > Andrew, could you please help to change the subject as Sergey asked?
> > > > Sorry, I should have noted these details in the changelog when I wrote
> > > > this subject.
> > >
> > > Chengming, can I ask you to resend these patches with a proper commit
> > > message?
> >
> > Of course, will update and send later.
>
> I just pulled mm/mm-unstable to get ready to update, only to find that
> Andrew has already changed the subject and commit message as below, which
> is great! Thanks!
Oh, I see, thanks for checking it. Thank you Andrew!