From: Andrew Morton <akpm@linux-foundation.org>
To: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Minchan Kim <minchan@kernel.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: use unique zsmalloc caches names
Date: Thu, 5 Sep 2024 14:52:09 -0700 [thread overview]
Message-ID: <20240905145209.641c8f127ba353832a1be778@linux-foundation.org> (raw)
In-Reply-To: <20240905064736.2250735-1-senozhatsky@chromium.org>

On Thu, 5 Sep 2024 15:47:23 +0900 Sergey Senozhatsky <senozhatsky@chromium.org> wrote:
> Each zsmalloc pool maintains several named kmem-caches for
> zs_handle-s and zspage-s. On a system with multiple zsmalloc
> pools and CONFIG_DEBUG_VM this triggers kmem_cache_sanity_check():
>
> kmem_cache of name 'zspage' already exists
> WARNING: at mm/slab_common.c:108 do_kmem_cache_create_usercopy+0xb5/0x310
> ...
>
> kmem_cache of name 'zs_handle' already exists
> WARNING: at mm/slab_common.c:108 do_kmem_cache_create_usercopy+0xb5/0x310
> ...

This is old code. Did something recently change to trigger this warning?

> We provide the zram device name when initializing its zsmalloc pool,
> so we can use that same name for the zsmalloc caches and, hence,
> create unique names that can easily be linked to the zram device
> that created them.
>
> So instead of having this
>
> cat /proc/slabinfo
> slabinfo - version: 2.1
> zspage 46 46 ...
> zs_handle 128 128 ...
> zspage 34270 34270 ...
> zs_handle 34816 34816 ...
> zspage 0 0 ...
> zs_handle 0 0 ...
>
> We now have this
>
> cat /proc/slabinfo
> slabinfo - version: 2.1
> zspage-zram2 46 46 ...
> zs_handle-zram2 128 128 ...
> zspage-zram0 34270 34270 ...
> zs_handle-zram0 34816 34816 ...
> zspage-zram1 0 0 ...
> zs_handle-zram1 0 0 ...
>
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -293,13 +293,17 @@ static void SetZsPageMovable(struct zs_pool *pool, struct zspage *zspage) {}
>
> static int create_cache(struct zs_pool *pool)
> {
> - pool->handle_cachep = kmem_cache_create("zs_handle", ZS_HANDLE_SIZE,
> - 0, 0, NULL);
> + char name[32];
> +
> + snprintf(name, sizeof(name), "zs_handle-%s", pool->name);

Always scary seeing code making such assumptions about its arguments in
this fashion. Can we use kasprintf() and sleep well at night?
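
Something along these lines, perhaps (untested sketch; it assumes the
slab core copies the name string, so the buffer can be freed right after
kmem_cache_create() - if that isn't the case, the allocated name would
have to stay around until kmem_cache_destroy()):

	char *name;

	name = kasprintf(GFP_KERNEL, "zs_handle-%s", pool->name);
	if (!name)
		return 1;
	pool->handle_cachep = kmem_cache_create(name, ZS_HANDLE_SIZE,
						0, 0, NULL);
	/* assumption: the slab core duplicates the name string */
	kfree(name);
	if (!pool->handle_cachep)
		return 1;
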
> + pool->handle_cachep = kmem_cache_create(name, ZS_HANDLE_SIZE,
> + 0, 0, NULL);
> if (!pool->handle_cachep)
> return 1;
>
> - pool->zspage_cachep = kmem_cache_create("zspage", sizeof(struct zspage),
> - 0, 0, NULL);
> + snprintf(name, sizeof(name), "zspage-%s", pool->name);
> + pool->zspage_cachep = kmem_cache_create(name, sizeof(struct zspage),
> + 0, 0, NULL);
> if (!pool->zspage_cachep) {
> kmem_cache_destroy(pool->handle_cachep);
> pool->handle_cachep = NULL;

I guess we want to backport this into earlier kernels? If so, what
would be a suitable Fixes:?