From: Vlastimil Babka <vbabka@suse.cz>
To: Jianfeng Wang <jianfeng.w.wang@oracle.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: cl@linux.com, akpm@linux-foundation.org, penberg@kernel.org,
rientjes@google.com
Subject: Re: [PATCH v3 0/2] slub: introduce count_partial_free_approx()
Date: Mon, 22 Apr 2024 09:56:20 +0200
Message-ID: <1d795e22-cbcd-4107-978e-96bb459629a4@suse.cz>
In-Reply-To: <20240419175611.47413-1-jianfeng.w.wang@oracle.com>
On 4/19/24 7:56 PM, Jianfeng Wang wrote:
> This patch series fixes a known issue in get_slabinfo(), which relies
> on count_partial() to get the exact count of free objects in a
> kmem_cache_node's partial list. For some slab caches, the partial
> lists can be extremely long. Currently, count_partial() traverses the
> whole partial list to get the exact object count. This can take a long
> time, during which slab allocations are blocked and IRQs are disabled.
> In production, even the NMI watchdog can be triggered because of this.
>
> The proposed fix limits the number of slabs to scan and outputs an
> approximate count for a long partial list. The v1 patch counted N
> slabs from the list's head and used the result to estimate the total
> object count in the list. As suggested by Vlastimil, the v2 patch uses
> an alternative: counting N/2 slabs from each of the list's head and
> tail, which produces a more accurate approximation once the partial
> list has been sorted by kmem_cache_shrink(). In this version, the
> implementation is moved into a new function, count_partial_free_approx();
> count_partial() is still used in sysfs for users who want the exact
> object count.
Added to slab/for-next, thanks!
>
> ---
> Changes since v2 [2]
> - Introduce count_partial_free_approx() and keep count_partial()
> - Use count_partial_free_approx() in get_slabinfo() and slab_out_of_memory()
>
> Changes since v1 [1]
> - Update the approximation method by counting from the list's head and tail
> - Cap the approximation by the total object count
> - Update the commit message to add benchmark results and explain the choice
>
> [1] https://lore.kernel.org/linux-mm/20240411164023.99368-1-jianfeng.w.wang@oracle.com/
> [2] https://lore.kernel.org/linux-mm/20240417185938.5237-2-jianfeng.w.wang@oracle.com/
>
> Thanks,
> --Jianfeng
>
> Jianfeng Wang (2):
> slub: introduce count_partial_free_approx()
> slub: use count_partial_free_approx() in slab_out_of_memory()
>
> mm/slub.c | 41 +++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 39 insertions(+), 2 deletions(-)
>
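For reference, here is a minimal sketch of the head-plus-tail
approximation described in the cover letter above. It is illustrative
only, not necessarily the exact hunk that was merged: the constant
MAX_PARTIAL_TO_SCAN and its value of 10000 are assumptions to make the
sketch concrete, and the surrounding details follow the style of
mm/slub.c but should be checked against the applied patch. mult_frac()
and node_nr_objs() are existing kernel helpers.

#define MAX_PARTIAL_TO_SCAN 10000

static unsigned long count_partial_free_approx(struct kmem_cache_node *n)
{
	unsigned long flags;
	unsigned long x = 0;
	struct slab *slab;

	spin_lock_irqsave(&n->list_lock, flags);
	if (n->nr_partial <= MAX_PARTIAL_TO_SCAN) {
		/* Short list: count every slab exactly. */
		list_for_each_entry(slab, &n->partial, slab_list)
			x += slab->objects - slab->inuse;
	} else {
		/*
		 * Long list: sample MAX_PARTIAL_TO_SCAN/2 slabs from each
		 * end, then scale by the list length. Sampling both ends
		 * tracks the true total better than sampling the head
		 * alone once kmem_cache_shrink() has sorted the list by
		 * free object count.
		 */
		unsigned long scanned = 0;

		list_for_each_entry(slab, &n->partial, slab_list) {
			x += slab->objects - slab->inuse;
			if (++scanned == MAX_PARTIAL_TO_SCAN / 2)
				break;
		}
		list_for_each_entry_reverse(slab, &n->partial, slab_list) {
			x += slab->objects - slab->inuse;
			if (++scanned == MAX_PARTIAL_TO_SCAN)
				break;
		}
		x = mult_frac(x, n->nr_partial, scanned);
		/* Cap the estimate by the total object count on the node. */
		x = min(x, node_nr_objs(n));
	}
	spin_unlock_irqrestore(&n->list_lock, flags);
	return x;
}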
Thread overview: 7+ messages
2024-04-19 17:56 [PATCH v3 0/2] slub: introduce count_partial_free_approx() Jianfeng Wang
2024-04-19 17:56 ` [PATCH v3 1/2] slub: introduce count_partial_free_approx() Jianfeng Wang
2024-04-20  0:18   ` David Rientjes
2024-04-22  7:49   ` Vlastimil Babka
2024-04-19 17:56 ` [PATCH v3 2/2] slub: use count_partial_free_approx() in slab_out_of_memory() Jianfeng Wang
2024-04-20  0:18   ` David Rientjes
2024-04-22  7:56 ` Vlastimil Babka [this message]