From: Michal Hocko <mhocko@kernel.org>
To: Yang Shi <yang.s@alibaba-inc.com>
Cc: cl@linux.com, penberg@kernel.org, rientjes@google.com,
iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 3/3] mm: oom: show unreclaimable slab info when unreclaimable slabs > user memory
Date: Fri, 6 Oct 2017 11:37:02 +0200 [thread overview]
Message-ID: <20171006093702.3ca2p6ymyycwfgbk@dhcp22.suse.cz> (raw)
In-Reply-To: <1507152550-46205-4-git-send-email-yang.s@alibaba-inc.com>
On Thu 05-10-17 05:29:10, Yang Shi wrote:
> Kernel may panic when oom happens without a killable process; sometimes
> that is caused by huge unreclaimable slabs used by the kernel.
>
> Although kdump could help debug such a problem, kdump is not available
> on all architectures and it might malfunction sometimes. And, since the
> kernel has already panicked, it is worth capturing such information in
> dmesg to aid troubleshooting.
>
> Print out unreclaimable slab info (used size and total size) for caches
> whose actual memory usage is not zero (num_objs * size != 0) when the
> amount of unreclaimable slab is greater than total user memory (LRU
> pages).
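
(For reference, the trigger described above could look roughly like the
sketch below; the helper name and the exact set of LRU counters are my
illustrative assumptions, not necessarily what the patch itself uses.)

        /* Illustrative sketch only -- not the patch's actual hunk. */
        static bool slab_dump_worthwhile(void)
        {
                unsigned long nr_lru;

                /* "Total user memory": pages sitting on the LRU lists. */
                nr_lru = global_node_page_state(NR_ACTIVE_ANON) +
                         global_node_page_state(NR_INACTIVE_ANON) +
                         global_node_page_state(NR_ACTIVE_FILE) +
                         global_node_page_state(NR_INACTIVE_FILE) +
                         global_node_page_state(NR_UNEVICTABLE);

                /* Dump only when unreclaimable slab exceeds user memory. */
                return global_node_page_state(NR_SLAB_UNRECLAIMABLE) > nr_lru;
        }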
>
> The output looks like:
>
> Unreclaimable slab info:
> Name                      Used        Total
> rpc_buffers               31KB         31KB
> rpc_tasks                  7KB          7KB
> ebitmap_node            1964KB       1964KB
> avtab_node              5024KB       5024KB
> xfs_buf                 1402KB       1402KB
> xfs_ili                  134KB        134KB
> xfs_efi_item             115KB        115KB
> xfs_efd_item             115KB        115KB
> xfs_buf_item             134KB        134KB
> xfs_log_item_desc        342KB        342KB
> xfs_trans               1412KB       1412KB
> xfs_ifork                212KB        212KB
OK this looks better. The naming is not the greatest but I will not
nitpick on this. I have one question though.
>
> Signed-off-by: Yang Shi <yang.s@alibaba-inc.com>
[...]
> +void dump_unreclaimable_slab(void)
> +{
> +        struct kmem_cache *s, *s2;
> +        struct slabinfo sinfo;
> +
> +        /*
> +         * Acquiring slab_mutex here is risky since we don't want to
> +         * sleep in the oom path. But without holding the mutex, the
> +         * list traversal could race with cache destruction and crash.
> +         * Use mutex_trylock to protect the list traversal, and dump
> +         * nothing if the mutex cannot be acquired.
> +         */
> +        if (!mutex_trylock(&slab_mutex)) {
> +                pr_warn("excessive unreclaimable slab but cannot dump stats\n");
> +                return;
> +        }
> +
> + pr_info("Unreclaimable slab info:\n");
> + pr_info("Name Used Total\n");
> +
> + list_for_each_entry_safe(s, s2, &slab_caches, list) {
> + if (!is_root_cache(s) || (s->flags & SLAB_RECLAIM_ACCOUNT))
> + continue;
> +
> + memset(&sinfo, 0, sizeof(sinfo));
Why do you zero out the structure? All the fields you are printing are
filled out in get_slabinfo.
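
(For context, SLUB's get_slabinfo() unconditionally sets the fields the
dump prints; here is an abbreviated sketch of the 4.14-era mm/slub.c
implementation, paraphrased rather than quoted verbatim:)

        void get_slabinfo(struct kmem_cache *s, struct slabinfo *sinfo)
        {
                unsigned long nr_slabs = 0, nr_objs = 0, nr_free = 0;
                struct kmem_cache_node *n;
                int node;

                for_each_kmem_cache_node(s, node, n) {
                        nr_slabs += node_nr_slabs(n);
                        nr_objs += node_nr_objs(n);
                        nr_free += count_partial(n, count_free);
                }

                /* The two fields dump_unreclaimable_slab() prints: */
                sinfo->active_objs = nr_objs - nr_free;
                sinfo->num_objs = nr_objs;

                sinfo->active_slabs = nr_slabs;
                sinfo->num_slabs = nr_slabs;
                sinfo->objects_per_slab = oo_objects(s->oo);
                sinfo->cache_order = oo_order(s->oo);
        }
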
> +                get_slabinfo(s, &sinfo);
> +
> +                if (sinfo.num_objs > 0)
> +                        pr_info("%-17s %10luKB %10luKB\n", cache_name(s),
> +                                (sinfo.active_objs * s->size) / 1024,
> +                                (sinfo.num_objs * s->size) / 1024);
> +        }
> +        mutex_unlock(&slab_mutex);
> +}
> +
> #if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
> void *memcg_slab_start(struct seq_file *m, loff_t *pos)
> {
> --
> 1.8.3.1
--
Michal Hocko
SUSE Labs