From: Waiman Long <longman@redhat.com>
To: Christoph Lameter <cl@linux.com>,
	Pekka Enberg <penberg@kernel.org>,
	David Rientjes <rientjes@google.com>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Michal Hocko <mhocko@kernel.org>, Roman Gushchin <guro@fb.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Shakeel Butt <shakeelb@google.com>,
	Vladimir Davydov <vdavydov.dev@gmail.com>,
	Waiman Long <longman@redhat.com>
Subject: [PATCH] mm, memcg: Add a memcg_slabinfo debugfs file
Date: Wed, 19 Jun 2019 10:46:10 -0400	[thread overview]
Message-ID: <20190619144610.12520-1-longman@redhat.com> (raw)

There are concerns about memory leaks from extensive use of memory
cgroups, as each memory cgroup creates its own set of kmem caches. There
is a possibility that the memcg kmem caches may remain even after the
memory cgroup has been removed. Therefore, it is useful to show how many
memcg kmem caches are present for each of the root kmem caches.

This patch introduces a new <debugfs>/memcg_slabinfo file, which is
similar in format to /proc/slabinfo but lists only slabs that are in
memcg kmem caches. Information already available in /proc/slabinfo is
not repeated in memcg_slabinfo.

A portion of a sample output of the file is:

  # <name> <active_objs> <num_objs> <active_slabs> <num_slabs> <num_caches> <num_empty_caches>
  rpc_inode_cache        0      0      0      0   1   1
  xfs_inode           6342   7888    232    232  59  13
  RAWv6                  0      0      0      0   2   2
  UDPv6                100    100      4      4   5   3
  TCPv6                  0      0      0      0   1   1
  UNIX                4864   4864    152    152  53  35
  RAW                    0      0      0      0   1   1
  TCP                   14     14      1      1   2   1

Besides the number of objects and slabs in the memcg kmem caches, it
also shows the total number of memcg kmem caches associated with each
root kmem cache as well as the number of those memcg kmem caches that
are empty.
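
Assuming debugfs is mounted at the conventional /sys/kernel/debug
location (the mount point may differ on a given system), the file can be
read like any other debugfs file, e.g.:

  # mount -t debugfs none /sys/kernel/debug    # only if not already mounted
  # cat /sys/kernel/debug/memcg_slabinfo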

Signed-off-by: Waiman Long <longman@redhat.com>
---
 mm/slab_common.c | 53 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 53 insertions(+)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 58251ba63e4a..63fb18f4f811 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -17,6 +17,7 @@
 #include <linux/uaccess.h>
 #include <linux/seq_file.h>
 #include <linux/proc_fs.h>
+#include <linux/debugfs.h>
 #include <asm/cacheflush.h>
 #include <asm/tlbflush.h>
 #include <asm/page.h>
@@ -1498,6 +1499,58 @@ static int __init slab_proc_init(void)
 	return 0;
 }
 module_init(slab_proc_init);
+
+#if defined(CONFIG_DEBUG_FS) && defined(CONFIG_MEMCG)
+/*
+ * Display information about slabs that are in memcg kmem caches, but not
+ * in the root kmem caches.
+ */
+static int memcg_slabinfo_show(struct seq_file *m, void *unused)
+{
+	struct kmem_cache *s, *c;
+	struct slabinfo sinfo, cinfo;
+
+	mutex_lock(&slab_mutex);
+	seq_puts(m, "# <name> <active_objs> <num_objs> <active_slabs>");
+	seq_puts(m, " <num_slabs> <num_caches> <num_empty_caches>\n");
+	memset(&sinfo, 0, sizeof(sinfo));
+	list_for_each_entry(s, &slab_root_caches, root_caches_node) {
+		int scnt = 0;	/* memcg kmem cache count */
+		int ecnt = 0;	/* # of empty kmem caches */
+
+		for_each_memcg_cache(c, s) {
+			memset(&cinfo, 0, sizeof(cinfo));
+			get_slabinfo(c, &cinfo);
+
+			sinfo.active_slabs += cinfo.active_slabs;
+			sinfo.num_slabs += cinfo.num_slabs;
+			sinfo.active_objs += cinfo.active_objs;
+			sinfo.num_objs += cinfo.num_objs;
+			scnt++;
+			if (!cinfo.num_slabs)
+				ecnt++;
+		}
+		if (!scnt)
+			continue;
+		seq_printf(m, "%-17s %6lu %6lu %6lu %6lu %3d %3d\n",
+			   cache_name(s), sinfo.active_objs, sinfo.num_objs,
+			   sinfo.active_slabs, sinfo.num_slabs, scnt, ecnt);
+		memset(&sinfo, 0, sizeof(sinfo));
+	}
+	mutex_unlock(&slab_mutex);
+	return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(memcg_slabinfo);
+
+static int __init memcg_slabinfo_init(void)
+{
+	debugfs_create_file("memcg_slabinfo", S_IFREG | S_IRUGO,
+			    NULL, NULL, &memcg_slabinfo_fops);
+	return 0;
+}
+
+late_initcall(memcg_slabinfo_init);
+#endif /* CONFIG_DEBUG_FS && CONFIG_MEMCG */
 #endif /* CONFIG_SLAB || CONFIG_SLUB_DEBUG */
 
 static __always_inline void *__do_krealloc(const void *p, size_t new_size,
-- 
2.18.1



Thread overview: 5+ messages
2019-06-19 14:46 Waiman Long [this message]
2019-06-19 15:18 ` Shakeel Butt
2019-06-19 15:30   ` Waiman Long
2019-06-19 15:35     ` Shakeel Butt
2019-06-19 15:47       ` Waiman Long
