From: Yang Yingliang <yangyingliang@huawei.com>
To: <tj@kernel.org>, <lizefan@huawei.com>, <hannes@cmpxchg.org>
Cc: <cgroups@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
<wangkefeng.wang@huawei.com>, <linux-mm@kvack.org>
Subject: Re: memleak in cgroup
Date: Mon, 27 Apr 2020 15:48:13 +0800 [thread overview]
Message-ID: <34dfdb52-6efd-7b11-07c8-9461a13b3aa4@huawei.com> (raw)
In-Reply-To: <6e4d5208-ba26-93ed-c600-4776fc620456@huawei.com>
+cc linux-mm@kvack.org
On 2020/4/26 19:21, Yang Yingliang wrote:
> Hi,
>
> When running the following test on kernel 5.7-rc2, I found that MemFree
> keeps decreasing
>
> #!/bin/bash
> cd /sys/fs/cgroup/memory/
>
> for ((i = 0; i < 45; i++))
> do
>         for ((j = 0; j < 60000; j++))
>         do
>                 mkdir /sys/fs/cgroup/memory/yyl-cg$j
>         done
>         sleep 1
>         ls /sys/fs/cgroup/memory/ | grep yyl | xargs rmdir
> done
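>
> (One simple way to capture the before/after deltas shown below
> mechanically -- just a sketch, the snapshot file names and the script
> name "cgroup-leak-test.sh" are only placeholders:)
>
> cp /proc/meminfo /tmp/meminfo.before
> ./cgroup-leak-test.sh               # the loop above, saved as a script
> echo 3 > /proc/sys/vm/drop_caches
> cp /proc/meminfo /tmp/meminfo.after
> diff -u /tmp/meminfo.before /tmp/meminfo.after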
>
>
> Before the test, /proc/meminfo is:
>
> MemTotal: 493554824 kB
> MemFree: 491240912 kB
> MemAvailable: 489424520 kB
> Buffers: 4112 kB
> Cached: 65400 kB
> SwapCached: 0 kB
> Active: 156016 kB
> Inactive: 37720 kB
> Active(anon): 128372 kB
> Inactive(anon): 7188 kB
> Active(file): 27644 kB
> Inactive(file): 30532 kB
> Unevictable: 0 kB
> Mlocked: 0 kB
> SwapTotal: 4194300 kB
> SwapFree: 4194300 kB
> Dirty: 112 kB
> Writeback: 0 kB
> AnonPages: 124356 kB
> Mapped: 53724 kB
> Shmem: 11036 kB
> KReclaimable: 93488 kB
> Slab: 599660 kB
> SReclaimable: 93488 kB
> SUnreclaim: 506172 kB
> KernelStack: 23008 kB
> PageTables: 4340 kB
> NFS_Unstable: 0 kB
> Bounce: 0 kB
> WritebackTmp: 0 kB
> CommitLimit: 250971712 kB
> Committed_AS: 1834448 kB
> VmallocTotal: 135290159040 kB
> VmallocUsed: 229284 kB
> VmallocChunk: 0 kB
> Percpu: 80896 kB
> HardwareCorrupted: 0 kB
> AnonHugePages: 43008 kB
> ShmemHugePages: 0 kB
> ShmemPmdMapped: 0 kB
> FileHugePages: 0 kB
> FilePmdMapped: 0 kB
> CmaTotal: 65536 kB
> CmaFree: 40480 kB
> HugePages_Total: 0
> HugePages_Free: 0
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 2048 kB
> Hugetlb: 0 kB
>
> After the test:
> MemTotal: 493554824 kB
> MemFree: 484492920 kB
> MemAvailable: 482801124 kB
> Buffers: 21984 kB
> Cached: 151380 kB
> SwapCached: 0 kB
> Active: 230000 kB
> Inactive: 68068 kB
> Active(anon): 130108 kB
> Inactive(anon): 13804 kB
> Active(file): 99892 kB
> Inactive(file): 54264 kB
> Unevictable: 0 kB
> Mlocked: 0 kB
> SwapTotal: 4194300 kB
> SwapFree: 4194300 kB
> Dirty: 36 kB
> Writeback: 0 kB
> AnonPages: 125080 kB
> Mapped: 55520 kB
> Shmem: 19220 kB
> KReclaimable: 246696 kB
> Slab: 5381572 kB
> SReclaimable: 246696 kB
> SUnreclaim: 5134876 kB
> KernelStack: 27360 kB
> PageTables: 4172 kB
> NFS_Unstable: 0 kB
> Bounce: 0 kB
> WritebackTmp: 0 kB
> CommitLimit: 250971712 kB
> Committed_AS: 1588600 kB
> VmallocTotal: 135290159040 kB
> VmallocUsed: 230836 kB
> VmallocChunk: 0 kB
> Percpu: 1827840 kB
> HardwareCorrupted: 0 kB
> AnonHugePages: 43008 kB
> ShmemHugePages: 0 kB
> ShmemPmdMapped: 0 kB
> FileHugePages: 0 kB
> FilePmdMapped: 0 kB
> CmaTotal: 65536 kB
> CmaFree: 40480 kB
> HugePages_Total: 0
> HugePages_Free: 0
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 2048 kB
> Hugetlb: 0 kB
>
> After echo 3 > /proc/sys/vm/drop_caches:
> MemTotal: 493554824 kB
> MemFree: 485104048 kB
> MemAvailable: 483358392 kB
> Buffers: 6168 kB
> Cached: 79904 kB
> SwapCached: 0 kB
> Active: 165348 kB
> Inactive: 45780 kB
> Active(anon): 130528 kB
> Inactive(anon): 13800 kB
> Active(file): 34820 kB
> Inactive(file): 31980 kB
> Unevictable: 0 kB
> Mlocked: 0 kB
> SwapTotal: 4194300 kB
> SwapFree: 4194300 kB
> Dirty: 8 kB
> Writeback: 0 kB
> AnonPages: 125236 kB
> Mapped: 55516 kB
> Shmem: 19220 kB
> KReclaimable: 226332 kB
> Slab: 5353952 kB
> SReclaimable: 226332 kB
> SUnreclaim: 5127620 kB
> KernelStack: 23040 kB
> PageTables: 4212 kB
> NFS_Unstable: 0 kB
> Bounce: 0 kB
> WritebackTmp: 0 kB
> CommitLimit: 250971712 kB
> Committed_AS: 1672424 kB
> VmallocTotal: 135290159040 kB
> VmallocUsed: 230436 kB
> VmallocChunk: 0 kB
> Percpu: 1379840 kB
> HardwareCorrupted: 0 kB
> AnonHugePages: 43008 kB
> ShmemHugePages: 0 kB
> ShmemPmdMapped: 0 kB
> FileHugePages: 0 kB
> FilePmdMapped: 0 kB
> CmaTotal: 65536 kB
> CmaFree: 40480 kB
> HugePages_Total: 0
> HugePages_Free: 0
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 2048 kB
> Hugetlb: 0 kB
>
> After the test and dropping caches, /proc/cgroups is:
> #subsys_name hierarchy num_cgroups enabled
> cpuset 11 1 1
> cpu 2 1 1
> cpuacct 2 1 1
> blkio 8 1 1
> memory 5 83 1
> devices 3 41 1
> freezer 6 1 1
> net_cls 9 1 1
> perf_event 10 1 1
> net_prio 9 1 1
> hugetlb 4 1 1
> pids 7 51 1
> rdma 12 1 1
>
> All the directories created by the script have already been removed,
> but I see:
> - MemFree decreased by about 6.7G
> - SUnreclaim increased by about 4.6G
> - Percpu increased by about 1.7G
>
> It seems we have a memory leak in cgroup?
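>
> (One way to see where the unreclaimed memory sits -- just a sketch using
> the standard procfs and slabtop interfaces, nothing cgroup-specific --
> is to compare the largest slab caches before and after the run:)
>
> # top slab caches by total cache size, needs root
> slabtop -o -s c | head -n 20
>
> # or directly from /proc/slabinfo: cache name and <num_objs>*<objsize> bytes
> awk 'NR > 2 { printf "%-30s %12d\n", $1, $3 * $4 }' /proc/slabinfo \
>     | sort -k2 -nr | head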