From: Johannes Weiner <hannes@cmpxchg.org>
To: Muchun Song <songmuchun@bytedance.com>
Cc: willy@infradead.org, akpm@linux-foundation.org,
mhocko@kernel.org, vdavydov.dev@gmail.com, shakeelb@google.com,
guro@fb.com, shy828301@gmail.com, alexs@kernel.org,
richard.weiyang@gmail.com, david@fromorbit.com,
trond.myklebust@hammerspace.com, anna.schumaker@netapp.com,
jaegeuk@kernel.org, chao@kernel.org, kari.argillander@gmail.com,
linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org, linux-nfs@vger.kernel.org,
zhengqi.arch@bytedance.com, duanxiongchun@bytedance.com,
fam.zheng@bytedance.com, smuchun@gmail.com
Subject: Re: [PATCH v4 01/17] mm: list_lru: optimize memory consumption of arrays of per cgroup lists
Date: Tue, 14 Dec 2021 14:38:39 +0100
Message-ID: <YbieX3WCUt7hdZlW@cmpxchg.org>
In-Reply-To: <20211213165342.74704-2-songmuchun@bytedance.com>
On Tue, Dec 14, 2021 at 12:53:26AM +0800, Muchun Song wrote:
> The list_lru uses an array (list_lru_memcg->lru) to store pointers
> to the list_lru_one structures, and that array exists per memcg per
> node. With 10k containers running on the system, the arrays alone
> consume 10k * number_of_nodes * 8 bytes (the pointer size on 64-bit
> systems) per list_lru. This memory consumption becomes significant,
> and it grows with the number of NUMA nodes.
The complexity for the lists themselves is still nr_memcgs * nr_nodes,
right? But the number of rcu_heads drops from that down to nr_memcgs.
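
For anyone else following along, here is a rough sketch of the
transposition; the struct and field names approximate this series and
are not verbatim kernel code:

/*
 * Old layout: every NUMA node owns a pointer array indexed by memcg
 * kmem ID, so the arrays alone cost nr_memcgs * nr_nodes pointers
 * per list_lru, and every slot is a separate list_lru_one allocation.
 */
struct list_lru_memcg {
	struct rcu_head		rcu;
	struct list_lru_one	*lru[];	/* indexed by memcg kmem ID */
};

struct list_lru_node {
	spinlock_t		lock;
	struct list_lru_one	lru;	/* root cgroup's list */
	struct list_lru_memcg __rcu *memcg_lrus;
	long			nr_items;
};

/*
 * New layout: a single pointer array per list_lru; each entry embeds
 * its per-node lists inline, so the pointer arrays shrink by a factor
 * of nr_nodes and alloc/free happens once per memcg rather than once
 * per memcg per node.
 */
struct list_lru_per_memcg {
	struct list_lru_one	node[];	/* indexed by NUMA node ID */
};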
> I have run a simple test that creates 10k memcgs and 10k mount
> points on a two-node system. The memory consumption of the list_lru
> structures is 24464MB before this change and 21957MB after, a
> reduction of roughly 2.5GB. On our AMD servers with 8 NUMA nodes,
> the savings would be even more significant.
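
A quick back-of-envelope check, assuming each of the 10k mount points
carries two list_lrus (dentry and inode; an assumption, the test setup
is not spelled out here):

	old arrays: 20,000 lrus * 2 nodes * 10,000 memcgs * 8 bytes ~= 3.2 GB
	new arrays: 20,000 lrus * 1 array * 10,000 memcgs * 8 bytes ~= 1.6 GB

That accounts for roughly 1.6 GB of the reported ~2.5 GB; the rest
plausibly comes from folding the separate per-node list_lru_one
allocations (and their slab overhead) into one allocation per memcg.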
The code looks good to me, but it would be useful to include a
high-level overview of the new scheme in the changelog: explain that
the savings come from the rcu_heads, that it simplifies the
alloc/dealloc paths, etc.
With that,
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Thread overview: 28+ messages
2021-12-13 16:53 [PATCH v4 00/17] Optimize list lru memory consumption Muchun Song
2021-12-13 16:53 ` [PATCH v4 01/17] mm: list_lru: optimize memory consumption of arrays of per cgroup lists Muchun Song
2021-12-14 13:38 ` Johannes Weiner [this message]
2021-12-15 12:09 ` Muchun Song
2021-12-13 16:53 ` [PATCH v4 02/17] mm: introduce kmem_cache_alloc_lru Muchun Song
2021-12-14 13:50 ` Johannes Weiner
2021-12-15 12:34 ` Muchun Song
2021-12-13 16:53 ` [PATCH v4 03/17] fs: introduce alloc_inode_sb() to allocate filesystems specific inode Muchun Song
2021-12-13 16:53 ` [PATCH v4 04/17] fs: allocate inode by using alloc_inode_sb() Muchun Song
2021-12-13 16:53 ` [PATCH v4 05/17] f2fs: " Muchun Song
2021-12-13 16:53 ` [PATCH v4 06/17] nfs42: use a specific kmem_cache to allocate nfs4_xattr_entry Muchun Song
2021-12-13 16:53 ` [PATCH v4 07/17] mm: dcache: use kmem_cache_alloc_lru() to allocate dentry Muchun Song
2021-12-13 16:53 ` [PATCH v4 08/17] xarray: use kmem_cache_alloc_lru to allocate xa_node Muchun Song
2021-12-13 16:53 ` [PATCH v4 09/17] mm: workingset: use xas_set_lru() to pass shadow_nodes Muchun Song
2021-12-14 14:09 ` Johannes Weiner
2021-12-15 12:36 ` Muchun Song
2021-12-13 16:53 ` [PATCH v4 10/17] mm: memcontrol: move memcg_online_kmem() to mem_cgroup_css_online() Muchun Song
2021-12-13 16:53 ` [PATCH v4 11/17] mm: list_lru: allocate list_lru_one only when needed Muchun Song
[not found] ` <20211216130102.GE10708@xsang-OptiPlex-9020>
2021-12-16 14:56 ` [External] [mm] 86cda95957: BUG:sleeping_function_called_from_invalid_context_at_include/linux/sched/mm.h Muchun Song
2021-12-13 16:53 ` [PATCH v4 12/17] mm: list_lru: rename memcg_drain_all_list_lrus to memcg_reparent_list_lrus Muchun Song
2021-12-13 16:53 ` [PATCH v4 13/17] mm: list_lru: replace linear array with xarray Muchun Song
2021-12-13 16:53 ` [PATCH v4 14/17] mm: memcontrol: reuse memory cgroup ID for kmem ID Muchun Song
2021-12-13 16:53 ` [PATCH v4 15/17] mm: memcontrol: fix cannot alloc the maximum memcg ID Muchun Song
2021-12-13 16:53 ` [PATCH v4 16/17] mm: list_lru: rename list_lru_per_memcg to list_lru_memcg Muchun Song
2021-12-13 16:53 ` [PATCH v4 17/17] mm: memcontrol: rename memcg_cache_id to memcg_kmem_id Muchun Song
2021-12-17 10:05 ` [PATCH v4 00/17] Optimize list lru memory consumption xiaoqiang zhao
2021-12-17 11:06 ` Muchun Song
2021-12-17 13:12 ` Matthew Wilcox