Subject: Re: [PATCH v4 00/17] Optimize list lru memory consumption
From: xiaoqiang zhao
Date: Fri, 17 Dec 2021 18:05:00 +0800
Message-ID: <745ddcd6-77e3-22e0-1f8e-e6b05c644eb4@gmail.com>
In-Reply-To: <20211213165342.74704-1-songmuchun@bytedance.com>
References: <20211213165342.74704-1-songmuchun@bytedance.com>
To: Muchun Song, willy@infradead.org, akpm@linux-foundation.org,
    hannes@cmpxchg.org, mhocko@kernel.org, vdavydov.dev@gmail.com,
    shakeelb@google.com, guro@fb.com, shy828301@gmail.com, alexs@kernel.org,
    richard.weiyang@gmail.com, david@fromorbit.com,
    trond.myklebust@hammerspace.com, anna.schumaker@netapp.com,
    jaegeuk@kernel.org, chao@kernel.org, kari.argillander@gmail.com
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, linux-nfs@vger.kernel.org, zhengqi.arch@bytedance.com,
    duanxiongchun@bytedance.com, fam.zheng@bytedance.com, smuchun@gmail.com

On 2021/12/14 0:53, Muchun Song wrote:
> This series is based on Linux 5.16-rc3.
>
> In our server, we found a suspected memory leak problem. The kmalloc-32
> consumes more than 6GB of memory. Other kmem_caches consume less than 2GB
> of memory.
>
> After our in-depth analysis, the memory consumption of kmalloc-32 slab
> cache is the cause of list_lru_one allocation.

IIUC, you mean: "the memory consumption of kmalloc-32 slab cache is
caused by list_lru_one allocation"

>
> crash> p memcg_nr_cache_ids
> memcg_nr_cache_ids = $2 = 24574
>
> memcg_nr_cache_ids is very large, and the memory consumption of each
> list_lru can be calculated with the following formula.
>
>   num_numa_node * memcg_nr_cache_ids * 32 (kmalloc-32)
>
> There are 4 numa nodes in our system, so each list_lru consumes ~3MB.
>
> crash> list super_blocks | wc -l
> 952
>
> Every mount registers 2 list_lrus, one for the inode and another for the
> dentry. There are 952 super_blocks, so the total memory is 952 * 2 * 3 MB
> (~5.6GB). But the current number of memory cgroups is less than 500, so I
> guess more than 12286 memory cgroups have been created on this machine (I
> do not know why there are so many cgroups; it may be a user bug, or the
> user really wants to do that). Because memcg_nr_cache_ids has not been
> reduced to a suitable value, a lot of memory is wasted. If we want to
> reduce memcg_nr_cache_ids, we have to *reboot* the server. This is not
> what we want.
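
(Just to spell out the arithmetic behind the two figures quoted above, using
the cover letter's own numbers; the 32 bytes is one kmalloc-32 list_lru_one
object per memcg per node:

  per list_lru : 4 nodes * 24574 ids * 32 bytes = 3,145,472 bytes ~ 3 MB
  whole system : 952 super_blocks * 2 list_lrus * ~3 MB           ~ 5.6 GB
)
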
> In order to reduce memcg_nr_cache_ids, I had posted a patchset [1] to do
> this. But this did not fundamentally solve the problem.
>
> We currently allocate scope for every memcg to be able to be tracked on
> every superblock instantiated in the system, regardless of whether that
> superblock is even accessible to that memcg.
>
> These huge memcg counts come from container hosts where memcgs are confined
> to just a small subset of the total number of superblocks instantiated
> at any given point in time.
>
> For these systems with huge container counts, list_lru does not need the
> capability of tracking every memcg on every superblock.
>
> What it comes down to is that the list_lru is only needed for a given memcg
> if that memcg is instantiating and freeing objects on a given list_lru.
>
> As Dave said, "Which makes me think we should be moving more towards 'add
> the memcg to the list_lru at the first insert' model rather than
> 'instantiate all at memcg init time just in case'."
>
> This patchset aims to optimize the list lru memory consumption from
> different aspects.
>
> I have done a simple test to show the optimization. I create 10k memory
> cgroups and mount 10k filesystems in the system. We use the free command to
> show how much memory the system consumes after this operation (there are
> 2 numa nodes in the system).
>
> +-----------------------+------------------------+
> |       condition       |   memory consumption   |
> +-----------------------+------------------------+
> | without this patchset |        24464 MB        |
> +-----------------------+------------------------+
> |     after patch 1     |        21957 MB        | <--------+
> +-----------------------+------------------------+          |
> |    after patch 11     |        6895 MB         |          |
> +-----------------------+------------------------+          |
> |    after patch 13     |        4367 MB         |          |
> +-----------------------+------------------------+          |
>                                                             |
> The more the number of nodes, the more obvious the effect---+
>
> BTW, there was a recent discussion [2] on the same issue.
>
> [1] https://lore.kernel.org/linux-fsdevel/20210428094949.43579-1-songmuchun@bytedance.com/
> [2] https://lore.kernel.org/linux-fsdevel/20210405054848.GA1077931@in.ibm.com/
>
> This series not only optimizes the memory usage of list_lru but also
> simplifies the code.
>
> Changelog in v4:
>   - Remove some code cleanup patches since they are already merged.
>   - Collect Acked-by from Theodore.
>   - Fix ntfs3 (Thanks Argillander).
>
> Changelog in v3:
>   - Fix mixing advanced and normal XArray concepts (Thanks to Matthew).
>   - Split one patch into per-filesystem patches.
>
> Changelog in v2:
>   - Update Documentation/filesystems/porting.rst as suggested by Dave.
>   - Add a comment above alloc_inode_sb() as suggested by Dave.
>   - Rework some patches' commit logs.
>   - Add patches 18-21.
>
> Thanks Dave.
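
The "add the memcg to the list_lru at the first insert" model quoted above is
the heart of the series, so here is a minimal userspace sketch of the idea as
I read it. This is not the kernel code: toy_lru, toy_list and toy_lru_add are
invented names, a flat pointer array stands in for the xarray introduced later
in the series, and locking is ignored.

#include <stdio.h>
#include <stdlib.h>

#define NR_MEMCG_IDS 24574		/* stand-in for memcg_nr_cache_ids */

struct toy_list {
	long nr_items;			/* stand-in for a real per-memcg LRU list */
};

struct toy_lru {
	struct toy_list *memcg[NR_MEMCG_IDS];	/* NULL until first insert */
};

/* Allocate the per-memcg list lazily, on the first insert from that memcg. */
static int toy_lru_add(struct toy_lru *lru, int memcg_id)
{
	if (!lru->memcg[memcg_id]) {
		lru->memcg[memcg_id] = calloc(1, sizeof(struct toy_list));
		if (!lru->memcg[memcg_id])
			return -1;
	}
	lru->memcg[memcg_id]->nr_items++;
	return 0;
}

int main(void)
{
	struct toy_lru *lru = calloc(1, sizeof(*lru));

	if (!lru)
		return 1;
	/* Only memcg 42 ever touches this lru, so only one list is
	 * allocated instead of one per possible memcg id. */
	toy_lru_add(lru, 42);
	toy_lru_add(lru, 42);
	printf("memcg 42 holds %ld items; the other %d slots stay NULL\n",
	       lru->memcg[42]->nr_items, NR_MEMCG_IDS - 1);
	free(lru->memcg[42]);
	free(lru);
	return 0;
}
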
>
> Muchun Song (17):
>   mm: list_lru: optimize memory consumption of arrays of per cgroup lists
>   mm: introduce kmem_cache_alloc_lru
>   fs: introduce alloc_inode_sb() to allocate filesystems specific inode
>   fs: allocate inode by using alloc_inode_sb()
>   f2fs: allocate inode by using alloc_inode_sb()
>   nfs42: use a specific kmem_cache to allocate nfs4_xattr_entry
>   mm: dcache: use kmem_cache_alloc_lru() to allocate dentry
>   xarray: use kmem_cache_alloc_lru to allocate xa_node
>   mm: workingset: use xas_set_lru() to pass shadow_nodes
>   mm: memcontrol: move memcg_online_kmem() to mem_cgroup_css_online()
>   mm: list_lru: allocate list_lru_one only when needed
>   mm: list_lru: rename memcg_drain_all_list_lrus to
>     memcg_reparent_list_lrus
>   mm: list_lru: replace linear array with xarray
>   mm: memcontrol: reuse memory cgroup ID for kmem ID
>   mm: memcontrol: fix cannot alloc the maximum memcg ID
>   mm: list_lru: rename list_lru_per_memcg to list_lru_memcg
>   mm: memcontrol: rename memcg_cache_id to memcg_kmem_id
>
>  Documentation/filesystems/porting.rst |   5 +
>  block/bdev.c                          |   2 +-
>  drivers/dax/super.c                   |   2 +-
>  fs/9p/vfs_inode.c                     |   2 +-
>  fs/adfs/super.c                       |   2 +-
>  fs/affs/super.c                       |   2 +-
>  fs/afs/super.c                        |   2 +-
>  fs/befs/linuxvfs.c                    |   2 +-
>  fs/bfs/inode.c                        |   2 +-
>  fs/btrfs/inode.c                      |   2 +-
>  fs/ceph/inode.c                       |   2 +-
>  fs/cifs/cifsfs.c                      |   2 +-
>  fs/coda/inode.c                       |   2 +-
>  fs/dcache.c                           |   3 +-
>  fs/ecryptfs/super.c                   |   2 +-
>  fs/efs/super.c                        |   2 +-
>  fs/erofs/super.c                      |   2 +-
>  fs/exfat/super.c                      |   2 +-
>  fs/ext2/super.c                       |   2 +-
>  fs/ext4/super.c                       |   2 +-
>  fs/f2fs/super.c                       |   8 +-
>  fs/fat/inode.c                        |   2 +-
>  fs/freevxfs/vxfs_super.c              |   2 +-
>  fs/fuse/inode.c                       |   2 +-
>  fs/gfs2/super.c                       |   2 +-
>  fs/hfs/super.c                        |   2 +-
>  fs/hfsplus/super.c                    |   2 +-
>  fs/hostfs/hostfs_kern.c               |   2 +-
>  fs/hpfs/super.c                       |   2 +-
>  fs/hugetlbfs/inode.c                  |   2 +-
>  fs/inode.c                            |   2 +-
>  fs/isofs/inode.c                      |   2 +-
>  fs/jffs2/super.c                      |   2 +-
>  fs/jfs/super.c                        |   2 +-
>  fs/minix/inode.c                      |   2 +-
>  fs/nfs/inode.c                        |   2 +-
>  fs/nfs/nfs42xattr.c                   |  95 ++++----
>  fs/nilfs2/super.c                     |   2 +-
>  fs/ntfs/inode.c                       |   2 +-
>  fs/ntfs3/super.c                      |   2 +-
>  fs/ocfs2/dlmfs/dlmfs.c                |   2 +-
>  fs/ocfs2/super.c                      |   2 +-
>  fs/openpromfs/inode.c                 |   2 +-
>  fs/orangefs/super.c                   |   2 +-
>  fs/overlayfs/super.c                  |   2 +-
>  fs/proc/inode.c                       |   2 +-
>  fs/qnx4/inode.c                       |   2 +-
>  fs/qnx6/inode.c                       |   2 +-
>  fs/reiserfs/super.c                   |   2 +-
>  fs/romfs/super.c                      |   2 +-
>  fs/squashfs/super.c                   |   2 +-
>  fs/sysv/inode.c                       |   2 +-
>  fs/ubifs/super.c                      |   2 +-
>  fs/udf/super.c                        |   2 +-
>  fs/ufs/super.c                        |   2 +-
>  fs/vboxsf/super.c                     |   2 +-
>  fs/xfs/xfs_icache.c                   |   2 +-
>  fs/zonefs/super.c                     |   2 +-
>  include/linux/fs.h                    |  11 +
>  include/linux/list_lru.h              |  17 +-
>  include/linux/memcontrol.h            |  42 ++--
>  include/linux/slab.h                  |   3 +
>  include/linux/swap.h                  |   5 +-
>  include/linux/xarray.h                |   9 +-
>  ipc/mqueue.c                          |   2 +-
>  lib/xarray.c                          |  10 +-
>  mm/list_lru.c                         | 423 ++++++++++++++++-----------------
>  mm/memcontrol.c                       | 164 +++----------
>  mm/shmem.c                            |   2 +-
>  mm/slab.c                             |  39 +++-
>  mm/slab.h                             |  25 +-
>  mm/slob.c                             |   6 +
>  mm/slub.c                             |  42 ++--
>  mm/workingset.c                       |   2 +-
>  net/socket.c                          |   2 +-
>  net/sunrpc/rpc_pipe.c                 |   2 +-
>  76 files changed, 486 insertions(+), 539 deletions(-)
>
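
For anyone skimming the series from the diffstat alone: the long tail of
one-line filesystem changes looks like a mechanical conversion of the inode
allocation call. A rough sketch of the pattern as I understand it from the
patch titles, not copied from the actual hunks, so treat the exact signatures
and field names as my assumptions:

/* Presumably the helper added by "fs: introduce alloc_inode_sb()": */
static inline void *alloc_inode_sb(struct super_block *sb,
				   struct kmem_cache *cache, gfp_t gfp)
{
	/*
	 * Pass the superblock's inode LRU down so the slab layer can set up
	 * the per-memcg list on demand via kmem_cache_alloc_lru().
	 */
	return kmem_cache_alloc_lru(cache, &sb->s_inode_lru, gfp);
}

/*
 * ...and what each filesystem conversion would then look like, e.g. for
 * fs/ext2/super.c (ext2-specific initialization elided):
 */
static struct inode *ext2_alloc_inode(struct super_block *sb)
{
	struct ext2_inode_info *ei;

	/* was: ei = kmem_cache_alloc(ext2_inode_cachep, GFP_KERNEL); */
	ei = alloc_inode_sb(sb, ext2_inode_cachep, GFP_KERNEL);
	if (!ei)
		return NULL;
	/* ... */
	return &ei->vfs_inode;
}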