From: Yafang Shao
Date: Thu, 19 Dec 2019 09:45:53 +0800
Subject: Re: [PATCH 4/4] memcg, inode: protect page cache from freeing inode
To: Roman Gushchin
Cc: "hannes@cmpxchg.org", "mhocko@kernel.org", "vdavydov.dev@gmail.com",
 "akpm@linux-foundation.org", "viro@zeniv.linux.org.uk", "linux-mm@kvack.org",
 "linux-fsdevel@vger.kernel.org", Chris Down, Dave Chinner
In-Reply-To: <20191218175310.GA4730@localhost.localdomain>
References: <1576582159-5198-1-git-send-email-laoar.shao@gmail.com>
 <1576582159-5198-5-git-send-email-laoar.shao@gmail.com>
 <20191218175310.GA4730@localhost.localdomain>

On Thu, Dec 19, 2019 at 1:53 AM Roman Gushchin wrote:
>
> On Tue, Dec 17, 2019 at 06:29:19AM -0500, Yafang Shao wrote:
> > On my server there are some running MEMCGs protected by memory.{min, low},
> > but I found the usage of these MEMCGs abruptly became very small, far
> > less than the protection limit. It confused me, and I finally found that
> > it was caused by inode stealing.
> > Once an inode is freed, all of its page caches will be dropped as well,
> > no matter how many page caches it has. So if we intend to protect the
> > page caches in a memcg, we must protect their host (the inode) first.
> > Otherwise the memcg protection can be easily bypassed by freeing the
> > inode, especially if there are big files in this memcg.
> > The inherent mismatch between memcg and inode is a trouble. One inode can
> > be shared by different MEMCGs, but that is a very rare case. If an inode
> > is shared, its page caches may be charged to different MEMCGs.
> > Currently there is no perfect solution for this kind of issue, but the
> > inode majority-writer ownership switching can help more or less.
> >
> > Cc: Roman Gushchin
> > Cc: Chris Down
> > Cc: Dave Chinner
> > Signed-off-by: Yafang Shao
> > ---
> >  fs/inode.c                 |  9 +++++++++
> >  include/linux/memcontrol.h | 15 +++++++++++++++
> >  mm/memcontrol.c            | 46 ++++++++++++++++++++++++++++++++++++++++++++++
> >  mm/vmscan.c                |  4 ++++
> >  4 files changed, 74 insertions(+)
> >
> > diff --git a/fs/inode.c b/fs/inode.c
> > index fef457a..b022447 100644
> > --- a/fs/inode.c
> > +++ b/fs/inode.c
> > @@ -734,6 +734,15 @@ static enum lru_status inode_lru_isolate(struct list_head *item,
> >       if (!spin_trylock(&inode->i_lock))
> >               return LRU_SKIP;
> >
> > +
> > +     /* Page protection only works in reclaimer */
> > +     if (inode->i_data.nrpages && current->reclaim_state) {
> > +             if (mem_cgroup_inode_protected(inode)) {
> > +                     spin_unlock(&inode->i_lock);
> > +                     return LRU_ROTATE;
> > +             }
> > +     }
>
> Not directly related to this approach, but I wonder if we should scale down
> the size of shrinker lists depending on the memory protection (like we do
> with LRU lists)? It won't fix the problem of huge inodes being reclaimed at
> once without need, but it will help scale the memory pressure for protected
> cgroups.
>

Same as what we do in get_scan_count() to calculate how many pages we
should scan? I guess we should.
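Just to illustrate the idea (this is not part of the patch; scale_shrinker_count()
below is only a hypothetical sketch, and it would have to be wired into something
like do_shrink_slab()), the number of freeable objects a shrinker reports could be
scaled by the unprotected fraction of the memcg's usage, similar in spirit to the
proportional LRU scaling in get_scan_count():

	/*
	 * Hypothetical sketch only. Scale the freeable count reported by a
	 * shrinker according to how much of the memcg's usage is protected.
	 * Overflow of the multiplication is ignored here for simplicity.
	 */
	static unsigned long scale_shrinker_count(unsigned long freeable,
						  struct mem_cgroup *memcg,
						  bool in_low_reclaim)
	{
		unsigned long protection = mem_cgroup_protection(memcg, in_low_reclaim);
		unsigned long usage = mem_cgroup_size(memcg);

		if (!protection || !usage)
			return freeable;

		/* Fully protected: contribute no slab scanning at all. */
		if (usage <= protection)
			return 0;

		/* Scan in proportion to the unprotected part of the usage. */
		return freeable * (usage - protection) / usage;
	}

With something like that, a fully protected memcg would not be scanned by the
shrinkers at all, while a partially protected one would be scanned proportionally.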
> >
> > +
> >       /*
> >        * Referenced or dirty inodes are still in use. Give them another pass
> >        * through the LRU as we cannot reclaim them now.
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 1a315c7..21338f0 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -247,6 +247,9 @@ struct mem_cgroup {
> >       unsigned int tcpmem_active : 1;
> >       unsigned int tcpmem_pressure : 1;
> >
> > +     /* Soft protection will be ignored if it's true */
> > +     unsigned int in_low_reclaim : 1;
> > +
> >       int under_oom;
> >
> >       int swappiness;
> > @@ -363,6 +366,7 @@ static inline unsigned long mem_cgroup_protection(struct mem_cgroup *memcg,
> >
> >  enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
> >                                                  struct mem_cgroup *memcg);
> > +unsigned long mem_cgroup_inode_protected(struct inode *inode);
> >
> >  int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
> >                            gfp_t gfp_mask, struct mem_cgroup **memcgp,
> > @@ -850,6 +854,11 @@ static inline enum mem_cgroup_protection mem_cgroup_protected(
> >       return MEMCG_PROT_NONE;
> >  }
> >
> > +static inline unsigned long mem_cgroup_inode_protected(struct inode *inode)
> > +{
> > +     return 0;
> > +}
> > +
> >  static inline int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
> >                                          gfp_t gfp_mask,
> >                                          struct mem_cgroup **memcgp,
> > @@ -926,6 +935,12 @@ static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page)
> >       return NULL;
> >  }
> >
> > +static inline struct mem_cgroup *
> > +mem_cgroup_from_css(struct cgroup_subsys_state *css)
> > +{
> > +     return NULL;
> > +}
> > +
> >  static inline void mem_cgroup_put(struct mem_cgroup *memcg)
> >  {
> >  }
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 234370c..efb53f3 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -6355,6 +6355,52 @@ enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
> >  }
> >
> >  /**
> > + * Once an inode is freed, all of its page caches will be dropped as
> > + * well, even if there are lots of page caches. So if we intend to protect
> > + * page caches in a memcg, we must protect their host first. Otherwise the
> > + * memory usage can drop abruptly if there are big files in this memcg.
> > + * IOW the memcg protection can be easily bypassed by freeing the inode.
> > + * We should prevent that.
> > + * The inherent mismatch between memcg and inode is a trouble. One inode
> > + * can be shared by different MEMCGs, but that is a very rare case. If
> > + * an inode is shared, its page caches may be charged to different
> > + * MEMCGs. Currently there is no perfect solution for this kind of
> > + * issue, but the inode majority-writer ownership switching can help
> > + * more or less.
> > + */
> > +unsigned long mem_cgroup_inode_protected(struct inode *inode)
> > +{
> > +     unsigned long cgroup_size;
> > +     unsigned long protect = 0;
> > +     struct bdi_writeback *wb;
> > +     struct mem_cgroup *memcg;
> > +
> > +     wb = inode_to_wb(inode);
> > +     if (!wb)
> > +             goto out;
> > +
> > +     memcg = mem_cgroup_from_css(wb->memcg_css);
> > +     if (!memcg || memcg == root_mem_cgroup)
> > +             goto out;
> > +
> > +     protect = mem_cgroup_protection(memcg, memcg->in_low_reclaim);
> > +     if (!protect)
> > +             goto out;
> > +
> > +     cgroup_size = mem_cgroup_size(memcg);
> > +     /*
> > +      * No need to protect this inode if the usage would still be above
> > +      * the protection limit after reclaiming this inode and all of its
> > +      * page caches.
> > +      */
> > +     if (inode->i_data.nrpages + protect < cgroup_size)
> > +             protect = 0;
> > +
> > +out:
> > +     return protect;
> > +}
> > +
> > +/**
> >   * mem_cgroup_try_charge - try charging a page
> >   * @page: page to charge
> >   * @mm: mm context of the victim
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 3c4c2da..1cc7fc2 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -2666,6 +2666,7 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
> >                       sc->memcg_low_skipped = 1;
> >                       continue;
> >               }
> > +             memcg->in_low_reclaim = 1;
> >               memcg_memory_event(memcg, MEMCG_LOW);
> >               break;
> >       case MEMCG_PROT_NONE:
> > @@ -2693,6 +2694,9 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
> >               shrink_slab(sc->gfp_mask, pgdat->node_id, memcg,
> >                           sc->priority);
> >
> > +             if (memcg->in_low_reclaim)
> > +                     memcg->in_low_reclaim = 0;
> > +
> >               /* Record the group's reclaim efficiency */
> >               vmpressure(sc->gfp_mask, memcg, false,
> >                          sc->nr_scanned - scanned,
> > --
> > 1.8.3.1
> >
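As a purely illustrative reading of the nrpages check above (numbers made up,
everything in pages): with cgroup_size = 1,000,000 and protect = 800,000, an
inode holding 150,000 page-cache pages gives 150,000 + 800,000 = 950,000 <
1,000,000, so protect is cleared to 0 and the inode may be reclaimed, because
the usage stays above the protected amount even after the inode is freed. An
inode holding 250,000 pages gives 1,050,000 >= 1,000,000, so a non-zero protect
is returned and inode_lru_isolate() rotates the inode instead of freeing it.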