From: Yafang Shao
Date: Fri, 6 Nov 2020 09:57:29 +0800
Subject: Re: [PATCH] mm: account lazily freed anon pages in NR_FILE_PAGES
To: Vlastimil Babka
Cc: Andrew Morton, Michal Hocko, minchan@kernel.org, Johannes Weiner, Linux MM
In-Reply-To: <4c0a7ea6-4817-2dae-7473-8d0fe6110a45@suse.cz>
References: <20201105131012.82457-1-laoar.shao@gmail.com> <4c0a7ea6-4817-2dae-7473-8d0fe6110a45@suse.cz>

On Thu, Nov 5, 2020 at 11:18 PM Vlastimil Babka wrote:
>
> On 11/5/20 2:10 PM, Yafang Shao wrote:
> > We use the memory utilization (Used / Total) to monitor memory
> > pressure. If it is too high, the system may hit OOM sooner or later
> > when swap is off, and we then make adjustments to that system.
>
> Hmm, I would say that any system looking just at memory utilization
> (Used / Total) and not at the file LRU size is flawed. There's a
> reason MemAvailable exists, and it does count the file LRU sizes.
>

Right, the file LRU size is counted in MemAvailable. MemAvailable and
Used are two different metrics for us; both are useful, but Used is
no longer reliable...

> > However, this method has been broken since MADV_FREE was introduced,
> > because lazily freed anonymous pages can be reclaimed under memory
> > pressure while they are still accounted in NR_ANON_MAPPED.
> >
> > Furthermore, since commit f7ad2a6cb9f7 ("mm: move MADV_FREE pages
> > into LRU_INACTIVE_FILE list"), these lazily freed anonymous pages
> > are moved from the anon LRU list onto the file LRU list. That means
> > (Inactive(file) + Active(file)) may be much larger than Cached in
> > /proc/meminfo, which confuses our users.
>
> Yeah, the counters are tricky for multiple reasons, as Michal said...
>
> > So we'd better account the lazily freed anonymous pages in
> > NR_FILE_PAGES as well.
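
To make the confusion concrete, here is a minimal userspace sketch
(illustration only; error handling is mostly omitted). After the
madvise(MADV_FREE) call, Inactive(file) in /proc/meminfo grows by
roughly the mapping size while Cached does not move, and AnonPages
stays the same even though the pages have left the anon LRU:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

#define SZ (512UL << 20)	/* 512 MB of anonymous memory */

int main(void)
{
	char *p = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;

	memset(p, 1, SZ);	/* fault in: accounted in AnonPages */

	/* Mark the range lazily freeable: the pages move from the anon
	 * LRU to the inactive file LRU, but remain in NR_ANON_MAPPED. */
	if (madvise(p, SZ, MADV_FREE))
		perror("madvise");

	pause();		/* keep the mapping alive for inspection */
	return 0;
}

Comparing grep -E 'AnonPages|Cached|Inactive\(file\)' /proc/meminfo
before and after the madvise() shows the mismatch described above.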
> >
> > Signed-off-by: Yafang Shao
> > Cc: Minchan Kim
> > Cc: Johannes Weiner
> > Cc: Michal Hocko
> > ---
> >  mm/memcontrol.c | 11 +++++++++--
> >  mm/rmap.c       | 26 ++++++++++++++++++--------
> >  mm/swap.c       |  2 ++
> >  mm/vmscan.c     |  2 ++
> >  4 files changed, 31 insertions(+), 10 deletions(-)
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 3dcbf24d2227..217a6f10fa8d 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -5659,8 +5659,15 @@ static int mem_cgroup_move_account(struct page *page,
> >
> >  	if (PageAnon(page)) {
> >  		if (page_mapped(page)) {
> > -			__mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages);
> > -			__mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages);
> > +			if (!PageSwapBacked(page) && !PageSwapCache(page) &&
> > +			    !PageUnevictable(page)) {
> > +				__mod_lruvec_state(from_vec, NR_FILE_PAGES, -nr_pages);
> > +				__mod_lruvec_state(to_vec, NR_FILE_PAGES, nr_pages);
> > +			} else {
> > +				__mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages);
> > +				__mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages);
> > +			}
> > +
> >  			if (PageTransHuge(page)) {
> >  				__mod_lruvec_state(from_vec, NR_ANON_THPS,
> >  						   -nr_pages);
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index 1b84945d655c..690ca7ff2392 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -1312,8 +1312,13 @@ static void page_remove_anon_compound_rmap(struct page *page)
> >  	if (unlikely(PageMlocked(page)))
> >  		clear_page_mlock(page);
> >
> > -	if (nr)
> > -		__mod_lruvec_page_state(page, NR_ANON_MAPPED, -nr);
> > +	if (nr) {
> > +		if (PageLRU(page) && PageAnon(page) && !PageSwapBacked(page) &&
> > +		    !PageSwapCache(page) && !PageUnevictable(page))
> > +			__mod_lruvec_page_state(page, NR_FILE_PAGES, -nr);
> > +		else
> > +			__mod_lruvec_page_state(page, NR_ANON_MAPPED, -nr);
> > +	}
> >  }
> >
> >  /**
> > @@ -1341,12 +1346,17 @@ void page_remove_rmap(struct page *page, bool compound)
> >  	if (!atomic_add_negative(-1, &page->_mapcount))
> >  		goto out;
> >
> > -	/*
> > -	 * We use the irq-unsafe __{inc|mod}_zone_page_stat because
> > -	 * these counters are not modified in interrupt context, and
> > -	 * pte lock(a spinlock) is held, which implies preemption disabled.
> > -	 */
> > -	__dec_lruvec_page_state(page, NR_ANON_MAPPED);
> > +	if (PageLRU(page) && PageAnon(page) && !PageSwapBacked(page) &&
> > +	    !PageSwapCache(page) && !PageUnevictable(page)) {
> > +		__dec_lruvec_page_state(page, NR_FILE_PAGES);
> > +	} else {
> > +		/*
> > +		 * We use the irq-unsafe __{inc|mod}_zone_page_stat because
> > +		 * these counters are not modified in interrupt context, and
> > +		 * pte lock(a spinlock) is held, which implies preemption disabled.
> > +		 */
> > +		__dec_lruvec_page_state(page, NR_ANON_MAPPED);
> > +	}
> >
> >  	if (unlikely(PageMlocked(page)))
> >  		clear_page_mlock(page);
> > diff --git a/mm/swap.c b/mm/swap.c
> > index 47a47681c86b..340c5276a0f3 100644
> > --- a/mm/swap.c
> > +++ b/mm/swap.c
> > @@ -601,6 +601,7 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
> >
> >  		del_page_from_lru_list(page, lruvec,
> >  				       LRU_INACTIVE_ANON + active);
> > +		__mod_lruvec_state(lruvec, NR_ANON_MAPPED, -nr_pages);
> >  		ClearPageActive(page);
> >  		ClearPageReferenced(page);
> >  		/*
> > @@ -610,6 +611,7 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
> >  		 */
> >  		ClearPageSwapBacked(page);
> >  		add_page_to_lru_list(page, lruvec, LRU_INACTIVE_FILE);
> > +		__mod_lruvec_state(lruvec, NR_FILE_PAGES, nr_pages);
> >
> >  		__count_vm_events(PGLAZYFREE, nr_pages);
> >  		__count_memcg_events(lruvec_memcg(lruvec), PGLAZYFREE,
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 1b8f0e059767..4821124c70f7 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -1428,6 +1428,8 @@ static unsigned int shrink_page_list(struct list_head *page_list,
> >  				goto keep_locked;
> >  			}
> >
> > +			mod_lruvec_page_state(page, NR_ANON_MAPPED, nr_pages);
> > +			mod_lruvec_page_state(page, NR_FILE_PAGES, -nr_pages);
> >  			count_vm_event(PGLAZYFREED);
> >  			count_memcg_page_event(page, PGLAZYFREED);
> >  		} else if (!mapping || !__remove_mapping(mapping, page, true,
> >

-- 
Thanks
Yafang
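
P.S. A possible cleanup, not part of the patch above: the lazy-free
test (PageAnon && !PageSwapBacked && !PageSwapCache &&
!PageUnevictable, plus PageLRU at the rmap call sites) is now
open-coded in several places. If the approach is acceptable, the
common part could move into a helper along these lines (sketch only;
page_is_lazyfree() is a made-up name, not an existing kernel API):

/*
 * Illustrative helper: true if @page is anonymous memory that was
 * marked with MADV_FREE, i.e. anon but no longer swap-backed, and
 * therefore sitting on the file LRU.
 */
static inline bool page_is_lazyfree(struct page *page)
{
	return PageAnon(page) && !PageSwapBacked(page) &&
	       !PageSwapCache(page) && !PageUnevictable(page);
}

That would keep the call sites in sync if the definition of a lazily
freed page ever changes.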