Date: Tue, 15 Oct 2019 10:20:48 +0200
From: Michal Hocko
To: Konstantin Khlebnikov
Cc: linux-mm@kvack.org, Andrew Morton, linux-kernel@vger.kernel.org,
	cgroups@vger.kernel.org, Vladimir Davydov, Johannes Weiner
Subject: Re: [PATCH] mm/memcontrol: update lruvec counters in mem_cgroup_move_account
Message-ID: <20191015082048.GU317@dhcp22.suse.cz>
References: <157112699975.7360.1062614888388489788.stgit@buzz>
In-Reply-To: <157112699975.7360.1062614888388489788.stgit@buzz>

On Tue 15-10-19 11:09:59, Konstantin Khlebnikov wrote:
> Mapped, dirty and writeback pages are also counted in per-lruvec stats.
> These counters needs update when page is moved between cgroups.

Please describe the user-visible effect.

> Fixes: 00f3ca2c2d66 ("mm: memcontrol: per-lruvec stats infrastructure")
> Signed-off-by: Konstantin Khlebnikov

I suspect we want Cc: stable, because broken stats might be really
misleading.
The patch looks ok to me otherwise.

Acked-by: Michal Hocko

> ---
>  mm/memcontrol.c |   18 ++++++++++++------
>  1 file changed, 12 insertions(+), 6 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index bdac56009a38..363106578876 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -5420,6 +5420,8 @@ static int mem_cgroup_move_account(struct page *page,
>  				   struct mem_cgroup *from,
>  				   struct mem_cgroup *to)
>  {
> +	struct lruvec *from_vec, *to_vec;
> +	struct pglist_data *pgdat;
>  	unsigned long flags;
>  	unsigned int nr_pages = compound ? hpage_nr_pages(page) : 1;
>  	int ret;
> @@ -5443,11 +5445,15 @@ static int mem_cgroup_move_account(struct page *page,
>
>  	anon = PageAnon(page);
>
> +	pgdat = page_pgdat(page);
> +	from_vec = mem_cgroup_lruvec(pgdat, from);
> +	to_vec = mem_cgroup_lruvec(pgdat, to);
> +
>  	spin_lock_irqsave(&from->move_lock, flags);
>
>  	if (!anon && page_mapped(page)) {
> -		__mod_memcg_state(from, NR_FILE_MAPPED, -nr_pages);
> -		__mod_memcg_state(to, NR_FILE_MAPPED, nr_pages);
> +		__mod_lruvec_state(from_vec, NR_FILE_MAPPED, -nr_pages);
> +		__mod_lruvec_state(to_vec, NR_FILE_MAPPED, nr_pages);
>  	}
>
>  	/*
> @@ -5459,14 +5465,14 @@ static int mem_cgroup_move_account(struct page *page,
>  		struct address_space *mapping = page_mapping(page);
>
>  		if (mapping_cap_account_dirty(mapping)) {
> -			__mod_memcg_state(from, NR_FILE_DIRTY, -nr_pages);
> -			__mod_memcg_state(to, NR_FILE_DIRTY, nr_pages);
> +			__mod_lruvec_state(from_vec, NR_FILE_DIRTY, -nr_pages);
> +			__mod_lruvec_state(to_vec, NR_FILE_DIRTY, nr_pages);
>  		}
>  	}
>
>  	if (PageWriteback(page)) {
> -		__mod_memcg_state(from, NR_WRITEBACK, -nr_pages);
> -		__mod_memcg_state(to, NR_WRITEBACK, nr_pages);
> +		__mod_lruvec_state(from_vec, NR_WRITEBACK, -nr_pages);
> +		__mod_lruvec_state(to_vec, NR_WRITEBACK, nr_pages);
>  	}
>
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE

-- 
Michal Hocko
SUSE Labs
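
For context on why the lruvec-aware helper matters here: __mod_memcg_state()
adjusts only the memcg-wide counter, while __mod_lruvec_state() also folds the
same delta into the per-node (lruvec) statistics, which is why the move path
above would otherwise leave the per-cgroup per-node NR_FILE_MAPPED,
NR_FILE_DIRTY and NR_WRITEBACK counts stale. The code below is a simplified
sketch of that relationship only, not the actual mm/memcontrol.c
implementation; the function name sketch_mod_lruvec_state() is made up for
illustration, and the per-cpu bookkeeping the real helper does is only noted
in a comment.

#include <linux/kernel.h>
#include <linux/memcontrol.h>
#include <linux/mm.h>
#include <linux/mmzone.h>

/*
 * Illustrative sketch: what a lruvec-aware counter update is expected to
 * cover, compared to a bare __mod_memcg_state() call.
 */
static void sketch_mod_lruvec_state(struct lruvec *lruvec,
				    enum node_stat_item idx, int val)
{
	struct mem_cgroup_per_node *pn;

	/* node-wide counter, visible in the per-node vmstat view */
	__mod_node_page_state(lruvec_pgdat(lruvec), idx, val);

	if (mem_cgroup_disabled())
		return;

	/* cgroup-aware part: memcg-wide counter plus the per-node breakdown */
	pn = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
	__mod_memcg_state(pn->memcg, idx, val);

	/* the real helper additionally maintains per-cpu lruvec counters */
}

In the patch, mem_cgroup_lruvec(pgdat, from) and mem_cgroup_lruvec(pgdat, to)
resolve the two lruvecs once, before move_lock is taken, so the three counter
pairs can then be moved through the lruvec-aware helper.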