From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 30 Jun 2021 08:58:19 +0200
From: Michal Hocko
To: "Matthew Wilcox (Oracle)"
Cc: linux-mm@kvack.org, cgroups@vger.kernel.org, Johannes Weiner, Vladimir Davydov
Subject: Re: [PATCH v3 05/18] mm/memcg: Convert memcg_check_events to take a node ID
References: <20210630040034.1155892-1-willy@infradead.org> <20210630040034.1155892-6-willy@infradead.org>
In-Reply-To: <20210630040034.1155892-6-willy@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
List-ID: linux-mm@kvack.org
On Wed 30-06-21 05:00:21, Matthew Wilcox wrote:
> memcg_check_events only uses the page's nid, so call page_to_nid in the
> callers to make the folio conversion easier.

It will also make the interface slightly easier to follow, as there
shouldn't be any real reason to take the page for these events. So this
is a good cleanup in general.

> Signed-off-by: Matthew Wilcox (Oracle)

Acked-by: Michal Hocko

Thanks.

> ---
>  mm/memcontrol.c | 23 ++++++++++++-----------
>  1 file changed, 12 insertions(+), 11 deletions(-)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 29fdb70dca42..5d143d46a8a4 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -846,7 +846,7 @@ static bool mem_cgroup_event_ratelimit(struct mem_cgroup *memcg,
>   * Check events in order.
>   *
>   */
> -static void memcg_check_events(struct mem_cgroup *memcg, struct page *page)
> +static void memcg_check_events(struct mem_cgroup *memcg, int nid)
>  {
>  	/* threshold event is triggered in finer grain than soft limit */
>  	if (unlikely(mem_cgroup_event_ratelimit(memcg,
> @@ -857,7 +857,7 @@ static void memcg_check_events(struct mem_cgroup *memcg, struct page *page)
>  					MEM_CGROUP_TARGET_SOFTLIMIT);
>  		mem_cgroup_threshold(memcg);
>  		if (unlikely(do_softlimit))
> -			mem_cgroup_update_tree(memcg, page_to_nid(page));
> +			mem_cgroup_update_tree(memcg, nid);
>  	}
>  }
> 
> @@ -5573,7 +5573,7 @@ static int mem_cgroup_move_account(struct page *page,
>  	struct lruvec *from_vec, *to_vec;
>  	struct pglist_data *pgdat;
>  	unsigned int nr_pages = compound ? thp_nr_pages(page) : 1;
> -	int ret;
> +	int nid, ret;
> 
>  	VM_BUG_ON(from == to);
>  	VM_BUG_ON_PAGE(PageLRU(page), page);
> @@ -5662,12 +5662,13 @@ static int mem_cgroup_move_account(struct page *page,
>  	__unlock_page_memcg(from);
> 
>  	ret = 0;
> +	nid = page_to_nid(page);
> 
>  	local_irq_disable();
>  	mem_cgroup_charge_statistics(to, nr_pages);
> -	memcg_check_events(to, page);
> +	memcg_check_events(to, nid);
>  	mem_cgroup_charge_statistics(from, -nr_pages);
> -	memcg_check_events(from, page);
> +	memcg_check_events(from, nid);
>  	local_irq_enable();
>  out_unlock:
>  	unlock_page(page);
> @@ -6688,7 +6689,7 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> 
>  	local_irq_disable();
>  	mem_cgroup_charge_statistics(memcg, nr_pages);
> -	memcg_check_events(memcg, page);
> +	memcg_check_events(memcg, page_to_nid(page));
>  	local_irq_enable();
>  out:
>  	return ret;
> @@ -6796,7 +6797,7 @@ struct uncharge_gather {
>  	unsigned long nr_memory;
>  	unsigned long pgpgout;
>  	unsigned long nr_kmem;
> -	struct page *dummy_page;
> +	int nid;
>  };
> 
>  static inline void uncharge_gather_clear(struct uncharge_gather *ug)
> @@ -6820,7 +6821,7 @@ static void uncharge_batch(const struct uncharge_gather *ug)
>  	local_irq_save(flags);
>  	__count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout);
>  	__this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_memory);
> -	memcg_check_events(ug->memcg, ug->dummy_page);
> +	memcg_check_events(ug->memcg, ug->nid);
>  	local_irq_restore(flags);
> 
>  	/* drop reference from uncharge_page */
> @@ -6861,7 +6862,7 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
>  		uncharge_gather_clear(ug);
>  	}
>  	ug->memcg = memcg;
> -	ug->dummy_page = page;
> +	ug->nid = page_to_nid(page);
> 
>  	/* pairs with css_put in uncharge_batch */
>  	css_get(&memcg->css);
> @@ -6979,7 +6980,7 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
> 
>  	local_irq_save(flags);
>  	mem_cgroup_charge_statistics(memcg, nr_pages);
> -	memcg_check_events(memcg, newpage);
> +	memcg_check_events(memcg, page_to_nid(newpage));
>  	local_irq_restore(flags);
>  }
> 
> @@ -7209,7 +7210,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
>  	 */
>  	VM_BUG_ON(!irqs_disabled());
>  	mem_cgroup_charge_statistics(memcg, -nr_entries);
> -	memcg_check_events(memcg, page);
> +	memcg_check_events(memcg, page_to_nid(page));
> 
>  	css_put(&memcg->css);
>  }
> -- 
> 2.30.2

-- 
Michal Hocko
SUSE Labs