From: Wei Xu <weixugc@google.com>
Date: Tue, 14 Jun 2022 17:30:06 -0700
Subject: Re: [RFC PATCH 2/3] mm/memory-tiers: Use page counter to track toptier memory usage
To: Tim Chen
Cc: Linux MM, Andrew Morton, Huang Ying, Greg Thelen, Yang Shi, Davidlohr Bueso, Brice Goglin, Michal Hocko, Linux Kernel Mailing List, Hesham Almatary, Dave Hansen, Jonathan Cameron, Alistair Popple, Dan Williams, Feng Tang, Jagdish Gediya, Baolin Wang, David Rientjes, "Aneesh Kumar K.V", Shakeel Butt
Content-Type: text/plain; charset="UTF-8"

(Resend in plain text. Sorry.)
On Tue, Jun 14, 2022 at 3:26 PM Tim Chen wrote:
>
> If we need to restrict toptier memory usage for a cgroup,
> we need to retrieve usage of toptier memory efficiently.
> Add a page counter to track toptier memory usage directly
> so its value can be returned right away.
> ---
>  include/linux/memcontrol.h |  1 +
>  mm/memcontrol.c            | 50 ++++++++++++++++++++++++++++++++------
>  2 files changed, 43 insertions(+), 8 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 9ecead1042b9..b4f727cba1de 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -241,6 +241,7 @@ struct mem_cgroup {
>
>  	/* Accounted resources */
>  	struct page_counter memory;	/* Both v1 & v2 */
> +	struct page_counter toptier;
>
>  	union {
>  		struct page_counter swap;	/* v2 only */
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 2f6e95e6d200..2f20ec2712b8 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -848,6 +848,23 @@ static void mem_cgroup_charge_statistics(struct mem_cgroup *memcg,
>  	__this_cpu_add(memcg->vmstats_percpu->nr_page_events, nr_pages);
>  }
>
> +static inline void mem_cgroup_charge_toptier(struct mem_cgroup *memcg,
> +					     int nid,
> +					     int nr_pages)
> +{
> +	if (!node_is_toptier(nid) || !memcg)
> +		return;
> +
> +	if (nr_pages >= 0) {
> +		page_counter_charge(&memcg->toptier,
> +				    (unsigned long) nr_pages);
> +	} else {
> +		nr_pages = -nr_pages;
> +		page_counter_uncharge(&memcg->toptier,
> +				      (unsigned long) nr_pages);
> +	}
> +}

When we don't know which pages are being charged, we should still
charge the usage to toptier (assuming that toptier always includes the
default tier), e.g. from try_charge_memcg().  The idea is that when
lower tier memory is not used, memcg->toptier and memcg->memory should
have the same value.  Otherwise, it can cause confusion about where
the pages of (memcg->memory - memcg->toptier) go.
>  static bool mem_cgroup_event_ratelimit(struct mem_cgroup *memcg,
> 					enum mem_cgroup_events_target target)
>  {
> @@ -3027,6 +3044,8 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
>  	if (!ret) {
>  		page->memcg_data = (unsigned long)objcg |
>  			MEMCG_DATA_KMEM;
> +		mem_cgroup_charge_toptier(page_memcg(page),
> +					  page_to_nid(page), 1 << order);
>  		return 0;
>  	}
>  	obj_cgroup_put(objcg);
> @@ -3050,6 +3069,8 @@ void __memcg_kmem_uncharge_page(struct page *page, int order)
>
>  	objcg = __folio_objcg(folio);
>  	obj_cgroup_uncharge_pages(objcg, nr_pages);
> +	mem_cgroup_charge_toptier(page_memcg(page),
> +				  page_to_nid(page), -nr_pages);
>  	folio->memcg_data = 0;
>  	obj_cgroup_put(objcg);
>  }
> @@ -3947,13 +3968,10 @@ unsigned long mem_cgroup_memtier_usage(struct mem_cgroup *memcg,
>
>  unsigned long mem_cgroup_toptier_usage(struct mem_cgroup *memcg)
>  {
> -	struct memory_tier *top_tier;
> -
> -	top_tier = list_first_entry(&memory_tiers, struct memory_tier, list);
> -	if (top_tier)
> -		return mem_cgroup_memtier_usage(memcg, top_tier);
> -	else
> +	if (!memcg)
>  		return 0;
> +
> +	return page_counter_read(&memcg->toptier);
>  }
>
>  #endif /* CONFIG_NUMA */
> @@ -5228,11 +5246,13 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
>  		memcg->oom_kill_disable = parent->oom_kill_disable;
>
>  		page_counter_init(&memcg->memory, &parent->memory);
> +		page_counter_init(&memcg->toptier, &parent->toptier);
>  		page_counter_init(&memcg->swap, &parent->swap);
>  		page_counter_init(&memcg->kmem, &parent->kmem);
>  		page_counter_init(&memcg->tcpmem, &parent->tcpmem);
>  	} else {
>  		page_counter_init(&memcg->memory, NULL);
> +		page_counter_init(&memcg->toptier, NULL);
>  		page_counter_init(&memcg->swap, NULL);
>  		page_counter_init(&memcg->kmem, NULL);
>  		page_counter_init(&memcg->tcpmem, NULL);
> @@ -5678,6 +5698,8 @@ static int mem_cgroup_move_account(struct page *page,
>  	memcg_check_events(to, nid);
>  	mem_cgroup_charge_statistics(from, -nr_pages);
>  	memcg_check_events(from, nid);
> +	mem_cgroup_charge_toptier(to, nid, nr_pages);
> +	mem_cgroup_charge_toptier(from, nid, -nr_pages);
>  	local_irq_enable();
>  out_unlock:
>  	folio_unlock(folio);
> @@ -6761,6 +6783,7 @@ static int charge_memcg(struct folio *folio, struct mem_cgroup *memcg,
>
>  	local_irq_disable();
>  	mem_cgroup_charge_statistics(memcg, nr_pages);
> +	mem_cgroup_charge_toptier(memcg, folio_nid(folio), nr_pages);
>  	memcg_check_events(memcg, folio_nid(folio));
>  	local_irq_enable();
>  out:
> @@ -6853,6 +6876,7 @@ struct uncharge_gather {
>  	unsigned long nr_memory;
>  	unsigned long pgpgout;
>  	unsigned long nr_kmem;
> +	unsigned long nr_toptier;
>  	int nid;
>  };
>
> @@ -6867,6 +6891,7 @@ static void uncharge_batch(const struct uncharge_gather *ug)
>
>  	if (ug->nr_memory) {
>  		page_counter_uncharge(&ug->memcg->memory, ug->nr_memory);
> +		page_counter_uncharge(&ug->memcg->toptier, ug->nr_toptier);
>  		if (do_memsw_account())
>  			page_counter_uncharge(&ug->memcg->memsw, ug->nr_memory);
>  		if (ug->nr_kmem)
> @@ -6929,12 +6954,18 @@ static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug)
>  		ug->nr_memory += nr_pages;
>  		ug->nr_kmem += nr_pages;
>
> +		if (node_is_toptier(folio_nid(folio)))
> +			ug->nr_toptier += nr_pages;
> +
>  		folio->memcg_data = 0;
>  		obj_cgroup_put(objcg);
>  	} else {
>  		/* LRU pages aren't accounted at the root level */
> -		if (!mem_cgroup_is_root(memcg))
> +		if (!mem_cgroup_is_root(memcg)) {
>  			ug->nr_memory += nr_pages;
> +			if (node_is_toptier(folio_nid(folio)))
> +				ug->nr_toptier += nr_pages;
> +		}
>  		ug->pgpgout++;
>
>  		folio->memcg_data = 0;
> @@ -7011,6 +7042,7 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new)
>  	/* Force-charge the new page. The old one will be freed soon */
>  	if (!mem_cgroup_is_root(memcg)) {
>  		page_counter_charge(&memcg->memory, nr_pages);
> +		mem_cgroup_charge_toptier(memcg, folio_nid(new), nr_pages);
>  		if (do_memsw_account())
>  			page_counter_charge(&memcg->memsw, nr_pages);
>  	}
> @@ -7231,8 +7263,10 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
>
>  	folio->memcg_data = 0;
>
> -	if (!mem_cgroup_is_root(memcg))
> +	if (!mem_cgroup_is_root(memcg)) {
>  		page_counter_uncharge(&memcg->memory, nr_entries);
> +		mem_cgroup_charge_toptier(memcg, folio_nid(folio), -nr_entries);
> +	}
>
>  	if (!cgroup_memory_noswap && memcg != swap_memcg) {
>  		if (!mem_cgroup_is_root(swap_memcg))
> --
> 2.35.1
>
>