From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 15 Oct 2024 17:52:56 -0400
From: Gregory Price <gourry@gourry.net>
To: kaiyang2@cs.cmu.edu
Cc: linux-mm@kvack.org, cgroups@vger.kernel.org, roman.gushchin@linux.dev,
	shakeel.butt@linux.dev, muchun.song@linux.dev,
	akpm@linux-foundation.org, mhocko@kernel.org, nehagholkar@meta.com,
	abhishekd@meta.com, hannes@cmpxchg.org, weixugc@google.com,
	rientjes@google.com
Subject: Re: [RFC PATCH 3/4] use memory.low local node protection for local node reclaim
Message-ID:
References: <20240920221202.1734227-1-kaiyang2@cs.cmu.edu>
 <20240920221202.1734227-4-kaiyang2@cs.cmu.edu>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20240920221202.1734227-4-kaiyang2@cs.cmu.edu>

On Fri, Sep 20, 2024 at 10:11:50PM +0000, kaiyang2@cs.cmu.edu wrote:
> From: Kaiyang Zhao
>
> When reclaim targets the top-tier node usage by the root memcg,
> apply local memory.low protection instead of global protection.
>

The changelog probably needs a little more context about the intended
effect of this change.  What exactly is the implication of this change
compared to applying it against elow?
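Roughly, as I read it, the distinction boils down to which usage/protection
pair gates reclaim.  A simplified sketch (the struct and helpers here are
invented for illustration, not the kernel types; field names follow the
patch):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical sketch, not kernel code: global memory.low compares a
 * cgroup's *total* usage against elow, while the patch compares the
 * *top-tier node* footprint against elocallow when reclaim targets
 * the local node.
 */
struct sketch_memcg {
	unsigned long usage;        /* total charged pages */
	unsigned long local_usage;  /* pages resident on the top-tier node */
	unsigned long elow;         /* effective global low protection */
	unsigned long elocallow;    /* effective local low protection */
};

/* Global check: is the whole-cgroup usage still under protection? */
static bool below_low_global(const struct sketch_memcg *m)
{
	return m->elow >= m->usage;
}

/* Local check: only the top-tier footprint is compared. */
static bool below_low_local(const struct sketch_memcg *m)
{
	return m->elocallow >= m->local_usage;
}
```

A cgroup can be over its global protection yet under its local one (or
vice versa), which flips whether local-node reclaim skips it; spelling
that out in the changelog would help.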
> Signed-off-by: Kaiyang Zhao
> ---
>  include/linux/memcontrol.h | 23 ++++++++++++++---------
>  mm/memcontrol.c            |  4 ++--
>  mm/vmscan.c                | 19 ++++++++++++++-----
>  3 files changed, 30 insertions(+), 16 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 94aba4498fca..256912b91922 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -586,9 +586,9 @@ static inline bool mem_cgroup_disabled(void)
>  static inline void mem_cgroup_protection(struct mem_cgroup *root,
>  					 struct mem_cgroup *memcg,
>  					 unsigned long *min,
> -					 unsigned long *low)
> +					 unsigned long *low, unsigned long *locallow)
>  {
> -	*min = *low = 0;
> +	*min = *low = *locallow = 0;
>

"locallow" can be read as "loc allow" or "local low"; you probably want
to change all the references to local_low.  Sorry for not mentioning
this in earlier feedback.

>  	if (mem_cgroup_disabled())
>  		return;
> @@ -631,10 +631,11 @@ static inline void mem_cgroup_protection(struct mem_cgroup *root,
>
>  	*min = READ_ONCE(memcg->memory.emin);
>  	*low = READ_ONCE(memcg->memory.elow);
> +	*locallow = READ_ONCE(memcg->memory.elocallow);
>  }
>
>  void mem_cgroup_calculate_protection(struct mem_cgroup *root,
> -				     struct mem_cgroup *memcg);
> +				     struct mem_cgroup *memcg, int is_local);
>
>  static inline bool mem_cgroup_unprotected(struct mem_cgroup *target,
>  					  struct mem_cgroup *memcg)
> @@ -651,13 +652,17 @@ static inline bool mem_cgroup_unprotected(struct mem_cgroup *target,
>  unsigned long get_cgroup_local_usage(struct mem_cgroup *memcg, bool flush);
>
>  static inline bool mem_cgroup_below_low(struct mem_cgroup *target,
> -					struct mem_cgroup *memcg)
> +					struct mem_cgroup *memcg, int is_local)
>  {
>  	if (mem_cgroup_unprotected(target, memcg))
>  		return false;
>
> -	return READ_ONCE(memcg->memory.elow) >=
> -		page_counter_read(&memcg->memory);
> +	if (is_local)
> +		return READ_ONCE(memcg->memory.elocallow) >=
> +			get_cgroup_local_usage(memcg, true);
> +	else
> +		return READ_ONCE(memcg->memory.elow) >=
> +			page_counter_read(&memcg->memory);

Don't need the else case here, since the if block returns.

>  }
>
>  static inline bool mem_cgroup_below_min(struct mem_cgroup *target,
> @@ -1159,13 +1164,13 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm,
>  static inline void mem_cgroup_protection(struct mem_cgroup *root,
>  					 struct mem_cgroup *memcg,
>  					 unsigned long *min,
> -					 unsigned long *low)
> +					 unsigned long *low, unsigned long *locallow)
>  {
>  	*min = *low = 0;
>  }
>
>  static inline void mem_cgroup_calculate_protection(struct mem_cgroup *root,
> -						   struct mem_cgroup *memcg)
> +						   struct mem_cgroup *memcg, int is_local)
>  {
>  }
>
> @@ -1175,7 +1180,7 @@ static inline bool mem_cgroup_unprotected(struct mem_cgroup *target,
>  	return true;
>  }
>  static inline bool mem_cgroup_below_low(struct mem_cgroup *target,
> -					struct mem_cgroup *memcg)
> +					struct mem_cgroup *memcg, int is_local)
>  {
>  	return false;
>  }
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index d7c5fff12105..61718ba998fe 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -4495,7 +4495,7 @@ struct cgroup_subsys memory_cgrp_subsys = {
>   * of a top-down tree iteration, not for isolated queries.
>   */
>  void mem_cgroup_calculate_protection(struct mem_cgroup *root,
> -				     struct mem_cgroup *memcg)
> +				     struct mem_cgroup *memcg, int is_local)
>  {
>  	bool recursive_protection =
>  		cgrp_dfl_root.flags & CGRP_ROOT_MEMORY_RECURSIVE_PROT;
> @@ -4507,7 +4507,7 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
>  		root = root_mem_cgroup;
>
>  	page_counter_calculate_protection(&root->memory, &memcg->memory,
> -					  recursive_protection, false);
> +					  recursive_protection, is_local);
>  }
>
>  static int charge_memcg(struct folio *folio, struct mem_cgroup *memcg,
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index ce471d686a88..a2681d52fc5f 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2377,6 +2377,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
>  	enum scan_balance scan_balance;
>  	unsigned long ap, fp;
>  	enum lru_list lru;
> +	int is_local = (pgdat->node_id == 0) && root_reclaim(sc);

int should be bool, to be more explicit about what the valid values
are.  Should be addressed across the patch set.

>
>  	/* If we have no swap space, do not bother scanning anon folios. */
>  	if (!sc->may_swap || !can_reclaim_anon_pages(memcg, pgdat->node_id, sc)) {
> @@ -2457,12 +2458,14 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
>  	for_each_evictable_lru(lru) {
>  		bool file = is_file_lru(lru);
>  		unsigned long lruvec_size;
> -		unsigned long low, min;
> +		unsigned long low, min, locallow;
>  		unsigned long scan;
>
>  		lruvec_size = lruvec_lru_size(lruvec, lru, sc->reclaim_idx);
>  		mem_cgroup_protection(sc->target_mem_cgroup, memcg,
> -				      &min, &low);
> +				      &min, &low, &locallow);
> +		if (is_local)
> +			low = locallow;
>
>  		if (min || low) {
>  			/*
> @@ -2494,7 +2497,12 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
>  			 * again by how much of the total memory used is under
>  			 * hard protection.
>  			 */
> -			unsigned long cgroup_size = mem_cgroup_size(memcg);
> +			unsigned long cgroup_size;
> +
> +			if (is_local)
> +				cgroup_size = get_cgroup_local_usage(memcg, true);
> +			else
> +				cgroup_size = mem_cgroup_size(memcg);
>  			unsigned long protection;
>
>  			/* memory.low scaling, make sure we retry before OOM */
> @@ -5869,6 +5877,7 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
>  	};
>  	struct mem_cgroup_reclaim_cookie *partial = &reclaim;
>  	struct mem_cgroup *memcg;
> +	int is_local = (pgdat->node_id == 0) && root_reclaim(sc);
>
>  	/*
>  	 * In most cases, direct reclaimers can do partial walks
> @@ -5896,7 +5905,7 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
>  	 */
>  	cond_resched();
>
> -	mem_cgroup_calculate_protection(target_memcg, memcg);
> +	mem_cgroup_calculate_protection(target_memcg, memcg, is_local);
>
>  	if (mem_cgroup_below_min(target_memcg, memcg)) {
>  		/*
>  		 * If there is no reclaimable memory, OOM.
>  		 */
>  		continue;
> -	} else if (mem_cgroup_below_low(target_memcg, memcg)) {
> +	} else if (mem_cgroup_below_low(target_memcg, memcg, is_local)) {
>  		/*
>  		 * Soft protection.
>  		 * Respect the protection only as long as
> --
> 2.43.0
>
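Putting the two style nits above together (bool instead of int, and no
else after a returning if), the below_low helper would read roughly
like this.  This is a simplified sketch with an invented struct, not
the actual kernel code; READ_ONCE() and the real memcg types are
omitted:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Simplified illustration of the suggested style only; the struct is
 * invented for this sketch, with field names following the patch.
 */
struct sketch_counters {
	unsigned long elow;         /* effective global low protection */
	unsigned long elocallow;    /* effective local low protection */
	unsigned long usage;        /* total usage */
	unsigned long local_usage;  /* top-tier node usage */
};

static bool sketch_below_low(const struct sketch_counters *c, bool is_local)
{
	if (is_local)
		return c->elocallow >= c->local_usage;

	/* No else needed: the branch above already returned. */
	return c->elow >= c->usage;
}
```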