From mboxrd@z Thu Jan  1 00:00:00 1970
From: Shakeel Butt <shakeel.butt@linux.dev>
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song, Vlastimil Babka, Alexei Starovoitov, Sebastian Andrzej Siewior, Harry Yoo, Yosry Ahmed, bpf@vger.kernel.org, linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Meta kernel team
Subject: [RFC PATCH 5/7] memcg: make __mod_memcg_lruvec_state re-entrant safe against irqs
Date: Mon, 12 May 2025 20:13:14 -0700
Message-ID: <20250513031316.2147548-6-shakeel.butt@linux.dev>
In-Reply-To: <20250513031316.2147548-1-shakeel.butt@linux.dev>
References: <20250513031316.2147548-1-shakeel.butt@linux.dev>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Let's make __mod_memcg_lruvec_state() re-entrant safe and rename it to
mod_memcg_lruvec_state(). The only change needed is to convert the usage
of __this_cpu_add() to this_cpu_add(). There are two callers of
mod_memcg_lruvec_state() and one of them, i.e.
__mod_objcg_mlstate(), will become re-entrant safe as well, so rename it
to mod_objcg_mlstate(). The last caller, __mod_lruvec_state(), still
calls __mod_node_page_state(), which is not re-entrant safe yet, so keep
it as is.

Signed-off-by: Shakeel Butt <shakeel.butt@linux.dev>
---
 mm/memcontrol.c | 28 ++++++++++++++++------------
 1 file changed, 16 insertions(+), 12 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 9e7dc90cc460..adf2f1922118 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -731,7 +731,7 @@ unsigned long memcg_page_state_local(struct mem_cgroup *memcg, int idx)
 }
 #endif
 
-static void __mod_memcg_lruvec_state(struct lruvec *lruvec,
+static void mod_memcg_lruvec_state(struct lruvec *lruvec,
				     enum node_stat_item idx,
				     int val)
 {
@@ -743,16 +743,20 @@ static void __mod_memcg_lruvec_state(struct lruvec *lruvec,
	if (WARN_ONCE(BAD_STAT_IDX(i), "%s: missing stat item %d\n", __func__, idx))
		return;
 
+	if (WARN_ONCE(in_nmi(), "%s: called in nmi context for stat item %d\n",
+		      __func__, idx))
+		return;
+
	pn = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
	memcg = pn->memcg;
 
	cpu = get_cpu();
	/* Update memcg */
-	__this_cpu_add(memcg->vmstats_percpu->state[i], val);
+	this_cpu_add(memcg->vmstats_percpu->state[i], val);
 
	/* Update lruvec */
-	__this_cpu_add(pn->lruvec_stats_percpu->state[i], val);
+	this_cpu_add(pn->lruvec_stats_percpu->state[i], val);
 
	val = memcg_state_val_in_pages(idx, val);
	memcg_rstat_updated(memcg, val, cpu);
@@ -779,7 +783,7 @@ void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 
	/* Update memcg and lruvec */
	if (!mem_cgroup_disabled())
-		__mod_memcg_lruvec_state(lruvec, idx, val);
+		mod_memcg_lruvec_state(lruvec, idx, val);
 }
 
 void __lruvec_stat_mod_folio(struct folio *folio, enum node_stat_item idx,
@@ -2559,7 +2563,7 @@ static void commit_charge(struct folio *folio, struct mem_cgroup *memcg)
	folio->memcg_data = (unsigned long)memcg;
 }
 
-static inline void __mod_objcg_mlstate(struct obj_cgroup *objcg,
+static inline void mod_objcg_mlstate(struct obj_cgroup *objcg,
				       struct pglist_data *pgdat,
				       enum node_stat_item idx, int nr)
 {
@@ -2570,7 +2574,7 @@ static inline void __mod_objcg_mlstate(struct obj_cgroup *objcg,
	memcg = obj_cgroup_memcg(objcg);
	if (likely(!in_nmi())) {
		lruvec = mem_cgroup_lruvec(memcg, pgdat);
-		__mod_memcg_lruvec_state(lruvec, idx, nr);
+		mod_memcg_lruvec_state(lruvec, idx, nr);
	} else {
		struct mem_cgroup_per_node *pn =
			memcg->nodeinfo[pgdat->node_id];
@@ -2901,12 +2905,12 @@ static void __account_obj_stock(struct obj_cgroup *objcg,
		struct pglist_data *oldpg = stock->cached_pgdat;
 
		if (stock->nr_slab_reclaimable_b) {
-			__mod_objcg_mlstate(objcg, oldpg, NR_SLAB_RECLAIMABLE_B,
+			mod_objcg_mlstate(objcg, oldpg, NR_SLAB_RECLAIMABLE_B,
					  stock->nr_slab_reclaimable_b);
			stock->nr_slab_reclaimable_b = 0;
		}
		if (stock->nr_slab_unreclaimable_b) {
-			__mod_objcg_mlstate(objcg, oldpg, NR_SLAB_UNRECLAIMABLE_B,
+			mod_objcg_mlstate(objcg, oldpg, NR_SLAB_UNRECLAIMABLE_B,
					  stock->nr_slab_unreclaimable_b);
			stock->nr_slab_unreclaimable_b = 0;
		}
@@ -2932,7 +2936,7 @@ static void __account_obj_stock(struct obj_cgroup *objcg,
		}
	}
	if (nr)
-		__mod_objcg_mlstate(objcg, pgdat, idx, nr);
+		mod_objcg_mlstate(objcg, pgdat, idx, nr);
 }
 
 static bool consume_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes,
@@ -3004,13 +3008,13 @@ static void drain_obj_stock(struct obj_stock_pcp *stock)
	 */
	if (stock->nr_slab_reclaimable_b || stock->nr_slab_unreclaimable_b) {
		if (stock->nr_slab_reclaimable_b) {
-			__mod_objcg_mlstate(old, stock->cached_pgdat,
+			mod_objcg_mlstate(old, stock->cached_pgdat,
					  NR_SLAB_RECLAIMABLE_B,
					  stock->nr_slab_reclaimable_b);
			stock->nr_slab_reclaimable_b = 0;
		}
		if (stock->nr_slab_unreclaimable_b) {
-			__mod_objcg_mlstate(old, stock->cached_pgdat,
+			mod_objcg_mlstate(old, stock->cached_pgdat,
					  NR_SLAB_UNRECLAIMABLE_B,
					  stock->nr_slab_unreclaimable_b);
			stock->nr_slab_unreclaimable_b = 0;
@@ -3050,7 +3054,7 @@ static void refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes,
 
	if (unlikely(in_nmi())) {
		if (pgdat)
-			__mod_objcg_mlstate(objcg, pgdat, idx, nr_bytes);
+			mod_objcg_mlstate(objcg, pgdat, idx, nr_bytes);
		nr_pages = nr_bytes >> PAGE_SHIFT;
		nr_bytes = nr_bytes & (PAGE_SIZE - 1);
		atomic_add(nr_bytes, &objcg->nr_charged_bytes);
-- 
2.47.1