From: Shakeel Butt <shakeel.butt@linux.dev>
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song,
	Vlastimil Babka, Alexei Starovoitov, Sebastian Andrzej Siewior,
	Harry Yoo, Yosry Ahmed, bpf@vger.kernel.org, linux-mm@kvack.org,
	cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
	Meta kernel team
Subject: [RFC PATCH 1/7] memcg: memcg_rstat_updated re-entrant safe against irqs
Date: Mon, 12 May 2025 20:13:10 -0700
Message-ID: <20250513031316.2147548-2-shakeel.butt@linux.dev>
In-Reply-To: <20250513031316.2147548-1-shakeel.butt@linux.dev>
References: <20250513031316.2147548-1-shakeel.butt@linux.dev>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The function memcg_rstat_updated() is used to track memcg stat updates
in order to optimize the flushes. At the moment, it is not re-entrant
safe, and its callers disable irqs before calling it. However, to reach
the goal of updating memcg stats without disabling irqs,
memcg_rstat_updated() needs to be made re-entrant safe against irqs.

This patch makes memcg_rstat_updated() re-entrant safe against irqs.
However, it now uses atomic_* ops, which on x86 add a lock prefix to the
instructions. Since this is per-cpu data, this_cpu_* ops would be
preferred, but the percpu pointer is stored in struct mem_cgroup, and
the upward traversal through struct mem_cgroup may cause two cache
misses compared to traversing through the struct memcg_vmstats_percpu
parent pointer.
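As a rough illustration of the batching scheme the diff below switches
to, here is a userspace C11 sketch. This is not kernel code: BATCH,
record_update() and the counter names are made up for the example, with
C11 atomics standing in for atomic_add_return()/atomic_xchg()/
atomic64_add(). It does not model get_cpu()/put_cpu() or the
statc->parent walk; it only shows why a single atomic RMW per update
stays correct if an irq re-enters the same path.

/* Userspace sketch only; names and BATCH value are illustrative. */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

#define BATCH 64	/* stand-in for MEMCG_CHARGE_BATCH */

static atomic_int pcpu_updates;		/* like statc->stats_updates */
static atomic_llong total_updates;	/* like vmstats->stats_updates */

/* Mirrors the re-entrant-safe update path in memcg_rstat_updated(). */
static void record_update(int val)
{
	int pending;

	if (!val)
		return;

	/* One atomic RMW: safe even if an irq re-enters this function. */
	pending = atomic_fetch_add(&pcpu_updates, abs(val)) + abs(val);
	if (pending < BATCH)
		return;

	/* Claim the pending count exactly once, fold it into the total. */
	pending = atomic_exchange(&pcpu_updates, 0);
	if (pending)
		atomic_fetch_add(&total_updates, (long long)pending);
}

int main(void)
{
	for (int i = 0; i < 1000; i++)
		record_update(1);

	printf("flushed=%lld pending=%d\n",
	       (long long)atomic_load(&total_updates),
	       atomic_load(&pcpu_updates));
	return 0;
}

With the old READ_ONCE()/WRITE_ONCE() pair, an irq arriving between the
read and the write could have its contribution overwritten; the single
atomic_add_return() in the patch avoids that lost update.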
NOTE: explore whether there is an atomic_* ops alternative without the
lock prefix.

Signed-off-by: Shakeel Butt <shakeel.butt@linux.dev>
---
 mm/memcontrol.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6cfa3550f300..2c4c095bf26c 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -503,7 +503,7 @@ static inline int memcg_events_index(enum vm_event_item idx)
 
 struct memcg_vmstats_percpu {
 	/* Stats updates since the last flush */
-	unsigned int			stats_updates;
+	atomic_t			stats_updates;
 
 	/* Cached pointers for fast iteration in memcg_rstat_updated() */
 	struct memcg_vmstats_percpu	*parent;
@@ -590,12 +590,15 @@ static bool memcg_vmstats_needs_flush(struct memcg_vmstats *vmstats)
 static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
 {
 	struct memcg_vmstats_percpu *statc;
-	int cpu = smp_processor_id();
-	unsigned int stats_updates;
+	int cpu;
+	int stats_updates;
 
 	if (!val)
 		return;
 
+	/* Don't assume callers have preemption disabled. */
+	cpu = get_cpu();
+
 	cgroup_rstat_updated(memcg->css.cgroup, cpu);
 	statc = this_cpu_ptr(memcg->vmstats_percpu);
 	for (; statc; statc = statc->parent) {
@@ -607,14 +610,16 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
 		if (memcg_vmstats_needs_flush(statc->vmstats))
 			break;
 
-		stats_updates = READ_ONCE(statc->stats_updates) + abs(val);
-		WRITE_ONCE(statc->stats_updates, stats_updates);
+		stats_updates = atomic_add_return(abs(val), &statc->stats_updates);
 		if (stats_updates < MEMCG_CHARGE_BATCH)
 			continue;
 
-		atomic64_add(stats_updates, &statc->vmstats->stats_updates);
-		WRITE_ONCE(statc->stats_updates, 0);
+		stats_updates = atomic_xchg(&statc->stats_updates, 0);
+		if (stats_updates)
+			atomic64_add(stats_updates,
+				     &statc->vmstats->stats_updates);
 	}
+	put_cpu();
 }
 
 static void __mem_cgroup_flush_stats(struct mem_cgroup *memcg, bool force)
@@ -4155,7 +4160,7 @@ static void mem_cgroup_css_rstat_flush(struct cgroup_subsys_state *css, int cpu)
 			mem_cgroup_stat_aggregate(&ac);
 
 	}
-	WRITE_ONCE(statc->stats_updates, 0);
+	atomic_set(&statc->stats_updates, 0);
 	/* We are in a per-cpu loop here, only do the atomic write once */
 	if (atomic64_read(&memcg->vmstats->stats_updates))
 		atomic64_set(&memcg->vmstats->stats_updates, 0);
-- 
2.47.1