From: alexei.starovoitov@gmail.com
To: bpf@vger.kernel.org
Cc: andrii@kernel.org, memxor@gmail.com, akpm@linux-foundation.org,
peterz@infradead.org, vbabka@suse.cz, bigeasy@linutronix.de,
rostedt@goodmis.org, houtao1@huawei.com, hannes@cmpxchg.org,
shakeel.butt@linux.dev, mhocko@suse.com, willy@infradead.org,
tglx@linutronix.de, jannh@google.com, tj@kernel.org,
linux-mm@kvack.org, kernel-team@fb.com
Subject: [PATCH bpf-next v3 4/6] memcg: Use trylock to access memcg stock_lock.
Date: Tue, 17 Dec 2024 19:07:17 -0800
Message-ID: <20241218030720.1602449-5-alexei.starovoitov@gmail.com>
In-Reply-To: <20241218030720.1602449-1-alexei.starovoitov@gmail.com>
From: Alexei Starovoitov <ast@kernel.org>
Teach memcg to operate under trylock conditions when spinning
locks cannot be used.

The end result is that __memcg_kmem_charge_page() and
__memcg_kmem_uncharge_page() are safe to use from any context
in both RT and !RT.

In !RT an NMI handler may fail to trylock stock_lock; in RT,
hard IRQ and NMI handlers will not attempt the trylock at all.
In either case the per-cpu stock is simply bypassed: the charge
falls through to the regular slow path and the uncharge goes
directly to the page counters.
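
For illustration, a hypothetical caller charging a page from such a
context could look like this. This is a sketch only: the helper name
is made up and the exact gfp combination is illustrative; the real
opportunistic allocator, try_alloc_pages(), is introduced elsewhere
in this series, which also makes the page allocator itself safe to
call from these contexts.

	/* Hypothetical sketch, not part of this patch. */
	static struct page *sketch_try_alloc_page(int nid)
	{
		/*
		 * No reclaim flags, so the allocation never sleeps.
		 * __GFP_ACCOUNT routes the page through the memcg
		 * charge path; __GFP_TRYLOCK makes consume_stock()
		 * fail fast instead of spinning on stock_lock.
		 */
		gfp_t gfp = __GFP_TRYLOCK | __GFP_ACCOUNT | __GFP_NOWARN;

		return alloc_pages_node(nid, gfp, 0);
	}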
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
mm/memcontrol.c | 20 ++++++++++++++++----
1 file changed, 16 insertions(+), 4 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 7b3503d12aaf..f168d223375f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1756,7 +1756,8 @@ static bool obj_stock_flush_required(struct memcg_stock_pcp *stock,
  *
  * returns true if successful, false otherwise.
  */
-static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
+static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages,
+			  gfp_t gfp_mask)
 {
 	struct memcg_stock_pcp *stock;
 	unsigned int stock_pages;
@@ -1766,7 +1767,11 @@ static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
 	if (nr_pages > MEMCG_CHARGE_BATCH)
 		return ret;
 
-	local_lock_irqsave(&memcg_stock.stock_lock, flags);
+	if (!local_trylock_irqsave(&memcg_stock.stock_lock, flags)) {
+		if (gfp_mask & __GFP_TRYLOCK)
+			return ret;
+		local_lock_irqsave(&memcg_stock.stock_lock, flags);
+	}
 
 	stock = this_cpu_ptr(&memcg_stock);
 	stock_pages = READ_ONCE(stock->nr_pages);
@@ -1851,7 +1856,14 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
 {
 	unsigned long flags;
 
-	local_lock_irqsave(&memcg_stock.stock_lock, flags);
+	if (!local_trylock_irqsave(&memcg_stock.stock_lock, flags)) {
+		/*
+		 * In case of unlikely failure to lock percpu stock_lock
+		 * uncharge memcg directly.
+		 */
+		mem_cgroup_cancel_charge(memcg, nr_pages);
+		return;
+	}
 	__refill_stock(memcg, nr_pages);
 	local_unlock_irqrestore(&memcg_stock.stock_lock, flags);
 }
@@ -2196,7 +2208,7 @@ int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
 	unsigned long pflags;
 
 retry:
-	if (consume_stock(memcg, nr_pages))
+	if (consume_stock(memcg, nr_pages, gfp_mask))
 		return 0;
 
 	if (!do_memsw_account() ||
--
2.43.5
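
A note on the asymmetry of the two fallbacks above: the charge side
may still spin when the caller did not pass __GFP_TRYLOCK, while the
uncharge side must always make progress, so on a contended stock_lock
it skips the per-cpu batching and returns the pages to the page
counters directly via mem_cgroup_cancel_charge(). The cost is a
missed batching opportunity, not correctness. The trylock primitive
itself comes from patch 3/6 of this series; the semantics assumed
here, inferred from its use in this patch, are:

	unsigned long flags;

	/*
	 * Like local_lock_irqsave(), but returns false instead of
	 * spinning (or sleeping on RT) when the lock cannot be
	 * acquired. The failure path is what makes NMI use safe.
	 */
	if (local_trylock_irqsave(&memcg_stock.stock_lock, flags)) {
		/* ... touch the per-cpu stock ... */
		local_unlock_irqrestore(&memcg_stock.stock_lock, flags);
	}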