Subject: Re: [PATCH] bpf: don't call mmap_read_trylock() from IRQ context
From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Date: Sat, 15 Jun 2024 23:26:08 +0900
Message-ID: <394049f8-49cc-4d82-8ff1-c19a38a61fe6@I-love.SAKURA.ne.jp>
To: Alexei Starovoitov, Nicolas Saenz Julienne, Axel Rasmussen, Vlastimil Babka, "Steven Rostedt (Google)"
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, bpf, LKML, linux-mm
In-Reply-To: <79d32963-de38-49cf-a03f-f6f5f4fbb462@I-love.SAKURA.ne.jp>
References: <4b875158-1aa7-402e-8861-860a493c49cd@I-love.SAKURA.ne.jp> <3e9b2a54-73d4-48cb-a510-d17984c97a45@I-love.SAKURA.ne.jp> <52d3d784-47ad-4190-920b-e5fe4673b11f@I-love.SAKURA.ne.jp> <79d32963-de38-49cf-a03f-f6f5f4fbb462@I-love.SAKURA.ne.jp>

On 2024/06/15 19:59, Tetsuo Handa wrote:
> Is the reason because &buf[idx] in get_memcg_path_buf() might become out of
> bounds due to preemption in normal context if PREEMPT_RT=y? If so, can't we
> add an "idx >= 0 && idx < CONTEXT_COUNT" check to get_memcg_path_buf() and
> return NULL if preemption (or interrupts, or recursion, if any) exhausted
> the per-CPU buffer?

More simply, why not use an on-stack buffer, since MEMCG_PATH_BUF_SIZE is only 256 bytes?
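As an illustrative sketch of the cost (my example, not part of the patch below): with an on-stack buffer, the call TRACE_MMAP_LOCK_EVENT(start_locking, mm, write) made by __mmap_lock_do_trace_start_locking() would expand to roughly the following, so each trace call site pays only a 256-byte stack reservation plus one atomic_read() when no trace event is registered:

	do {
		/* MEMCG_PATH_BUF_SIZE is MAX_FILTER_STR_VAL, i.e. 256 bytes of stack. */
		char buf[MEMCG_PATH_BUF_SIZE];

		/* Writes "" and returns early unless a trace event is registered. */
		get_mm_memcg_path(mm, buf, sizeof(buf));
		trace_mmap_lock_start_locking(mm, buf, write);
	} while (0);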
---
 mm/mmap_lock.c | 175 ++++++-------------------------------------------
 1 file changed, 20 insertions(+), 155 deletions(-)

diff --git a/mm/mmap_lock.c b/mm/mmap_lock.c
index 1854850b4b89..59165c01c960 100644
--- a/mm/mmap_lock.c
+++ b/mm/mmap_lock.c
@@ -19,14 +19,7 @@ EXPORT_TRACEPOINT_SYMBOL(mmap_lock_released);
 
 #ifdef CONFIG_MEMCG
 
-/*
- * Our various events all share the same buffer (because we don't want or need
- * to allocate a set of buffers *per event type*), so we need to protect against
- * concurrent _reg() and _unreg() calls, and count how many _reg() calls have
- * been made.
- */
-static DEFINE_MUTEX(reg_lock);
-static int reg_refcount; /* Protected by reg_lock. */
+static atomic_t reg_refcount;
 
 /*
  * Size of the buffer for memcg path names. Ignoring stack trace support,
@@ -34,136 +27,22 @@ static int reg_refcount; /* Protected by reg_lock. */
  */
 #define MEMCG_PATH_BUF_SIZE MAX_FILTER_STR_VAL
 
-/*
- * How many contexts our trace events might be called in: normal, softirq, irq,
- * and NMI.
- */
-#define CONTEXT_COUNT 4
-
-struct memcg_path {
-	local_lock_t lock;
-	char __rcu *buf;
-	local_t buf_idx;
-};
-static DEFINE_PER_CPU(struct memcg_path, memcg_paths) = {
-	.lock = INIT_LOCAL_LOCK(lock),
-	.buf_idx = LOCAL_INIT(0),
-};
-
-static char **tmp_bufs;
-
-/* Called with reg_lock held. */
-static void free_memcg_path_bufs(void)
-{
-	struct memcg_path *memcg_path;
-	int cpu;
-	char **old = tmp_bufs;
-
-	for_each_possible_cpu(cpu) {
-		memcg_path = per_cpu_ptr(&memcg_paths, cpu);
-		*(old++) = rcu_dereference_protected(memcg_path->buf,
-						     lockdep_is_held(&reg_lock));
-		rcu_assign_pointer(memcg_path->buf, NULL);
-	}
-
-	/* Wait for inflight memcg_path_buf users to finish. */
-	synchronize_rcu();
-
-	old = tmp_bufs;
-	for_each_possible_cpu(cpu) {
-		kfree(*(old++));
-	}
-
-	kfree(tmp_bufs);
-	tmp_bufs = NULL;
-}
-
 int trace_mmap_lock_reg(void)
 {
-	int cpu;
-	char *new;
-
-	mutex_lock(&reg_lock);
-
-	/* If the refcount is going 0->1, proceed with allocating buffers. */
-	if (reg_refcount++)
-		goto out;
-
-	tmp_bufs = kmalloc_array(num_possible_cpus(), sizeof(*tmp_bufs),
-				 GFP_KERNEL);
-	if (tmp_bufs == NULL)
-		goto out_fail;
-
-	for_each_possible_cpu(cpu) {
-		new = kmalloc(MEMCG_PATH_BUF_SIZE * CONTEXT_COUNT, GFP_KERNEL);
-		if (new == NULL)
-			goto out_fail_free;
-		rcu_assign_pointer(per_cpu_ptr(&memcg_paths, cpu)->buf, new);
-		/* Don't need to wait for inflights, they'd have gotten NULL. */
-	}
-
-out:
-	mutex_unlock(&reg_lock);
+	atomic_inc(&reg_refcount);
 	return 0;
-
-out_fail_free:
-	free_memcg_path_bufs();
-out_fail:
-	/* Since we failed, undo the earlier ref increment. */
-	--reg_refcount;
-
-	mutex_unlock(&reg_lock);
-	return -ENOMEM;
 }
 
 void trace_mmap_lock_unreg(void)
 {
-	mutex_lock(&reg_lock);
-
-	/* If the refcount is going 1->0, proceed with freeing buffers. */
-	if (--reg_refcount)
-		goto out;
-
-	free_memcg_path_bufs();
-
-out:
-	mutex_unlock(&reg_lock);
-}
-
-static inline char *get_memcg_path_buf(void)
-{
-	struct memcg_path *memcg_path = this_cpu_ptr(&memcg_paths);
-	char *buf;
-	int idx;
-
-	rcu_read_lock();
-	buf = rcu_dereference(memcg_path->buf);
-	if (buf == NULL) {
-		rcu_read_unlock();
-		return NULL;
-	}
-	idx = local_add_return(MEMCG_PATH_BUF_SIZE, &memcg_path->buf_idx) -
-	      MEMCG_PATH_BUF_SIZE;
-	return &buf[idx];
+	atomic_dec(&reg_refcount);
 }
 
-static inline void put_memcg_path_buf(void)
-{
-	local_sub(MEMCG_PATH_BUF_SIZE, &this_cpu_ptr(&memcg_paths)->buf_idx);
-	rcu_read_unlock();
-}
-
-#define TRACE_MMAP_LOCK_EVENT(type, mm, ...)                                 \
-	do {                                                                  \
-		const char *memcg_path;                                       \
-		local_lock(&memcg_paths.lock);                                \
-		memcg_path = get_mm_memcg_path(mm);                           \
-		trace_mmap_lock_##type(mm,                                    \
-				       memcg_path != NULL ? memcg_path : "",  \
-				       ##__VA_ARGS__);                        \
-		if (likely(memcg_path != NULL))                               \
-			put_memcg_path_buf();                                 \
-		local_unlock(&memcg_paths.lock);                              \
+#define TRACE_MMAP_LOCK_EVENT(type, mm, ...)                    \
+	do {                                                    \
+		char buf[MEMCG_PATH_BUF_SIZE];                  \
+		get_mm_memcg_path(mm, buf, sizeof(buf));        \
+		trace_mmap_lock_##type(mm, buf, ##__VA_ARGS__); \
 	} while (0)
 
 #else /* !CONFIG_MEMCG */
 
@@ -185,37 +64,23 @@ void trace_mmap_lock_unreg(void)
 #ifdef CONFIG_TRACING
 #ifdef CONFIG_MEMCG
 /*
- * Write the given mm_struct's memcg path to a percpu buffer, and return a
- * pointer to it. If the path cannot be determined, or no buffer was available
- * (because the trace event is being unregistered), NULL is returned.
- *
- * Note: buffers are allocated per-cpu to avoid locking, so preemption must be
- * disabled by the caller before calling us, and re-enabled only after the
- * caller is done with the pointer.
- *
- * The caller must call put_memcg_path_buf() once the buffer is no longer
- * needed. This must be done while preemption is still disabled.
+ * Write the given mm_struct's memcg path to on-stack buffer. If the path cannot be
+ * determined or the trace event is being unregistered, empty string is written.
  */
-static const char *get_mm_memcg_path(struct mm_struct *mm)
+static void get_mm_memcg_path(struct mm_struct *mm, char *buf, size_t buflen)
 {
-	char *buf = NULL;
-	struct mem_cgroup *memcg = get_mem_cgroup_from_mm(mm);
+	struct mem_cgroup *memcg;
 
+	buf[0] = '\0';
+	/* No need to get path if no trace event is registered. */
+	if (!atomic_read(&reg_refcount))
+		return;
+	memcg = get_mem_cgroup_from_mm(mm);
 	if (memcg == NULL)
-		goto out;
-	if (unlikely(memcg->css.cgroup == NULL))
-		goto out_put;
-
-	buf = get_memcg_path_buf();
-	if (buf == NULL)
-		goto out_put;
-
-	cgroup_path(memcg->css.cgroup, buf, MEMCG_PATH_BUF_SIZE);
-
-out_put:
+		return;
+	if (memcg->css.cgroup)
+		cgroup_path(memcg->css.cgroup, buf, buflen);
 	css_put(&memcg->css);
-out:
-	return buf;
 }
 #endif /* CONFIG_MEMCG */
-- 
2.18.4