From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <20240514161630.438125598@goodmis.org>
User-Agent: quilt/0.68
Date: Tue, 14 May 2024 12:16:01 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
 Vincent Donnefort
Subject: [for-next][PATCH 3/6] tracing: Allow user-space mapping of the ring-buffer
References: <20240514161558.664348429@goodmis.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
From: Vincent Donnefort

Currently, user-space extracts data from the ring-buffer via splice, which is
handy for storage or network sharing. However, due to splice limitations, it
is impossible to do real-time analysis without a copy.

A solution for that problem is to let the user-space map the ring-buffer
directly.
The mapping is exposed via the per-CPU file trace_pipe_raw. The first
element of the mapping is the meta-page. It is followed by each subbuffer
constituting the ring-buffer, ordered by their unique page ID:

  * Meta-page -- include/uapi/linux/trace_mmap.h for a description
  * Subbuf ID 0
  * Subbuf ID 1
     ...

It is therefore easy to translate a subbuf ID into an offset in the mapping:

  reader_id = meta->reader->id;
  reader_offset = meta->meta_page_size + reader_id * meta->subbuf_size;

When new data is available, the mapper must call a newly introduced ioctl:
TRACE_MMAP_IOCTL_GET_READER. This will update the Meta-page reader ID to
point to the next reader containing unread data.

Mapping will prevent snapshot and buffer size modifications.

Link: https://lore.kernel.org/linux-trace-kernel/20240510140435.3550353-4-vdonnefort@google.com

CC:
Signed-off-by: Vincent Donnefort
Signed-off-by: Steven Rostedt (Google)
---
 include/uapi/linux/trace_mmap.h |   2 +
 kernel/trace/trace.c            | 104 ++++++++++++++++++++++++++++++--
 kernel/trace/trace.h            |   1 +
 3 files changed, 102 insertions(+), 5 deletions(-)

diff --git a/include/uapi/linux/trace_mmap.h b/include/uapi/linux/trace_mmap.h
index b682e9925539..bd1066754220 100644
--- a/include/uapi/linux/trace_mmap.h
+++ b/include/uapi/linux/trace_mmap.h
@@ -43,4 +43,6 @@ struct trace_buffer_meta {
 	__u64	Reserved2;
 };
 
+#define TRACE_MMAP_IOCTL_GET_READER		_IO('T', 0x1)
+
 #endif /* _TRACE_MMAP_H_ */
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 233d1af39fff..a35e7f598233 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -1191,6 +1191,12 @@ static void tracing_snapshot_instance_cond(struct trace_array *tr,
 		return;
 	}
 
+	if (tr->mapped) {
+		trace_array_puts(tr, "*** BUFFER MEMORY MAPPED ***\n");
+		trace_array_puts(tr, "*** Can not use snapshot (sorry) ***\n");
+		return;
+	}
+
 	local_irq_save(flags);
 	update_max_tr(tr, current, smp_processor_id(), cond_data);
 	local_irq_restore(flags);
@@ -1323,7 +1329,7 @@ static int tracing_arm_snapshot_locked(struct trace_array *tr)
 	lockdep_assert_held(&trace_types_lock);
 
 	spin_lock(&tr->snapshot_trigger_lock);
-	if (tr->snapshot == UINT_MAX) {
+	if (tr->snapshot == UINT_MAX || tr->mapped) {
 		spin_unlock(&tr->snapshot_trigger_lock);
 		return -EBUSY;
 	}
@@ -6068,7 +6074,7 @@ static void tracing_set_nop(struct trace_array *tr)
 {
 	if (tr->current_trace == &nop_trace)
 		return;
-	
+
 	tr->current_trace->enabled--;
 
 	if (tr->current_trace->reset)
@@ -8194,15 +8200,32 @@ tracing_buffers_splice_read(struct file *file, loff_t *ppos,
 	return ret;
 }
 
-/* An ioctl call with cmd 0 to the ring buffer file will wake up all waiters */
 static long tracing_buffers_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 {
 	struct ftrace_buffer_info *info = file->private_data;
 	struct trace_iterator *iter = &info->iter;
+	int err;
+
+	if (cmd == TRACE_MMAP_IOCTL_GET_READER) {
+		if (!(file->f_flags & O_NONBLOCK)) {
+			err = ring_buffer_wait(iter->array_buffer->buffer,
+					       iter->cpu_file,
+					       iter->tr->buffer_percent,
+					       NULL, NULL);
+			if (err)
+				return err;
+		}
 
-	if (cmd)
-		return -ENOIOCTLCMD;
+		return ring_buffer_map_get_reader(iter->array_buffer->buffer,
+						  iter->cpu_file);
+	} else if (cmd) {
+		return -ENOTTY;
+	}
 
+	/*
+	 * An ioctl call with cmd 0 to the ring buffer file will wake up all
+	 * waiters
+	 */
 	mutex_lock(&trace_types_lock);
 
 	/* Make sure the waiters see the new wait_index */
@@ -8214,6 +8237,76 @@ static long tracing_buffers_ioctl(struct file *file, unsigned int cmd, unsigned
 	return 0;
 }
 
+#ifdef CONFIG_TRACER_MAX_TRACE
+static int get_snapshot_map(struct trace_array *tr)
+{
+	int err = 0;
+
+	/*
+	 * Called with mmap_lock held. lockdep would be unhappy if we would now
+	 * take trace_types_lock. Instead use the specific
+	 * snapshot_trigger_lock.
+	 */
+	spin_lock(&tr->snapshot_trigger_lock);
+
+	if (tr->snapshot || tr->mapped == UINT_MAX)
+		err = -EBUSY;
+	else
+		tr->mapped++;
+
+	spin_unlock(&tr->snapshot_trigger_lock);
+
+	/* Wait for update_max_tr() to observe iter->tr->mapped */
+	if (tr->mapped == 1)
+		synchronize_rcu();
+
+	return err;
+
+}
+static void put_snapshot_map(struct trace_array *tr)
+{
+	spin_lock(&tr->snapshot_trigger_lock);
+	if (!WARN_ON(!tr->mapped))
+		tr->mapped--;
+	spin_unlock(&tr->snapshot_trigger_lock);
+}
+#else
+static inline int get_snapshot_map(struct trace_array *tr) { return 0; }
+static inline void put_snapshot_map(struct trace_array *tr) { }
+#endif
+
+static void tracing_buffers_mmap_close(struct vm_area_struct *vma)
+{
+	struct ftrace_buffer_info *info = vma->vm_file->private_data;
+	struct trace_iterator *iter = &info->iter;
+
+	WARN_ON(ring_buffer_unmap(iter->array_buffer->buffer, iter->cpu_file));
+	put_snapshot_map(iter->tr);
+}
+
+static const struct vm_operations_struct tracing_buffers_vmops = {
+	.close		= tracing_buffers_mmap_close,
+};
+
+static int tracing_buffers_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+	struct ftrace_buffer_info *info = filp->private_data;
+	struct trace_iterator *iter = &info->iter;
+	int ret = 0;
+
+	ret = get_snapshot_map(iter->tr);
+	if (ret)
+		return ret;
+
+	ret = ring_buffer_map(iter->array_buffer->buffer, iter->cpu_file, vma);
+	if (ret)
+		put_snapshot_map(iter->tr);
+
+	vma->vm_ops = &tracing_buffers_vmops;
+
+	return ret;
+}
+
 static const struct file_operations tracing_buffers_fops = {
 	.open		= tracing_buffers_open,
 	.read		= tracing_buffers_read,
@@ -8223,6 +8316,7 @@ static const struct file_operations tracing_buffers_fops = {
 	.splice_read	= tracing_buffers_splice_read,
 	.unlocked_ioctl = tracing_buffers_ioctl,
 	.llseek		= no_llseek,
+	.mmap		= tracing_buffers_mmap,
 };
 
 static ssize_t
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 64450615ca0c..749a182dab48 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -336,6 +336,7 @@ struct trace_array {
 	bool			allocated_snapshot;
 	spinlock_t		snapshot_trigger_lock;
 	unsigned int		snapshot;
+	unsigned int		mapped;
 	unsigned long		max_latency;
 #ifdef CONFIG_FSNOTIFY
 	struct dentry		*d_max_latency;
-- 
2.43.0