ksummit.lists.linux.dev archive mirror
From: Steven Rostedt <rostedt@goodmis.org>
To: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
	"ksummit-discuss@lists.linuxfoundation.org"
	<ksummit-discuss@lists.linuxfoundation.org>
Subject: Re: [Ksummit-discuss] [TECH TOPIC] printk redesign
Date: Thu, 22 Jun 2017 10:06:41 -0400
Message-ID: <20170622100641.1dae4e3c@gandalf.local.home>
In-Reply-To: <20170621111210.GA7502@jagdpanzerIV.localdomain>

On Wed, 21 Jun 2017 20:12:10 +0900
Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com> wrote:


> I thought about it, and the question is:
> would lockless per-CPU logbuffers buy us anything? We used to have

Well, I'm not 100% happy with the current NMI approach. There is still
no "print everything" from NMI. That is, prints from NMI must be placed
in a buffer before going out, and that limits how much can be printed.
And an ftrace_dump_on_oops can be huge.

> problems with the logbuf_lock, but not anymore (to the best of my
> knowledge). We deal with logbuf_lock deadlocks using printk_nmi and
> printk_safe. So I'd say that logbuf_lock probably doesn't bother us
> anymore; it's all the locks that printk can't control that bother
> us (semaphore, scheduler, timekeeping, serial consoles, etc.).
> 
> So would per-CPU logbufs be better? We would need to do an N-way merge
> (of the N per-CPU logbufs) when we print the kernel message log, correct?

Yes.

> 
> > 2) have two types of console interfaces. A normal and a critical.
> > 
> > 3) have a thread that is woken whenever there is data in any of the
> > buffers, and reads the buffers, again lockless. But to do this in a
> > reasonable manner, unless you break the printks up in sub buffers like
> > ftrace, if the consumer isn't fast enough, newer messages are dropped.  
> 
> Yes, so I definitely want to have printing offloading. But, in my
> experience, it's not so simple when it comes to offloading. If we
> compare offloading with direct printing, offloading does change
> printk behaviour, and I have seen a number of dropped-message bug
> reports caused by offloading. The existing direct printing can
> throttle a CPU that printks a lot.
> 
> direct printing
> 
> 	CPU1
> 
> 	printk
> 	call_console_drivers
> 	printk
> 	call_console_drivers
> 	...
> 	printk
> 	call_console_drivers
> 
> 
> So new logbuf entries do not appear in the logbuf until the previous
> ones are printed to the serial console, while with offloading
> it's different:
> 
> offloading
> 
> 	CPU1				CPU2
> 	printk
> 	printk				call_console_drivers
> 	printk
> 	printk				call_console_drivers
> 	printk
> 					call_console_drivers
> 
> New logbuf entries now appear uncontrollably.
> 
> Well, nothing new here. We can already hit this scenario: we just need
> one CPU spinning in console_unlock() and one or several CPUs doing
> printk. But with offloading we potentially break a trivial case - a
> single CPU that calls printk.

We could come up with another way to throttle the CPU that does all the
printks.

> 
> 
> So maybe, in addition to offloading, we would also need some sort of
> throttling mechanism in printk.

Yes.


-- Steve


Thread overview: 57+ messages
2017-06-19  5:21 Sergey Senozhatsky
2017-06-19  6:22 ` Hannes Reinecke
2017-06-19 14:39   ` Steven Rostedt
2017-06-19 15:20     ` Andrew Lunn
2017-06-19 15:54       ` Hannes Reinecke
2017-06-19 16:17         ` Andrew Lunn
2017-06-19 16:23         ` Mark Brown
2017-06-20 15:58           ` Sergey Senozhatsky
2017-06-20 16:44             ` Luck, Tony
2017-06-20 17:11               ` Sergey Senozhatsky
2017-06-20 17:27                 ` Mark Brown
2017-06-20 23:28                   ` Steven Rostedt
2017-06-21  7:17                     ` Hannes Reinecke
2017-06-21 11:12                     ` Sergey Senozhatsky
2017-06-22 14:06                       ` Steven Rostedt [this message]
2017-06-23  5:43                         ` Sergey Senozhatsky
2017-06-23 13:09                           ` Steven Rostedt
2017-06-21 12:23                     ` Petr Mladek
2017-06-21 14:18                       ` Andrew Lunn
2017-06-23  8:46                         ` Petr Mladek
2017-06-21 16:09                       ` Andrew Lunn
2017-06-23  8:49                         ` Petr Mladek
2017-07-19  7:35                   ` David Woodhouse
2017-07-20  7:53                     ` Sergey Senozhatsky
2017-06-20 16:09         ` Sergey Senozhatsky
2017-06-19 16:26       ` Steven Rostedt
2017-06-19 16:35         ` Andrew Lunn
2017-06-24 11:14         ` Mauro Carvalho Chehab
2017-06-24 14:06           ` Andrew Lunn
2017-06-24 22:42             ` Steven Rostedt
2017-06-24 23:21               ` Andrew Lunn
2017-06-24 23:26                 ` Linus Torvalds
2017-06-24 23:40                   ` Steven Rostedt
2017-06-26 11:16                     ` Sergey Senozhatsky
2017-06-24 23:48                   ` Al Viro
2017-06-25  1:29                     ` Andrew Lunn
2017-06-25  2:41                       ` Linus Torvalds
2017-06-26  8:46                         ` Jiri Kosina
2017-07-19  7:59                           ` David Woodhouse
2017-06-20 15:56     ` Sergey Senozhatsky
2017-06-20 18:45     ` Daniel Vetter
2017-06-21  9:29       ` Petr Mladek
2017-06-21 10:15       ` Sergey Senozhatsky
2017-06-22 13:42         ` Daniel Vetter
2017-06-22 13:48           ` Daniel Vetter
2017-06-23  9:07             ` Bartlomiej Zolnierkiewicz
2017-06-27 13:06               ` Sergey Senozhatsky
2017-06-23  5:20           ` Sergey Senozhatsky
2017-06-19 23:46 ` Josh Triplett
2017-06-20  8:24   ` Arnd Bergmann
2017-06-20 14:36     ` Steven Rostedt
2017-06-20 15:26       ` Sergey Senozhatsky
2017-06-22 16:35 ` David Howells
2017-07-19  6:24 ` Sergey Senozhatsky
2017-07-19  6:25   ` Sergey Senozhatsky
2017-07-19  7:26     ` Daniel Vetter
2017-07-20  5:19       ` Sergey Senozhatsky
