From: Nick Piggin <npiggin@suse.de>
To: Jens Axboe <jens.axboe@oracle.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>,
Andrew Morton <akpm@linux-foundation.org>,
Linux Memory Management List <linux-mm@kvack.org>
Subject: Re: [patch] splice mmap_sem deadlock
Date: Sun, 30 Sep 2007 14:07:01 +0200
Message-ID: <20070930120701.GC7697@wotan.suse.de>
In-Reply-To: <20070930064646.GF11717@kernel.dk>

On Sun, Sep 30, 2007 at 08:46:46AM +0200, Jens Axboe wrote:
> On Sat, Sep 29 2007, Nick Piggin wrote:
> > On Fri, Sep 28, 2007 at 01:02:50PM -0700, Linus Torvalds wrote:
> > >
> > >
> > > On Fri, 28 Sep 2007, Jens Axboe wrote:
> > > >
> > > > Hmm, part of me doesn't like this patch, since we now end up beating on
> > > > mmap_sem for each part of the vec. It's fine for a stable patch, but how
> > > > about
> > > >
> > > > - prefaulting the iovec
> > > > - using __get_user()
> > > > - only dropping/regrabbing the lock if we have to fault
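
(Concretely, the prefault idea would look something like the sketch
below -- untested, using fault_in_pages_readable() from linux/pagemap.h.
Note that prefaulting only makes a later fault unlikely, it cannot rule
one out, which is why the drop/regrab fallback in the last point is
still needed:)

	/*
	 * Fault the iovec array in up front, so the __get_user()
	 * calls done under mmap_sem are unlikely to hit a missing
	 * page. A PIPE_BUFFERS-sized iovec array spans at most two
	 * pages, which is all fault_in_pages_readable() touches.
	 */
	if (fault_in_pages_readable((const char __user *)iov,
				    nr_segs * sizeof(struct iovec)))
		return -EFAULT;

	down_read(&current->mm->mmap_sem);
	/* ... __get_user() each base/len pair, then get_user_pages() ... */
	up_read(&current->mm->mmap_sem);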
> > >
> > > "__get_user()" doesn't help any. But we should do the same thing we do for
> > > generic_file_write(), or whatever - probe it while in an atomic region.
> > >
> > > So something like the appended might work. Untested.
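
(The appended patch isn't quoted above, but the "probe it while in an
atomic region" pattern amounts to roughly the following -- an untested
sketch with a made-up helper name, along the lines of what the
generic_file_write() path does:)

static int copy_from_user_mmap_sem(void *dst, const void __user *src,
				   size_t n)
{
	int partial;

	/*
	 * Try the copy with pagefaults disabled while mmap_sem is
	 * held. __copy_from_user_inatomic() returns the number of
	 * bytes it could not copy.
	 */
	pagefault_disable();
	partial = __copy_from_user_inatomic(dst, src, n);
	pagefault_enable();

	/*
	 * Only if the atomic copy faulted: drop mmap_sem, do a
	 * normal sleeping copy, then retake the lock.
	 */
	if (unlikely(partial)) {
		up_read(&current->mm->mmap_sem);
		partial = copy_from_user(dst, src, n);
		down_read(&current->mm->mmap_sem);
	}

	return partial ? -EFAULT : 0;
}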
> >
> > I got an idea for getting rid of mmap_sem from here completely. Which
> > is why I was looking at these callers in the first place.
> >
> > It would be really convenient and help me play with the idea if mmap_sem
> > is wrapped closely around get_user_pages where possible...
>
> Well, move it back there in your first patch? Not a big deal, surely :-)
>
> > If you're really worried about mmap_sem batching here, can you just
> > avoid this complexity and do all the get_user()s up-front, before taking
> > mmap_sem at all? You only have to save PIPE_BUFFERS number of
> > them.
>
> Sure, that is easily doable at the cost of some stack. I have other
> patches that grow PIPE_BUFFERS dynamically in the pipeline, so I'd
> prefer not to since that'll then turn into a dynamic allocation.
You already have much more PIPE_BUFFERS-sized stuff on the stack. If it
gets much bigger, you should dynamically allocate all of this anyway, no?
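
Something like this is all I mean -- rough and untested, with made-up
names (pin_user_iovec, uiov):

static int pin_user_iovec(const struct iovec __user *uiov,
			  unsigned long nr_segs)
{
	/* PIPE_BUFFERS is 16, so this is a modest stack cost */
	struct iovec iov_copy[PIPE_BUFFERS];

	if (nr_segs > PIPE_BUFFERS)
		nr_segs = PIPE_BUFFERS;

	/* do every user access up front, before taking mmap_sem */
	if (copy_from_user(iov_copy, uiov,
			   nr_segs * sizeof(struct iovec)))
		return -EFAULT;

	down_read(&current->mm->mmap_sem);
	/* ... get_user_pages() on each iov_copy[] segment ... */
	up_read(&current->mm->mmap_sem);

	return 0;
}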