linux-mm.kvack.org archive mirror
From: Hirokazu Takahashi <taka@valinux.co.jp>
To: nickpiggin@yahoo.com.au
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [rfc] lockless pagecache
Date: Thu, 30 Jun 2005 12:32:21 +0900 (JST)	[thread overview]
Message-ID: <20050630.123221.41649450.taka@valinux.co.jp> (raw)
In-Reply-To: <42C28846.60702@yahoo.com.au>

Hi,

> > Your patches improve the performance if lots of processes are
> > accessing the same file at the same time, right?
> > 
> 
> Yes.
> 
> > If so, I think we could introduce multiple radix-trees instead,
> > letting each inode hold two or more radix-trees so as to avoid
> > the race condition when traversing the trees.
> > Some decision mechanism would be needed for which radix-tree each
> > page should go in and how many radix-trees to prepare.
> > 
> > It seems to be simple and effective.
> > 
> > What do you think?
> > 
> 
> Sure it is a possibility.
> 
> I don't think you could call it effective like a completely
> lockless version is effective. You might take more locks during
> gang lookups, you may have a lot of ugly and not-always-working
> heuristics (hey, my app goes really fast if it spreads accesses
> over a 1GB file, but falls on its face with a 10MB one). You
> might get increased cache footprints for common operations.

I guess that, for most practical cases, it would be enough simply to
split a huge file into equal-sized pieces and put each piece in its
associated radix-tree.
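
Roughly what I have in mind is something like the sketch below
(untested, for illustration only; NR_PAGE_TREES, PAGES_PER_CHUNK, the
split_mapping structure and its fields are all made-up names, not from
any existing patch):

/*
 * Illustration only: give each inode a small array of radix-trees
 * (each with its own lock) and let a page's file offset pick which
 * tree it lives in, so that consecutive chunks of a large file are
 * spread over different trees and different locks.
 */
#include <linux/types.h>
#include <linux/spinlock.h>
#include <linux/radix-tree.h>

#define NR_PAGE_TREES   8       /* trees per inode */
#define PAGES_PER_CHUNK 1024    /* 4MB chunks with 4KB pages */

struct split_mapping {
        struct radix_tree_root  page_trees[NR_PAGE_TREES];
        spinlock_t              tree_locks[NR_PAGE_TREES];
};

static inline unsigned page_tree_nr(pgoff_t index)
{
        /* consecutive PAGES_PER_CHUNK pages share one tree */
        return (index / PAGES_PER_CHUNK) % NR_PAGE_TREES;
}

static inline struct radix_tree_root *
page_tree_for(struct split_mapping *m, pgoff_t index)
{
        return &m->page_trees[page_tree_nr(index)];
}

A lookup or insertion would then take only
tree_locks[page_tree_nr(index)], so processes working on different
parts of the file would not contend on a single tree_lock.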

That said, I also find your approach interesting.

> I mainly did the patches for a bit of fun rather than to address
> a particular problem with a real workload and as such I won't be
> pushing to get them in the kernel for the time being.

I see.

If you don't mind, I'd like to propose another idea: a seqlock seems
like it would make your code much simpler, though I'm not sure whether
it would work well under heavy load. The code would also become more
stable without the tricks, which otherwise make the VM hard to enhance
in the future.
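
To be concrete, this is roughly what I am imagining (an untested
sketch only; mapping->tree_seqlock would be a new field, and as your
speculative get_page patch shows, taking the page reference here is
only safe if the page cannot be freed and reused under us):

#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/radix-tree.h>
#include <linux/seqlock.h>

/*
 * Sketch of a seqlock-protected lookup.  Writers (adding or removing
 * a page) would take write_seqlock(&mapping->tree_seqlock) around
 * their update, so a reader simply retries if it raced with one.
 */
struct page *find_get_page_seq(struct address_space *mapping,
                               pgoff_t index)
{
        struct page *page;
        unsigned seq;

again:
        seq = read_seqbegin(&mapping->tree_seqlock);
        page = radix_tree_lookup(&mapping->page_tree, index);
        if (page)
                page_cache_get(page);   /* would still need something
                                         * like your speculative
                                         * get_page if the page could
                                         * be freed meanwhile */
        if (read_seqretry(&mapping->tree_seqlock, seq)) {
                if (page)
                        page_cache_release(page);
                goto again;             /* the tree changed, retry */
        }
        return page;
}

The downside is that a steady stream of insertions or removals keeps
bumping the sequence counter and forces readers to retry, which is why
I am not sure how it behaves under heavy load.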

Thanks,
Hirokazu Takahashi.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: aart@kvack.org


Thread overview: 56+ messages
2005-06-27  6:29 Nick Piggin
2005-06-27  6:32 ` [patch 1] mm: PG_free flag Nick Piggin
2005-06-27  6:32   ` [patch 2] mm: speculative get_page Nick Piggin
2005-06-27  6:33     ` [patch 3] radix tree: lookup_slot Nick Piggin
2005-06-27  6:34       ` [patch 4] radix tree: lockless readside Nick Piggin
2005-06-27  6:34         ` [patch 5] mm: lockless pagecache lookups Nick Piggin
2005-06-27  6:35           ` [patch 6] mm: spinlock tree_lock Nick Piggin
2005-06-27 14:12     ` [patch 2] mm: speculative get_page William Lee Irwin III
2005-06-28  0:03       ` Nick Piggin
2005-06-28  0:56         ` Nick Piggin
2005-06-28  1:22         ` William Lee Irwin III
2005-06-28  1:42           ` Nick Piggin
2005-06-28  4:06             ` William Lee Irwin III
2005-06-28  4:50               ` Nick Piggin
2005-06-28  5:08                 ` [patch 2] mm: speculative get_page, " David S. Miller, Nick Piggin
2005-06-28  5:34                   ` Nick Piggin
2005-06-28 14:19                   ` William Lee Irwin III
2005-06-28 15:43                     ` Nick Piggin
2005-06-28 17:01                       ` Christoph Lameter
2005-06-28 23:10                         ` Nick Piggin
2005-06-28 21:32                   ` Jesse Barnes
2005-06-28 22:17                     ` Christoph Lameter
2005-06-28 12:45     ` Andy Whitcroft
2005-06-28 13:16       ` Nick Piggin
2005-06-28 16:02         ` Dave Hansen
2005-06-29 16:31           ` Pavel Machek
2005-06-29 18:43             ` Dave Hansen
2005-06-29 21:22               ` Pavel Machek
2005-06-29 16:31         ` Pavel Machek
2005-06-27  6:43 ` VFS scalability (was: [rfc] lockless pagecache) Nick Piggin
2005-06-27  7:13   ` Andi Kleen
2005-06-27  7:33     ` VFS scalability Nick Piggin
2005-06-27  7:44       ` Andi Kleen
2005-06-27  8:03         ` Nick Piggin
2005-06-27  7:46 ` [rfc] lockless pagecache Andrew Morton
2005-06-27  8:02   ` Nick Piggin
2005-06-27  8:15     ` Andrew Morton
2005-06-27  8:28       ` Nick Piggin
2005-06-27  8:56     ` Lincoln Dale
2005-06-27  9:04       ` Nick Piggin
2005-06-27 18:14         ` Chen, Kenneth W
2005-06-27 18:50           ` Badari Pulavarty
2005-06-27 19:05             ` Chen, Kenneth W
2005-06-27 19:22               ` Christoph Lameter
2005-06-27 19:42                 ` Chen, Kenneth W
2005-07-05 15:11                   ` Sonny Rao
2005-07-05 15:31                     ` Martin J. Bligh
2005-07-05 15:37                       ` Sonny Rao
2005-06-27 13:17     ` Benjamin LaHaise
2005-06-28  0:32       ` Nick Piggin
2005-06-28  1:26         ` William Lee Irwin III
2005-06-27 14:08   ` Martin J. Bligh
2005-06-27 17:49   ` Christoph Lameter
2005-06-29 10:49 ` Hirokazu Takahashi
2005-06-29 11:38   ` Nick Piggin
2005-06-30  3:32     ` Hirokazu Takahashi [this message]
