linux-mm.kvack.org archive mirror
From: Nick Piggin <nickpiggin@yahoo.com.au>
To: linux-kernel <linux-kernel@vger.kernel.org>,
	Linux Memory Management <linux-mm@kvack.org>
Subject: VFS scalability (was: [rfc] lockless pagecache)
Date: Mon, 27 Jun 2005 16:43:32 +1000	[thread overview]
Message-ID: <42BFA014.9090604@yahoo.com.au> (raw)
In-Reply-To: <42BF9CD1.2030102@yahoo.com.au>

Just an interesting aside: when first testing the patch I was
using read(2) instead of nopage faults. I ran into some surprising
results there which I don't have time to follow up at the moment -
it might be worth investigating if someone has the time, regardless
of the state of the lockless pagecache work.
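
To make the read(2) variant concrete, it was essentially the sort
of loop sketched below. This is only a rough illustration - the
thread count, buffer size and iteration count here are made up,
and the real setup is the one described in the parent post:

/*
 * Rough sketch of a parallel pagecache-read microbenchmark: N
 * threads repeatedly read() the same (pagecache-hot) file.
 * Illustrative only; build with: cc -O2 -pthread parallel_read.c
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define NTHREADS    8
#define BUFSIZE     (64 * 1024)
#define ITERATIONS  10000

static void *reader(void *arg)
{
    const char *path = arg;
    char *buf = malloc(BUFSIZE);
    int fd = open(path, O_RDONLY);
    int i;

    if (fd < 0 || !buf) {
        perror("reader setup");
        return NULL;
    }
    for (i = 0; i < ITERATIONS; i++) {
        /* rewind and re-read so every pass hits the pagecache */
        lseek(fd, 0, SEEK_SET);
        while (read(fd, buf, BUFSIZE) > 0)
            ;
    }
    close(fd);
    free(buf);
    return NULL;
}

int main(int argc, char **argv)
{
    pthread_t tid[NTHREADS];
    int i;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    for (i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, reader, argv[1]);
    for (i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}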

For the parallel workload as described in the parent post (but
read instead of fault), the vanilla kernel profile looks like
this:

  74453 total                                      0.0121
  25839 update_atime                              44.8594
  19595 _read_unlock_irq                         306.1719
  13025 do_generic_mapping_read                    5.5758
   9374 rw_verify_area                            29.2937
   1739 ia64_pal_call_static                       9.0573
   1567 default_idle                               4.0807
   1114 __copy_user                                0.4704
    848 _spin_lock                                 8.8333
    786 ia64_spinlock_contention                   8.1875
    246 ia64_save_scratch_fpregs                   3.8438
    187 ia64_load_scratch_fpregs                   2.9219
     16 file_read_actor                            0.0263
     15 fsys_bubble_down                           0.0586
     12 vfs_read                                   0.0170

This is with the filesystem mounted with noatime, so I can't work
out why update_atime is so high on the list. I suspect a false
sharing issue with some other fields.
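
From memory, the path in kernels of that era looks something like
the sketch below (reconstructed, not verbatim source, so details
may be off): do_generic_mapping_read() finishes by calling
file_accessed() for every read, file_accessed() only tests the
per-file O_NOATIME flag, and the mount-level noatime check only
happens inside update_atime() itself. So the function is still
entered on every read and still reads inode and superblock flag
words before bailing out - which is exactly where false sharing
with fields written by other CPUs would get charged to
update_atime.

#include <linux/fs.h>      /* struct inode, struct file, IS_NOATIME() */
#include <linux/time.h>    /* struct timespec, timespec_equal() */

void update_atime(struct inode *inode);   /* declared in linux/fs.h */

/* do_generic_mapping_read() ends with file_accessed(filp); only the
 * per-file O_NOATIME flag is checked before calling update_atime(). */
static inline void file_accessed(struct file *file)
{
    if (!(file->f_flags & O_NOATIME))
        update_atime(file->f_dentry->d_inode);
}

/* ...so on a noatime mount we still get here for every read, and the
 * early-exit tests read inode and superblock fields before bailing. */
void update_atime(struct inode *inode)
{
    struct timespec now;

    if (IS_NOATIME(inode))
        return;
    if (IS_NODIRATIME(inode) && S_ISDIR(inode->i_mode))
        return;
    if (IS_RDONLY(inode))
        return;

    now = current_fs_time(inode->i_sb);
    if (!timespec_equal(&inode->i_atime, &now)) {
        inode->i_atime = now;
        mark_inode_dirty_sync(inode);
    }
}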

-- 
SUSE Labs, Novell Inc.


Thread overview: 56+ messages
2005-06-27  6:29 [rfc] lockless pagecache Nick Piggin
2005-06-27  6:32 ` [patch 1] mm: PG_free flag Nick Piggin
2005-06-27  6:32   ` [patch 2] mm: speculative get_page Nick Piggin
2005-06-27  6:33     ` [patch 3] radix tree: lookup_slot Nick Piggin
2005-06-27  6:34       ` [patch 4] radix tree: lockless readside Nick Piggin
2005-06-27  6:34         ` [patch 5] mm: lockless pagecache lookups Nick Piggin
2005-06-27  6:35           ` [patch 6] mm: spinlock tree_lock Nick Piggin
2005-06-27 14:12     ` [patch 2] mm: speculative get_page William Lee Irwin III
2005-06-28  0:03       ` Nick Piggin
2005-06-28  0:56         ` Nick Piggin
2005-06-28  1:22         ` William Lee Irwin III
2005-06-28  1:42           ` Nick Piggin
2005-06-28  4:06             ` William Lee Irwin III
2005-06-28  4:50               ` Nick Piggin
2005-06-28  5:08                 ` [patch 2] mm: speculative get_page, " David S. Miller, Nick Piggin
2005-06-28  5:34                   ` Nick Piggin
2005-06-28 14:19                   ` William Lee Irwin III
2005-06-28 15:43                     ` Nick Piggin
2005-06-28 17:01                       ` Christoph Lameter
2005-06-28 23:10                         ` Nick Piggin
2005-06-28 21:32                   ` Jesse Barnes
2005-06-28 22:17                     ` Christoph Lameter
2005-06-28 12:45     ` Andy Whitcroft
2005-06-28 13:16       ` Nick Piggin
2005-06-28 16:02         ` Dave Hansen
2005-06-29 16:31           ` Pavel Machek
2005-06-29 18:43             ` Dave Hansen
2005-06-29 21:22               ` Pavel Machek
2005-06-29 16:31         ` Pavel Machek
2005-06-27  6:43 ` Nick Piggin [this message]
2005-06-27  7:13   ` VFS scalability (was: [rfc] lockless pagecache) Andi Kleen
2005-06-27  7:33     ` VFS scalability Nick Piggin
2005-06-27  7:44       ` Andi Kleen
2005-06-27  8:03         ` Nick Piggin
2005-06-27  7:46 ` [rfc] lockless pagecache Andrew Morton
2005-06-27  8:02   ` Nick Piggin
2005-06-27  8:15     ` Andrew Morton
2005-06-27  8:28       ` Nick Piggin
2005-06-27  8:56     ` Lincoln Dale
2005-06-27  9:04       ` Nick Piggin
2005-06-27 18:14         ` Chen, Kenneth W
2005-06-27 18:50           ` Badari Pulavarty
2005-06-27 19:05             ` Chen, Kenneth W
2005-06-27 19:22               ` Christoph Lameter
2005-06-27 19:42                 ` Chen, Kenneth W
2005-07-05 15:11                   ` Sonny Rao
2005-07-05 15:31                     ` Martin J. Bligh
2005-07-05 15:37                       ` Sonny Rao
2005-06-27 13:17     ` Benjamin LaHaise
2005-06-28  0:32       ` Nick Piggin
2005-06-28  1:26         ` William Lee Irwin III
2005-06-27 14:08   ` Martin J. Bligh
2005-06-27 17:49   ` Christoph Lameter
2005-06-29 10:49 ` Hirokazu Takahashi
2005-06-29 11:38   ` Nick Piggin
2005-06-30  3:32     ` Hirokazu Takahashi

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=42BFA014.9090604@yahoo.com.au \
    --to=nickpiggin@yahoo.com.au \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank
line before the message body.