linux-mm.kvack.org archive mirror
From: Shaohua Li <shaohua.li@intel.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: lkml <linux-kernel@vger.kernel.org>,
	linux-mm <linux-mm@kvack.org>, Andi Kleen <andi@firstfloor.org>,
	"Zhang, Yanmin" <yanmin.zhang@intel.com>,
	Hugh Dickins <hughd@google.com>
Subject: Re: [PATCH] shmem: reduce one time of locking in pagefault
Date: Wed, 7 Jul 2010 09:39:19 +0800	[thread overview]
Message-ID: <20100707013919.GA22097@sli10-desk.sh.intel.com> (raw)
In-Reply-To: <20100706183254.cf67e29e.akpm@linux-foundation.org>

On Wed, Jul 07, 2010 at 09:32:54AM +0800, Andrew Morton wrote:
> On Wed, 07 Jul 2010 09:15:46 +0800 Shaohua Li <shaohua.li@intel.com> wrote:
> 
> > I'm running a shmem pagefault test case (see attached file) on a 64-CPU
> > system. The profile shows shmem_inode_info->lock is heavily contended,
> > with 100% of CPU time spent trying to acquire the lock.
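
Concretely, the test amounts to many threads faulting in the pages of a
single shared shmem mapping in parallel. A rough sketch of such a test
(illustrative only, not the actual attached file):

/*
 * Illustrative sketch, not the attached test case.  All threads share
 * one MAP_SHARED|MAP_ANONYMOUS mapping, which is backed by a single
 * shmem inode, so every first touch goes through shmem_fault()/
 * shmem_getpage() and all threads hit the same shmem_inode_info->lock.
 */
#include <pthread.h>
#include <sys/mman.h>
#include <unistd.h>

#define NR_THREADS	64
#define CHUNK		(128UL << 20)	/* 128MB of faults per thread */

static char *map;

static void *worker(void *arg)
{
	char *p = map + (unsigned long)arg * CHUNK;
	unsigned long off, pagesz = getpagesize();

	for (off = 0; off < CHUNK; off += pagesz)
		p[off] = 1;		/* fault one shmem page in */
	return NULL;
}

int main(void)
{
	pthread_t tid[NR_THREADS];
	unsigned long i;

	map = mmap(NULL, NR_THREADS * CHUNK, PROT_READ | PROT_WRITE,
		   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (map == MAP_FAILED)
		return 1;
	for (i = 0; i < NR_THREADS; i++)
		pthread_create(&tid[i], NULL, worker, (void *)i);
	for (i = 0; i < NR_THREADS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}
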
> 
> I seem to remember complaining about that in 2002 ;) Faulting in a
> mapping of /dev/zero is just awful on a 4-way(!).
> 
> > In the pagefault (no swap) case,
> > shmem_getpage() takes the lock twice; the second acquisition is avoidable
> > if we preallocate a page, so we can drop one round of locking. That is
> > what the patch below does.
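
In sketch form the idea is the following (illustrative pseudo-code, not
the patch itself; the function names are from mm/shmem.c but the flow
is heavily condensed):

/* Before: two acquisitions per no-swap fault. */
spin_lock(&info->lock);
entry = shmem_swp_alloc(info, idx, sgp);	/* find/extend the index */
spin_unlock(&info->lock);			/* drop to allocate... */
filepage = shmem_alloc_page(gfp, info, idx);	/* ...since this may sleep */
spin_lock(&info->lock);				/* second acquisition */
/* recheck entry, add filepage to the pagecache, update accounting */
spin_unlock(&info->lock);

/* After: allocate the page before taking the lock, so the common
 * no-swap path holds info->lock only once. */
prealloc_page = shmem_alloc_page(gfp, info, idx);
spin_lock(&info->lock);
entry = shmem_swp_alloc(info, idx, sgp);
/* use prealloc_page directly, still under the same hold */
spin_unlock(&info->lock);
/* if prealloc_page went unused (e.g. we raced), free it afterwards */
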
> > 
> > The result of the test case:
> > 2.6.35-rc3: ~20s
> > 2.6.35-rc3 + patch: ~12s
> > so this is a ~40% improvement.
> > 
> > One might argue that we could have better locking for shmem. But even if
> > shmem were lockless, the pagefault path would soon have the pagecache lock
> > heavily contended, because shmem must add each new page to the pagecache.
> > So until we have better locking for the pagecache, improving shmem locking
> > doesn't buy much. I ran a similar pagefault test against a ramfs file, and
> > the result was ~10.5s.
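
The pagecache side serializes roughly like this (a condensed paraphrase
of add_to_page_cache_locked() in mm/filemap.c of this era, not a
verbatim quote):

/* Every page added to the pagecache goes through the mapping's
 * tree_lock, so a lockless shmem would only move the contention here.
 */
spin_lock_irq(&mapping->tree_lock);
error = radix_tree_insert(&mapping->page_tree, offset, page);
if (!error) {
	mapping->nrpages++;
	__inc_zone_page_state(page, NR_FILE_PAGES);
}
spin_unlock_irq(&mapping->tree_lock);
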
> > 
> > Signed-off-by: Shaohua Li <shaohua.li@intel.com>
> > 
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > index f65f840..c5f2939 100644
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
> 
> The patch doesn't make shmem_getpage() any clearer :(
> 
> shmem_inode_info.lock appears to be held too much.  Surely
> lookup_swap_cache() didn't need it (for example).
> 
> What data does shmem_inode_info.lock actually protect?
As far as I understand, it protects the shmem swp_entry index, which is
mostly there to support swap. It also protects some accounting. Without
swap, the lock could almost be removed entirely, as in tiny-shmem.
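
For reference, the struct looks roughly like this in this era
(include/linux/shmem_fs.h); the lock guards the i_direct/i_indirect
swp_entry index and the alloced/swapped counters:

struct shmem_inode_info {
	spinlock_t		lock;
	unsigned long		flags;
	unsigned long		alloced;	/* data pages alloced to file */
	unsigned long		swapped;	/* subtotal assigned to swap */
	unsigned long		next_index;	/* highest alloced index + 1 */
	struct shared_policy	policy;		/* NUMA memory alloc policy */
	struct page		*i_indirect;	/* top indirect blocks page */
	swp_entry_t		i_direct[SHMEM_NR_DIRECT]; /* first blocks */
	struct list_head	swaplist;	/* chain of maybes on swap */
	struct inode		vfs_inode;
};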

Thanks,
Shaohua



Thread overview: 6+ messages
2010-07-07  1:15 Shaohua Li
2010-07-07  1:32 ` Andrew Morton
2010-07-07  1:39   ` Shaohua Li [this message]
2010-07-09  1:28     ` Hugh Dickins
2010-07-09  1:13 ` Hugh Dickins
2010-07-09  2:52   ` Shaohua Li

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20100707013919.GA22097@sli10-desk.sh.intel.com \
    --to=shaohua.li@intel.com \
    --cc=akpm@linux-foundation.org \
    --cc=andi@firstfloor.org \
    --cc=hughd@google.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=yanmin.zhang@intel.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.