From: Nick Piggin <nickpiggin@yahoo.com.au>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Mel Gorman <mel@csn.ul.ie>, Eric Munson <ebmunson@us.ibm.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linuxppc-dev@ozlabs.org,
	libhugetlbfs-devel@lists.sourceforge.net
Subject: Re: [RFC] [PATCH 0/5 V2] Huge page backed user-space stacks
Date: Thu, 31 Jul 2008 16:26:15 +1000
Message-ID: <200807311626.15709.nickpiggin@yahoo.com.au>
In-Reply-To: <20080730231428.a7bdcfa7.akpm@linux-foundation.org>

On Thursday 31 July 2008 16:14, Andrew Morton wrote:
> On Thu, 31 Jul 2008 16:04:14 +1000 Nick Piggin <nickpiggin@yahoo.com.au> wrote:
> > > Do we expect that this change will be replicated in other
> > > memory-intensive apps?  (I do).
> >
> > Such as what? It would be nice to see some numbers from an HPC, Java,
> > or DBMS workload using this. Not that I dispute that it will help some
> > cases, but I guess 10% (or 20% for ppc) is getting toward the best
> > case, short of a specifically written TLB thrasher.
>
> I didn't realise that STREAM was using vast amounts of automatic memory.
> I'd assumed that it was using sane amounts of stack, but the stack TLB
> slots were getting zapped by all the heap-memory activity.  Oh well.

An easy mistake to make, because that's probably how STREAM would normally
work. I think what Mel did was modify the STREAM kernel so that it
operates on arrays of stack memory.
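
For illustration, a minimal sketch of what such a modification might look
like -- the array size and the triad kernel here are my own guesses, not
Mel's actual change:

  /* STREAM-style triad on automatic (stack) arrays rather than the
   * usual static/heap arrays, so the memory traffic goes through the
   * stack mapping.  N is a guess, sized so three arrays fit within a
   * typical 8 MiB (ulimit -s) stack limit. */
  #include <stddef.h>

  #define N (1024 * 1024 / sizeof(double))   /* three arrays, ~3 MiB */

  double stack_triad(void)
  {
          double a[N], b[N], c[N];           /* automatic: stack memory */
          const double scalar = 3.0;
          size_t i;

          for (i = 0; i < N; i++) {
                  b[i] = 1.0;
                  c[i] = 2.0;
          }
          for (i = 0; i < N; i++)
                  a[i] = b[i] + scalar * c[i];

          return a[N / 2];                   /* keep the result live */
  }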


> I guess that effect is still there, but smaller.

I imagine it should be, unless you're using a CPU with separate TLBs for
small and huge pages and your large data set is mapped with huge pages,
in which case you might now introduce *new* TLB contention between the
stack and the dataset :)

Also, interestingly, I have actually seen some CPUs whose memory
operations get significantly slower when operating on large pages than
on small ones (in the case where there is full TLB coverage for both
sizes). This would make sense if the CPU only implements a fast L1 TLB
for small pages.
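
As a toy illustration of how one might check that on a given CPU --
every setup detail below (2 MiB huge pages, a hugetlbfs mount at
/mnt/huge) is my own assumption, not something from this thread -- time
a data-dependent walk over a region small enough to be fully covered by
both TLBs, mapped once with small pages and once with huge pages:

  /* Toy probe for the case described above; compile with -O2 -lrt. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <time.h>

  #define MAP_LEN (4UL * 1024 * 1024)  /* multiple of the huge page size */
  #define SPAN    (128UL * 1024)       /* full TLB coverage either way:
                                          32 small pages or 1 huge page */
  #define ITERS   (64UL * 1024 * 1024)

  static double walk(volatile unsigned char *p)
  {
          struct timespec t0, t1;
          unsigned long idx = 0, i;

          clock_gettime(CLOCK_MONOTONIC, &t0);
          /* Data-dependent pseudo-random walk: each load feeds the next
           * address, so per-load translation latency is fully exposed. */
          for (i = 0; i < ITERS; i++)
                  idx = (idx * 1103515245UL + p[idx] + 12345UL) % SPAN;
          clock_gettime(CLOCK_MONOTONIC, &t1);
          return (t1.tv_sec - t0.tv_sec) +
                 (t1.tv_nsec - t0.tv_nsec) / 1e9;
  }

  int main(void)
  {
          int fd = open("/mnt/huge/probe", O_CREAT | O_RDWR, 0600);
          unsigned char *small = mmap(NULL, MAP_LEN,
                                      PROT_READ | PROT_WRITE,
                                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          unsigned char *huge = mmap(NULL, MAP_LEN,
                                     PROT_READ | PROT_WRITE,
                                     MAP_SHARED, fd, 0);

          if (fd < 0 || small == MAP_FAILED || huge == MAP_FAILED) {
                  perror("setup");
                  return 1;
          }
          memset(small, 2, MAP_LEN);   /* fault everything in before   */
          memset(huge, 2, MAP_LEN);    /* timing; value 2 feeds the walk */
          printf("small pages: %.3f s\nhuge pages:  %.3f s\n",
                 walk(small), walk(huge));
          return 0;
  }

If huge-page translations are only served from a slower TLB level, the
huge-page run should come out measurably slower even though both
mappings fit comfortably in cache and TLB.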

So for the vast majority of workloads, where stacks are relatively small
(or slowly changing) and relatively hot, I suspect this could easily show
no benefit at best and a slowdown at worst.

But I'm not saying that as a reason not to merge it -- this is no
different from any other hugepage allocation, and as usual huge pages
have to be used selectively, where they help... I just wonder exactly
where huge stacks will help.


> I agree that few real-world apps are likely to see gains of this
> order.  More benchmarks, please :)

Would be nice, if only out of morbid curiosity :)
