linux-mm.kvack.org archive mirror
From: James Manning <jmm@computer.org>
To: Linux MM mailing list <linux-mm@kvack.org>
Subject: Re: 2.4: why is NR_GFPINDEX so large?
Date: Wed, 21 Jun 2000 17:22:45 -0400	[thread overview]
Message-ID: <20000621172245.A8507@bp6.sublogic.lan> (raw)
In-Reply-To: <20000621210620Z131176-21003+33@kanga.kvack.org>; from ttabi@interactivesi.com on Wed, Jun 21, 2000 at 03:59:51PM -0500

[Timur Tabi]
> ** Reply to message from Kanoj Sarcar <kanoj@google.engr.sgi.com> on Wed, 21
> Jun 2000 13:49:56 -0700 (PDT)
> > What I was warning you about is that if you shrink the array to the
> > exact size, there might be other data that comes on the same cacheline,
> > which might cause all kinds of interesting behavior (I think they call
> > this false cache sharing or some such thing).
> 
> Ok, I understand your explanation, but I have a hard time seeing how false
> cache sharing can be a bad thing.
> 
> If the cache sucks up a bunch of zeros that are never used, that's definitely
> wasted cache space.  How can that be any better than sucking up some real data
> that can be used?
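
It can actually be worse than wasted space: with false sharing, two
CPUs write to unrelated data that happens to sit on the same line, so
every write by one CPU invalidates the other CPU's copy and you pay
coherency traffic on every update.  A minimal userspace sketch
(everything here -- names, counts, layout -- is made up for
illustration; build with gcc -O2 fshare.c -lpthread):

#include <pthread.h>
#include <stdio.h>

static struct {
        volatile long a;        /* written only by thread 1 */
        volatile long b;        /* written only by thread 2; being
                                   adjacent, it almost certainly
                                   shares a's cache line */
} shared;

static void *bump(void *p)
{
        volatile long *ctr = p;
        long i;

        for (i = 0; i < 50000000L; i++)
                (*ctr)++;       /* each write steals the line back */
        return NULL;
}

int main(void)
{
        pthread_t t1, t2;

        pthread_create(&t1, NULL, bump, (void *) &shared.a);
        pthread_create(&t2, NULL, bump, (void *) &shared.b);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("a=%ld b=%ld\n", shared.a, shared.b);
        return 0;
}

Pad each counter out to its own cache line and the ping-ponging
disappears; the cost here is coherency traffic, not just sucked-up
zeros.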

The (possible) problem is that by decreasing the size of the array,
you're shifting data structures in memory and therefore shifting
their placement in the cache.  Since caches are organized as sets of
cache lines (an N-way set-associative cache has N lines in each set),
we may have shifted some high-traffic cachelines into the same set as
this structure, where they now compete for the same N ways.  We may
just as easily have made the situation better; it's hard to tell
without real data on cache behavior (something I'm working on now,
but it's going slowly).
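
As a back-of-the-envelope illustration (the geometry here is entirely
hypothetical: 32-byte lines, 512 sets, i.e. a 32K 2-way cache), the
set an address maps to is just a slice of its address bits, so moving
a structure in memory moves which lines it competes with:

#include <stdio.h>

#define LINE_SIZE       32      /* hypothetical */
#define NUM_SETS        512     /* hypothetical: 32K / (2 * 32) */

/* which set of the cache an address falls in */
static unsigned long cache_set(unsigned long addr)
{
        return (addr / LINE_SIZE) % NUM_SETS;
}

int main(void)
{
        unsigned long addr = 0xc0123480UL;     /* made-up address */

        printf("original layout: set %lu\n", cache_set(addr));
        printf("shifted down 64: set %lu\n", cache_set(addr - 64));
        return 0;
}

Once more than two hot lines land in one set of a 2-way cache they
evict each other while the rest of the cache sits idle, which is how
a shift can hurt (or help) without the working set changing at all.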

Of course, since gcc is the blessed compiler, we can specify
structure alignments to try to improve the situation, and page
coloring may help further down the road as well.
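
For the alignment part, something like this (the struct and the
32-byte figure are invented for illustration; the attribute itself is
stock gcc):

/* Pin a hot structure to a cache line boundary, assuming
 * 32-byte lines.  gcc also rounds sizeof up to a multiple of
 * the alignment, so nothing else can share the structure's
 * lines. */
struct hot_stats {
        unsigned long hits;
        unsigned long misses;
} __attribute__((aligned(32)));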

James
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux.eu.org/Linux-MM/

Thread overview: 24+ messages
2000-06-21 19:48 Timur Tabi
2000-06-21 19:56 ` Kanoj Sarcar
2000-06-21 19:57   ` Timur Tabi
2000-06-21 20:23     ` Puppetmaster
2000-06-21 20:37       ` Timur Tabi
2000-06-21 20:37     ` Kanoj Sarcar
2000-06-21 20:41       ` Timur Tabi
2000-06-21 20:49         ` Kanoj Sarcar
2000-06-21 20:59           ` Timur Tabi
2000-06-21 21:10             ` Kanoj Sarcar
2000-06-21 21:28               ` Timur Tabi
2000-06-21 21:41                 ` Kanoj Sarcar
2000-06-21 21:43                   ` Timur Tabi
2000-06-22 19:26                 ` Andrea Arcangeli
2000-06-22 19:51                   ` Jamie Lokier
2000-06-23 17:41                     ` Andrea Arcangeli
2000-06-23 17:52                       ` Jamie Lokier
2000-06-23 18:02                         ` Andrea Arcangeli
2000-06-23 18:03                           ` Andrea Arcangeli
2000-06-22 20:22                   ` Kanoj Sarcar
2000-06-23 18:11                     ` Andrea Arcangeli
2000-06-21 21:22             ` James Manning [this message]
2000-06-21 21:24             ` Juan J. Quintela
2000-06-21 21:15 frankeh

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save this message as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20000621172245.A8507@bp6.sublogic.lan \
    --to=jmm@computer.org \
    --cc=linux-mm@kvack.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

Be sure your reply has a Subject: header at the top and a blank line
before the message body.