From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: Re: State of the Page (August 2022)
Date: Fri, 12 Aug 2022 17:33:56 +0300
Message-ID: <20220812143356.4kv5cycwbcy2t7ul@box.shutemov.name>
In-Reply-To: <YvZW/exP02XceTVl@casper.infradead.org>

On Fri, Aug 12, 2022 at 02:34:53PM +0100, Matthew Wilcox wrote:
> On Fri, Aug 12, 2022 at 01:16:39PM +0300, Kirill A. Shutemov wrote:
> > On Thu, Aug 11, 2022 at 10:31:21PM +0100, Matthew Wilcox wrote:
> > > ==============================
> > > State Of The Page, August 2022
> > > ==============================
> > > 
> > > I thought I'd write down where we are with struct page and where
> > > we're going, just to make sure we're all (still?) pulling in a similar
> > > direction.
> > > 
> > > Destination
> > > ===========
> > > 
> > > For some users, the size of struct page is simply too large.  At 64
> > > bytes per 4KiB page, memmap occupies 1.6% of memory.  If we can get
> > > struct page down to an 8 byte tagged pointer, it will be 0.2% of memory,
> > > which is an acceptable overhead.
> > 
> > Right. This is attractive. But it brings the cost of indirection.
> 
> It does, but it also crams 8 pages into a single cacheline instead of
> occupying one cacheline per page.

If you really need info about these pages and have to follow their
memdescs, it is likely to be 9 cache lines scattered across memory
instead of 8 cache lines next to each other in the same page.

And it is going to be two cache lines instead of one if we need info
about a single page. I think that is the most common case.
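
To make sure we are talking about the same thing, here is roughly how I
picture the lookup. All names are invented, and this is just a userspace
mock of the idea, not anything from your proposal:

	#include <stdint.h>

	/* The entire memmap entry for one page: an 8-byte tagged pointer. */
	typedef uintptr_t memmap_entry_t;

	enum memdesc_type {
		MEMDESC_FOLIO = 1,
		MEMDESC_SLAB  = 2,
		/* ... */
	};

	#define MEMDESC_TYPE_MASK 0xfUL

	static inline enum memdesc_type entry_type(memmap_entry_t e)
	{
		/* The type tag lives in the entry itself, so this costs
		 * only the first cache line. */
		return (enum memdesc_type)(e & MEMDESC_TYPE_MASK);
	}

	static inline void *entry_memdesc(memmap_entry_t e)
	{
		/* Any further state requires this dereference: the second
		 * (or, for 8 adjacent pages, the ninth) cache line. */
		return (void *)(e & ~MEMDESC_TYPE_MASK);
	}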

Initially, I thought we could offset the cost by caching memdescs instead
of struct page/folio pointers. For example, the page cache could store
memdescs directly, but that would require a memdesc_to_pfn(), which is
not possible unless we store the pfn explicitly in the memdesc.
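
For reference, today the reverse direction is free because the memmap is
a flat array. A sketch, with the vmemmap case in mind:

	struct page { unsigned long flags; /* ... 64 bytes today */ };
	extern struct page *vmemmap;	/* base of the flat memmap array */

	static inline unsigned long page_to_pfn(const struct page *page)
	{
		/* Pure pointer arithmetic; nothing stored per page. */
		return (unsigned long)(page - vmemmap);
	}

A memdesc would be an ordinary allocation shared by all pages it
describes, so its address encodes nothing about physical placement. A
hypothetical memdesc_to_pfn() only works if we pay for it explicitly:

	struct memdesc {
		unsigned long pfn;	/* has to be stored explicitly */
		/* ... type-specific state ... */
	};

	static inline unsigned long memdesc_to_pfn(const struct memdesc *md)
	{
		return md->pfn;
	}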

I don't want to be a buzzkill, I like the idea a lot, but abstractions
are often costly. Getting this upstream without noticeable performance
regressions is going to be a challenge.

> > It can be especially painful for physical memory scanning. I guess we can
> > derive some info from the memdesc type itself, like whether it can be
> > movable. But it still looks like an expensive change.
> 
> I just don't think of physical memory scanning as something we do
> often, or in a performance-sensitive path.  I'm OK with slowing down
> kcompactd if it makes walking the LRU list faster.
> 
> > Do you have any estimate of how much CPU time we will pay to reduce
> > memory (and cache) overhead? RAM sizes tend to grow faster than IPC.
> > We need to make sure it is the right direction.
> 
> I don't.  I've heard colourful metaphors from the hyperscale crowd about
> how many more VMs they could sell, usually in terms of putting pallets
> of money in the parking lot and setting them on fire.  But IPC isn't the
> right metric either; CPU performance is all about cache misses these days.

As I said above, I don't expect the new scheme to be cache-friendly
either.

-- 
  Kiryl Shutsemau / Kirill A. Shutemov

