linux-mm.kvack.org archive mirror
From: "Cédric Villemain" <cedric@2ndquadrant.com>
To: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org
Subject: Re: mincore() & fincore()
Date: Sat, 27 Jul 2013 22:08:04 +0200	[thread overview]
Message-ID: <201307272208.14354.cedric@2ndquadrant.com> (raw)
In-Reply-To: <20130726015534.GA24060@hacker.(null)>


> >> Johannes, I added you in CC because you're the last one who proposed
> >> something. Can I update your patch with the previous suggestions from
> >> reviewers?
> >
> >Absolutely!

OK.

> >> I'm also asking for feedback in this area; other ideas are very
> >> welcome.
> >
> >Andrew didn't like the idea of the one byte per covered page
> >representation, but all proposals to express continuous ranges in a
> >more compact fashion had worse worst cases and a much more involved
> >interface.
> 
> mincore utilizes a byte array, and the least significant bit is used to
> check whether the corresponding page is currently resident in memory. I
> don't know the history; what's the reason for not using a bitmap?
> >
> >I do wonder if we should model fincore() after mincore() and add a
> >separate syscall to query page cache coverage with statistical output
> >(x present [y dirty, z active, whatever] in specified area) rather
> >than describing individual pages or continuous chunks of pages in
> >address order.  That might leave us with better interfaces than trying
> >to integrate all of this into one arcane syscall.
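
For reference, the existing mincore() byte-per-page interface discussed above
looks roughly like this (a minimal sketch with a placeholder file name and
almost no error handling); each byte of the output vector describes one page
and only its least significant bit is defined:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("datafile", O_RDONLY);    /* placeholder path */
    struct stat st;
    long page = sysconf(_SC_PAGESIZE);

    if (fd < 0 || fstat(fd, &st) < 0 || st.st_size == 0)
        return 1;

    size_t pages = (st.st_size + page - 1) / page;

    /* Map the file and ask the kernel for per-page residency. */
    void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    unsigned char *vec = malloc(pages);

    if (map == MAP_FAILED || !vec || mincore(map, st.st_size, vec) < 0)
        return 1;

    /* Only the least significant bit of each byte is defined:
     * 1 means the page is resident in the page cache. */
    size_t resident = 0;
    for (size_t i = 0; i < pages; i++)
        resident += vec[i] & 1;
    printf("%zu of %zu pages resident\n", resident, pages);

    free(vec);
    munmap(map, st.st_size);
    close(fd);
    return 0;
}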

Such a statistical interface should work too. My tool pgfincore (for
PostgreSQL) also outputs the number of groups of contiguous in-memory pages,
which gives a quick idea of the access pattern: a large number of groups
suggests random access, a few groups suggest sequential access. So for this
usage I don't really need the full vector and page-level information, but
some statistics are needed to make such summaries useful.
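
That group count can be derived in one pass over the same kind of residency
vector; roughly (an illustration only, not pgfincore's actual code):

/* Count runs of consecutive resident pages: many short runs suggest
 * random access, a few long runs suggest sequential access. */
static size_t count_resident_groups(const unsigned char *vec, size_t pages)
{
    size_t groups = 0;
    int in_group = 0;

    for (size_t i = 0; i < pages; i++) {
        if (vec[i] & 1) {
            if (!in_group)
                groups++;    /* a new contiguous group starts here */
            in_group = 1;
        } else {
            in_group = 0;
        }
    }
    return groups;
}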

However, another usage is to snapshot and restore in-memory pages, which is
useful in at least two scenarios. One is a simple server restart: PostgreSQL
gets back to full speed faster when you can restore the previous cache
content. The other is similar: switching over to a previously 'cold' server,
or preparing a server to receive traffic.
For those use cases, it is interesting to have the per-page details.
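
To illustrate the snapshot/restore idea (again only a sketch, not pgfincore's
actual implementation): a residency vector saved before a restart can be
replayed with posix_fadvise(POSIX_FADV_WILLNEED), which asks the kernel to
read the listed ranges back into the page cache:

#include <fcntl.h>
#include <unistd.h>

/* For every run of pages that was resident at snapshot time,
 * hint the kernel to read it back in. */
static void restore_cache(int fd, const unsigned char *vec,
                          size_t pages, long page_size)
{
    size_t i = 0;

    while (i < pages) {
        if (!(vec[i] & 1)) {
            i++;
            continue;
        }
        size_t start = i;
        while (i < pages && (vec[i] & 1))
            i++;
        /* Asynchronous readahead hint for the whole run. */
        posix_fadvise(fd, (off_t)start * page_size,
                      (off_t)(i - start) * page_size,
                      POSIX_FADV_WILLNEED);
    }
}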

-- 
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Développement, Expertise et Formation



Thread overview: 6+ messages
2013-07-25 14:58 Cédric Villemain
2013-07-25 15:07 ` Cédric Villemain
2013-07-25 15:32   ` Johannes Weiner
2013-07-26  1:55     ` Wanpeng Li
2013-07-27 20:08       ` Cédric Villemain [this message]
