linux-mm.kvack.org archive mirror
From: John Hubbard <jhubbard@nvidia.com>
To: Waiman Long <longman@redhat.com>,
	Daniel Jordan <daniel.m.jordan@oracle.com>,
	linux-kernel@vger.kernel.org,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	Aaron Lu <aaron.lu@intel.com>,
	alex.kogan@oracle.com, akpm@linux-foundation.org,
	boqun.feng@gmail.com, brouer@redhat.com, dave.dice@oracle.com,
	Dhaval Giani <dhaval.giani@oracle.com>,
	ktkhai@virtuozzo.com, ldufour@linux.vnet.ibm.com,
	Pavel.Tatashin@microsoft.com, paulmck@linux.vnet.ibm.com,
	shady.issa@oracle.com, tariqt@mellanox.com, tglx@linutronix.de,
	tim.c.chen@intel.com, vbabka@suse.cz, yang.shi@linux.alibaba.com,
	shy828301@gmail.com, Huang Ying <ying.huang@intel.com>,
	subhra.mazumdar@oracle.com,
	Steven Sistare <steven.sistare@oracle.com>,
	jwadams@google.com, ashwinch@google.com, sqazi@google.com,
	Shakeel Butt <shakeelb@google.com>,
	walken@google.com, rientjes@google.com, junaids@google.com,
	Neha Agarwal <nehaagarwal@google.com>
Subject: Re: Plumbers 2018 - Performance and Scalability Microconference
Date: Mon, 10 Sep 2018 10:34:19 -0700	[thread overview]
Message-ID: <78fa0507-4789-415b-5b9c-18e3fcefebab@nvidia.com> (raw)
In-Reply-To: <20180910172011.GB3902@linux-r8p5>

On 9/10/18 10:20 AM, Davidlohr Bueso wrote:
> On Mon, 10 Sep 2018, Waiman Long wrote:
>> On 09/08/2018 12:13 AM, John Hubbard wrote:
[...]
>>> It's also interesting that there are two main huge page systems (THP and Hugetlbfs), and I sometimes
>>> wonder the obvious thing to wonder: are these sufficiently different to warrant remaining separate,
>>> long-term?  Yes, I realize they're quite different in some ways, but still, one wonders. :)
>>
>> One major difference between hugetlbfs and THP is that the former has to
>> be explicitly managed by the applications that use it whereas the latter
>> is done automatically without the applications being aware that THP is
>> being used at all. Performance-wise, THP may or may not increase
>> application performance, depending on the exact memory access pattern,
>> though the chance is usually higher that an application will benefit
>> than suffer from it.
>>
>> If an application knows what it is doing, using hugetlbfs can boost
>> performance more than can ever be achieved with THP. Many large enterprise
>> applications, like Oracle DB, use hugetlbfs and explicitly disable
>> THP. So unless THP can improve its performance to a level that is
>> comparable to hugetlbfs, I don't see the latter going away.
> 
> Yep, there are a few non-trivial workloads out there that flat-out discourage
> THP, e.g. Redis, to avoid latency issues.
> 

Yes, the need for guaranteed, available-now huge pages in some cases is 
understood. That's not quite the same as saying that there have to be two different
subsystems, though. Nor does it even necessarily imply that the pool has to be
reserved exactly the way hugetlbfs does it.

So I'm wondering if THP behavior can be made to mimic hugetlbfs closely enough
(perhaps via another option, in addition to "always", "never", and "madvise") that
we could just use THP in all cases. The "transparent" part would become a sliding
scale, going all the way down to "opaque" (hugetlbfs behavior).


thanks,
-- 
John Hubbard
NVIDIA


Thread overview: 21+ messages
2018-09-04 21:28 Daniel Jordan
2018-09-05  6:38 ` Mike Rapoport
2018-09-05 19:51   ` Pasha Tatashin
2018-09-06  5:49     ` Mike Rapoport
2018-09-05 15:10 ` Christopher Lameter
2018-09-05 16:17   ` Laurent Dufour
2018-09-05 17:11     ` Christopher Lameter
2018-09-05 23:01     ` Thomas Gleixner
2018-09-06  7:45       ` Laurent Dufour
2018-09-06  1:58   ` Huang, Ying
2018-09-06 14:41     ` Christopher Lameter
2018-09-07  2:17       ` Huang, Ying
2018-09-06 21:36     ` Mike Kravetz
2018-09-07  0:52       ` Hugh Dickins
2018-09-08  4:13 ` John Hubbard
2018-09-10 17:09   ` Waiman Long
2018-09-10 17:20     ` Davidlohr Bueso
2018-09-10 17:34       ` John Hubbard [this message]
2018-09-11  0:29         ` Daniel Jordan
2018-09-11 13:52           ` Waiman Long
2018-09-11  0:38   ` Daniel Jordan
