linux-mm.kvack.org archive mirror
From: Johannes Weiner <hannes@cmpxchg.org>
To: David Hildenbrand <david@redhat.com>
Cc: Stefan Roesch <shr@devkernel.io>,
	kernel-team@fb.com, linux-mm@kvack.org, riel@surriel.com,
	mhocko@suse.com, linux-kselftest@vger.kernel.org,
	linux-doc@vger.kernel.org, akpm@linux-foundation.org,
	Mike Kravetz <mike.kravetz@oracle.com>
Subject: Re: [PATCH v4 0/3] mm: process/cgroup ksm support
Date: Wed, 15 Mar 2023 17:19:27 -0400	[thread overview]
Message-ID: <20230315211927.GB116016@cmpxchg.org> (raw)
In-Reply-To: <20230315210545.GA116016@cmpxchg.org>

On Wed, Mar 15, 2023 at 05:05:47PM -0400, Johannes Weiner wrote:
> On Wed, Mar 15, 2023 at 09:03:57PM +0100, David Hildenbrand wrote:
> > On 10.03.23 19:28, Stefan Roesch wrote:
> > > So far KSM can only be enabled by calling madvise for memory regions. To
> > > be able to use KSM for more workloads, KSM needs to have the ability to be
> > > enabled / disabled at the process / cgroup level.
> > > 
> > > Use case 1:
> > > The madvise call is not available in the programming language. An example of
> > > this is programs with forked workloads written in a garbage-collected language
> > > without pointers; such a language cannot expose madvise.
> > > 
> > > In addition, the addresses of objects get moved around as they are garbage
> > > collected. KSM sharing needs to be enabled "from the outside" for these types
> > > of workloads.
> > > 
> > > Use case 2:
> > > The same interpreter can also be used for workloads where KSM brings no
> > > benefit or even has overhead. We'd like to be able to enable KSM on a workload
> > > by workload basis.
> > > 
> > > Use case 3:
> > > With the madvise call, sharing opportunities are only enabled for the current
> > > process: it is a workload-local decision. A considerable number of sharing
> > > opportunities may exist across multiple workloads or jobs. Only a higher-level
> > > entity like a job scheduler or container can know for certain whether it is
> > > running one or more instances of a job. That job scheduler, however, doesn't
> > > have the necessary internal workload knowledge to make targeted madvise calls.
> > > 
> > > Security concerns:
> > > In previous discussions security concerns have been brought up. The problem is
> > > that an individual workload does not know what else is running on a machine.
> > > Therefore it has to be very conservative about which memory areas can be
> > > shared. However, if the system is dedicated to running multiple jobs within
> > > the same security domain, it's the job scheduler that knows whether sharing
> > > can be safely enabled and is even desirable.
> > > 
> > > Performance:
> > > Experiments with using UKSM have shown a capacity increase of around 20%.
> > 
> > Stefan, can you do me a favor and investigate which pages we end up
> > deduplicating -- especially if it's mostly only the zeropage and if it's
> > still that significant when disabling THP?
> > 
> > 
> > I'm currently investigating with some engineers on playing with enabling KSM
> > on some selected processes (enabling it blindly on all VMAs of that process
> > via madvise() ).
> > 
> > One thing we noticed is that such processes (~50 of them, each ~20MiB) end up
> > saving ~2MiB of memory per process. That made me suspicious, because it's the
> > THP size.
> > 
> > What I think happens is that we have a 2 MiB area (stack?) and only touch a
> > single page. We get a whole 2 MiB THP populated. Most of that THP is zeroes.
> > 
> > KSM somehow ends up splitting that THP and deduplicates all resulting
> > zeropages. Thus, we "save" 2 MiB. Actually, it's more like we no longer
> > "waste" 2 MiB. I think the processes with KSM end up with fewer (or no)
> > THPs than the processes with only THP enabled, but I have only looked at
> > a sample of the processes' smaps so far.
> 
> THP and KSM is indeed an interesting problem. Better TLB hits with
> THPs, but reduced chance of deduplicating memory - which may or may
> not result in more IO that outweighs any THP benefits.
> 
> That said, the service in the experiment referenced above has swap
> turned on and is under significant memory pressure. Unused subpages
> (after a THP split) would get swapped out. The difference from KSM came
> from deduplicating pages that were in active use, not from internal THP
> fragmentation.

Brainfart, my apologies. It could have been the ksm-induced splits
themselves that allowed the unused subpages to get swapped out in the
first place.

But no, I double-checked that workload just now. On a weekly average,
it has about 50 anon THPs and 12 million regular anon pages. THP is not
a factor in the reduction results.



Thread overview: 36+ messages
2023-03-10 18:28 Stefan Roesch
2023-03-10 18:28 ` [PATCH v4 1/3] mm: add new api to enable ksm per process Stefan Roesch
2023-03-13 16:26   ` Johannes Weiner
2023-04-03 10:37   ` David Hildenbrand
2023-04-03 11:03     ` David Hildenbrand
2023-04-04 16:32       ` Stefan Roesch
2023-04-04 16:43       ` Stefan Roesch
2023-04-05  6:51       ` Christian Borntraeger
2023-04-05 16:04         ` David Hildenbrand
2023-04-03 15:50     ` Stefan Roesch
2023-04-03 17:02       ` David Hildenbrand
2023-03-10 18:28 ` [PATCH v4 2/3] mm: add new KSM process and sysfs knobs Stefan Roesch
2023-04-05 17:04   ` David Hildenbrand
2023-04-05 21:20     ` Stefan Roesch
2023-04-06 13:23       ` David Hildenbrand
2023-04-06 14:16         ` Johannes Weiner
2023-04-06 14:32           ` David Hildenbrand
2023-03-10 18:28 ` [PATCH v4 3/3] selftests/mm: add new selftests for KSM Stefan Roesch
2023-03-15 20:03 ` [PATCH v4 0/3] mm: process/cgroup ksm support David Hildenbrand
2023-03-15 20:23   ` Mike Kravetz
2023-03-15 21:05   ` Johannes Weiner
2023-03-15 21:19     ` Johannes Weiner [this message]
2023-03-15 21:45       ` David Hildenbrand
2023-03-15 21:47         ` David Hildenbrand
2023-03-30 16:19         ` Stefan Roesch
2023-03-28 23:09 ` Andrew Morton
2023-03-30  4:55   ` David Hildenbrand
2023-03-30 14:26     ` Johannes Weiner
2023-03-30 14:40       ` David Hildenbrand
2023-03-30 16:41         ` Stefan Roesch
2023-04-03  9:48           ` David Hildenbrand
2023-04-03 16:34             ` Stefan Roesch
2023-04-03 17:04               ` David Hildenbrand
2023-04-06 16:59               ` Stefan Roesch
2023-04-06 17:10                 ` David Hildenbrand
2023-03-30 20:18     ` Andrew Morton
