linux-mm.kvack.org archive mirror
From: Yosry Ahmed <yosryahmed@google.com>
To: "Sridhar, Kanchana P" <kanchana.p.sridhar@intel.com>
Cc: Nhat Pham <nphamcs@gmail.com>,
	 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	 "hannes@cmpxchg.org" <hannes@cmpxchg.org>,
	"ryan.roberts@arm.com" <ryan.roberts@arm.com>,
	 "Huang, Ying" <ying.huang@intel.com>,
	"21cnbao@gmail.com" <21cnbao@gmail.com>,
	 "akpm@linux-foundation.org" <akpm@linux-foundation.org>,
	"Zou, Nanhai" <nanhai.zou@intel.com>,
	 "Feghali, Wajdi K" <wajdi.k.feghali@intel.com>,
	"Gopal, Vinodh" <vinodh.gopal@intel.com>
Subject: Re: [PATCH v4 0/4] mm: ZSWAP swap-out of mTHP folios
Date: Wed, 28 Aug 2024 00:43:45 -0700	[thread overview]
Message-ID: <CAJD7tkb0Lnq+mrFtpba80ck76BF2Hnc9Rn8OVs_7dqmE2Hww2w@mail.gmail.com> (raw)
In-Reply-To: <SJ0PR11MB567807116A760D785F9822EBC9952@SJ0PR11MB5678.namprd11.prod.outlook.com>

[..]
>
> This shows that in all cases, reclaim_high() is called only from the return
> path to user mode after handling a page-fault.

I am sorry I haven't been keeping up with this thread; I don't have a
lot of capacity right now.

If my understanding is correct, the summary of the problem we are
observing here is that with high concurrency (70 processes), we
observe worse system time, worse throughput, and higher memory_high
events with zswap than SSD swap. This is true (with varying degrees)
for 4K or mTHP, and with or without charging zswap compressed memory.

Did I get that right?

I saw you also mentioned that reclaim latency is directly correlated
to higher memory_high events.

Is it possible that with SSD swap, because we wait for IO during
reclaim, other processes get a chance to allocate and free the memory
they need, while with zswap, because everything is synchronous, all
processes try to allocate their memory at the same time, resulting in
higher reclaim rates?

IOW, maybe with zswap all the processes try to allocate their memory
at the same time, so the total amount of memory needed at any given
instant is much higher than memory.high, so we keep producing
memory_high events and reclaiming. If 70 processes all require 1G at
the same time, then we need 70G of memory at once, and we will keep
thrashing pages in/out of zswap.

While with SSD swap, due to the waits imposed by IO, the allocations
are more spread out and more serialized, so the amount of memory
needed at any given instant is lower, resulting in less reclaim
activity and ultimately faster overall execution?
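For what it's worth, the hypothesis above can be sketched as a toy
model (purely illustrative, not a measurement; the overlap fractions
and the peak_demand helper are assumptions of mine, not anything
measured in this thread):

```python
# Toy model of peak concurrent memory demand: if each of N processes
# needs per_proc_gb during its allocation phase, peak demand depends on
# what fraction of those phases coincide. Synchronous zswap reclaim
# would push overlap toward 1.0; SSD IO waits would stagger the phases.

def peak_demand(n_procs, per_proc_gb, overlap):
    """Peak concurrent demand (GB) if `overlap` is the fraction of the
    N allocation phases that coincide at the worst moment."""
    return per_proc_gb * max(1, round(n_procs * overlap))

# zswap-like: everything synchronous, phases largely coincide.
print(peak_demand(70, 1, overlap=1.0))   # 70 GB needed at once
# SSD-like: IO waits spread the phases out (0.2 is an assumed value).
print(peak_demand(70, 1, overlap=0.2))   # 14 GB needed at once
```

If peak demand stays well above memory.high in the zswap case but not
in the SSD case, that alone could explain the extra memory_high events.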

Could you please describe what the processes are doing? Are they
allocating memory and holding on to it, or immediately freeing it?

Do you have visibility into when each process allocates and frees memory?
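Assuming cgroup v2, one low-overhead way to get that visibility would
be to periodically sample the workload cgroup's memory.events "high"
counter alongside memory.current and timestamp each sample; where the
counter deltas cluster tells you when the processes were allocating.
A rough sketch (the cgroup path is a placeholder):

```python
# Sample a cgroup v2 memory.events file and memory.current, timestamped,
# so memory_high events can be correlated with allocation activity.
import time

def parse_memory_events(text):
    """Parse cgroup v2 memory.events ('key value' per line) into a dict."""
    return {k: int(v) for k, v in
            (line.split() for line in text.splitlines() if line)}

def sample(cgroup="/sys/fs/cgroup/test"):  # placeholder path
    with open(f"{cgroup}/memory.events") as f:
        events = parse_memory_events(f.read())
    with open(f"{cgroup}/memory.current") as f:
        current = int(f.read())
    return time.time(), events.get("high", 0), current
```

Polling sample() once a second while the benchmark runs and diffing the
"high" counter between samples should show whether the events are
spread out (SSD-like) or bursty (zswap-like).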



Thread overview: 31+ messages
2024-08-19  2:16 Kanchana P Sridhar
2024-08-19  2:16 ` [PATCH v4 1/4] mm: zswap: zswap_is_folio_same_filled() takes an index in the folio Kanchana P Sridhar
2024-08-19  2:16 ` [PATCH v4 2/4] mm: zswap: zswap_store() extended to handle mTHP folios Kanchana P Sridhar
2024-08-20 20:03   ` Sridhar, Kanchana P
2024-08-19  2:16 ` [PATCH v4 3/4] mm: Add MTHP_STAT_ZSWPOUT to sysfs per-order mthp stats Kanchana P Sridhar
2024-08-19  2:16 ` [PATCH v4 4/4] mm: swap: Count successful mTHP ZSWAP stores in sysfs mTHP stats Kanchana P Sridhar
2024-08-19  3:16 ` [PATCH v4 0/4] mm: ZSWAP swap-out of mTHP folios Huang, Ying
2024-08-19  5:12   ` Sridhar, Kanchana P
2024-08-19  5:51     ` Huang, Ying
2024-08-20  3:00       ` Sridhar, Kanchana P
2024-08-20 21:13         ` Nhat Pham
2024-08-20 22:09           ` Sridhar, Kanchana P
2024-08-21 14:42 ` Nhat Pham
2024-08-21 19:07   ` Sridhar, Kanchana P
2024-08-24  6:21     ` Sridhar, Kanchana P
2024-08-26 14:12       ` Nhat Pham
2024-08-27  6:08         ` Sridhar, Kanchana P
2024-08-27 15:23           ` Nhat Pham
2024-08-27 15:30             ` Nhat Pham
2024-08-27 18:43               ` Sridhar, Kanchana P
2024-08-28  7:27                 ` Sridhar, Kanchana P
2024-08-27 18:42             ` Sridhar, Kanchana P
2024-08-28  7:24               ` Sridhar, Kanchana P
2024-08-28  7:43                 ` Yosry Ahmed [this message]
2024-08-28 18:50                   ` Sridhar, Kanchana P
2024-08-28 22:34                     ` Yosry Ahmed
2024-08-29  0:14                       ` Sridhar, Kanchana P
2024-08-24  3:09   ` Yosry Ahmed
2024-08-24  6:24     ` Sridhar, Kanchana P
2024-08-27 14:55 ` Nhat Pham
2024-08-27 18:09   ` Sridhar, Kanchana P
