linux-mm.kvack.org archive mirror
From: "Sridhar, Kanchana P" <kanchana.p.sridhar@intel.com>
To: "Huang, Ying" <ying.huang@intel.com>
Cc: Nhat Pham <nphamcs@gmail.com>,
	Yosry Ahmed <yosryahmed@google.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"hannes@cmpxchg.org" <hannes@cmpxchg.org>,
	"chengming.zhou@linux.dev" <chengming.zhou@linux.dev>,
	"usamaarif642@gmail.com" <usamaarif642@gmail.com>,
	"ryan.roberts@arm.com" <ryan.roberts@arm.com>,
	"21cnbao@gmail.com" <21cnbao@gmail.com>,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
	"Zou, Nanhai" <nanhai.zou@intel.com>,
	"Feghali, Wajdi K" <wajdi.k.feghali@intel.com>,
	"Gopal, Vinodh" <vinodh.gopal@intel.com>,
	"Sridhar, Kanchana P" <kanchana.p.sridhar@intel.com>
Subject: RE: [PATCH v6 0/3] mm: ZSWAP swap-out of mTHP folios
Date: Fri, 20 Sep 2024 16:53:11 +0000
Message-ID: <SJ0PR11MB5678D6D34F8612F5CD86AE55C96C2@SJ0PR11MB5678.namprd11.prod.outlook.com>
In-Reply-To: <87msk2vgd4.fsf@yhuang6-desk2.ccr.corp.intel.com>

> -----Original Message-----
> From: Huang, Ying <ying.huang@intel.com>
> Sent: Friday, September 20, 2024 2:12 AM
> To: Sridhar, Kanchana P <kanchana.p.sridhar@intel.com>
> Cc: Nhat Pham <nphamcs@gmail.com>; Yosry Ahmed <yosryahmed@google.com>;
> linux-kernel@vger.kernel.org; linux-mm@kvack.org; hannes@cmpxchg.org;
> chengming.zhou@linux.dev; usamaarif642@gmail.com; ryan.roberts@arm.com;
> 21cnbao@gmail.com; akpm@linux-foundation.org; Zou, Nanhai <nanhai.zou@intel.com>;
> Feghali, Wajdi K <wajdi.k.feghali@intel.com>; Gopal, Vinodh <vinodh.gopal@intel.com>
> Subject: Re: [PATCH v6 0/3] mm: ZSWAP swap-out of mTHP folios
> 
> "Sridhar, Kanchana P" <kanchana.p.sridhar@intel.com> writes:
> 
> > Hi Nhat,
> >
> >> -----Original Message-----
> >> From: Nhat Pham <nphamcs@gmail.com>
> >> Sent: Thursday, August 29, 2024 4:46 PM
> >> To: Yosry Ahmed <yosryahmed@google.com>
> >> Cc: Sridhar, Kanchana P <kanchana.p.sridhar@intel.com>;
> >> linux-kernel@vger.kernel.org; linux-mm@kvack.org; hannes@cmpxchg.org;
> >> chengming.zhou@linux.dev; usamaarif642@gmail.com; ryan.roberts@arm.com;
> >> Huang, Ying <ying.huang@intel.com>; 21cnbao@gmail.com;
> >> akpm@linux-foundation.org; Zou, Nanhai <nanhai.zou@intel.com>;
> >> Feghali, Wajdi K <wajdi.k.feghali@intel.com>; Gopal, Vinodh <vinodh.gopal@intel.com>
> >> Subject: Re: [PATCH v6 0/3] mm: ZSWAP swap-out of mTHP folios
> >>
> >> On Thu, Aug 29, 2024 at 3:49 PM Yosry Ahmed <yosryahmed@google.com> wrote:
> >> >
> >> > On Thu, Aug 29, 2024 at 2:27 PM Kanchana P Sridhar
> >> >
> >> > We are basically comparing zram with zswap in this case, and it's not
> >> > fair because, as you mentioned, the zswap compressed data is being
> >> > accounted for while the zram compressed data isn't. I am not really
> >> > sure how valuable these test results are. Even if we remove the cgroup
> >> > accounting from zswap, we won't see an improvement; we should expect
> >> > performance similar to zram.
> >> >
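
A quick aside on the accounting asymmetry described here: zswap charges the
compressed object's size to the folio's obj_cgroup, so compressed data counts
toward the cgroup's memory.high/memory.max, whereas zram's compressed pool is
not charged to any memcg. Roughly, in zswap_store() (paraphrased from
mm/zswap.c, not verbatim; details vary by kernel version):

	/* After compressing the page into the zpool, charge the
	 * compressed size to the folio's obj_cgroup. This is why
	 * zswap usage counts toward memory.high while zram's does not. */
	if (objcg)
		obj_cgroup_charge_zswap(objcg, entry->length);
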
> >> > I think the test results that are really valuable are case 1, where
> >> > zswap users who currently disable CONFIG_THP_SWAP get to enable it
> >> > after this series.
> >>
> >> Ah, this is a good point.
> >>
> >> I think the point of comparing mTHP zswap vs. mTHP (SSD) swap is more
> >> of a sanity check. IOW, if mTHP swap outperforms mTHP zswap, then
> >> something is wrong (otherwise why would one enable zswap - might as well
> >> just use swap, since SSD swap with mTHP >>> zswap with mTHP >>> zswap
> >> without mTHP).
> >>
> >> That said, I don't think this benchmark can show it anyway. The access
> >> pattern here is such that all the allocated memory is really cold,
> >> so swap to disk (or to zram, which does not account memory usage
> >> towards the cgroup) is better by definition... And Kanchana does not
> >> seem to have access to a setup with larger SSD swapfiles? :)
> >
> > As a follow-up, I created a swapfile on disk to increase the SSD swap to 179G.
> 
> Are you sure you used a swapfile instead of a swap partition?  From the
> following code in scan_swap_map_slots(),
> 
> 	if (order > 0) {
> 		/*
> 		 * Should not even be attempting large allocations when huge
> 		 * page swap is disabled.  Warn and fail the allocation.
> 		 */
> 		if (!IS_ENABLED(CONFIG_THP_SWAP) ||
> 		    nr_pages > SWAPFILE_CLUSTER) {
> 			VM_WARN_ON_ONCE(1);
> 			return 0;
> 		}
> 
> 		/*
> 		 * Swapfile is not block device or not using clusters so unable
> 		 * to allocate large entries.
> 		 */
> 		if (!(si->flags & SWP_BLKDEV) || !si->cluster_info)
> 			return 0;
> 	}
> 
> large folios will be split when swapping to a swapfile.

I see. Thanks for this clarification. No, this is a configuration with a
175G swapfile on disk + 4G of SSD swap. Large folios being split for the
swapfile probably explains the memcg_swap_fail counts in this case.
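
(For completeness, my understanding of where the split happens: it is on the
swap-allocation failure path in shrink_folio_list(). The sketch below is
paraphrased from memory of mm/vmscan.c around this baseline, not verbatim,
and the exact code differs by kernel version:

	if (!add_to_swap(folio)) {
		int order = folio_order(folio);

		/* Order-0 folios have no fallback. */
		if (!folio_test_large(folio))
			goto activate_locked_split;
		/* Fall back to swapping out base pages. */
		if (split_folio_to_list(folio, folio_list))
			goto activate_locked;
		/* THP_SWPOUT_FALLBACK counts only PMD-size THPs, which is
		 * why thp_swpout_fallback stays 0 for 64K mTHP above; the
		 * per-order fallback shows up in the mTHP sysfs stats. */
		if ((1 << order) >= HPAGE_PMD_NR)
			count_vm_event(THP_SWPOUT_FALLBACK);
		count_mthp_stat(order, MTHP_STAT_SWPOUT_FALLBACK);
		/* Retry swap allocation for the individual base pages. */
		if (!add_to_swap(folio))
			goto activate_locked_split;
	}

I believe the failed large-folio allocation is also what shows up as
memcg_swap_fail: MEMCG_SWAP_FAIL is recorded when folio_alloc_swap()
returns an empty entry.)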

Thanks,
Kanchana

> 
> --
> Best Regards,
> Huang, Ying
> 
> >  64KB mTHP (cgroup memory.high set to 40G, no swap limit):
> >  =========================================================
> >  CONFIG_THP_SWAP=Y
> >  Sapphire Rapids server with 503 GiB RAM and 179G SSD swap backing
> >  device for zswap.
> >
> >  usemem --init-time -w -O --sleep 0 -n 70 1g:
> >
> >  -------------------------------------------------------------------------------
> >                     mm-unstable 9-17-2024           zswap-mTHP v6     Change wrt
> >                                  Baseline                               Baseline
> >                                  "before"                 "after"      (sleep 0)
> >  -------------------------------------------------------------------------------
> >  ZSWAP compressor       zstd     deflate-        zstd    deflate-  zstd deflate-
> >                                       iaa                     iaa            iaa
> >  -------------------------------------------------------------------------------
> >  Throughput (KB/s)    93,273       88,496     143,117     134,131    53%     52%
> >  sys time (sec)       316.68       349.00      917.88      877.74  -190%   -152%
> >  memcg_high           73,836       83,522     126,120     133,013
> >  memcg_swap_fail     261,136      324,533     494,191     578,824
> >  pswpin                   16           11           0           0
> >  pswpout           1,242,187    1,263,493           0           0
> >  zswpin                  694          668         712         702
> >  zswpout           3,991,403    4,933,901   9,289,092  10,461,948
> >  thp_swpout                0            0           0           0
> >  thp_swpout_               0            0           0           0
> >   fallback
> >  pgmajfault            3,488        3,353       3,377       3,499
> >  ZSWPOUT-64kB            n/a          n/a     110,067     103,957
> >  SWPOUT-64kB          77,637       78,968           0           0
> >  -------------------------------------------------------------------------------
> >
> > We do see a ~50% throughput improvement with mTHP-zswap wrt mTHP-SSD.
> > The sys time increase can be attributed to the higher swapout activity
> > occurring with zswap-mTHP.
> >
> > I hope this quantifies the benefit of mTHP-zswap wrt mTHP-SSD in a
> > non-swap-constrained setup. The 4G SSD swap setup data I shared
> > in my response to Yosry also indicates better throughput with mTHP-zswap
> > as compared to mTHP-SSD.
> >
> > Please do let me know if you have any other questions/suggestions.
> >
> > Thanks,
> > Kanchana
> >
> >>
> >> >
> >> > If we really want to compare CONFIG_THP_SWAP on before and after, it
> >> > should be with SSD because that's a more conventional setup. In this
> >> > case the users that have CONFIG_THP_SWAP=y only experience the
> >> > benefits of zswap with this series. You mentioned experimenting with
> >> > usemem to keep the memory allocated longer so that you're able to have
> >> > a fair test with the small SSD swap setup. Did that work?
> >> >
> >> > I am hoping Nhat or Johannes would shed some light on whether they
> >> > usually have CONFIG_THP_SWAP enabled or not with zswap. I am trying to
> >> > figure out if any reasonable setups enable CONFIG_THP_SWAP with zswap.
> >> > Otherwise the testing results from case 1 should be sufficient.
> >> >
> >> > >
> >> > > In my opinion, even though the test setup does not provide an accurate
> >> > > way to make a direct before/after comparison (because zswap usage is
> >> > > counted in the cgroup, hence towards memory.high), it still seems
> >> > > reasonable for zswap_store to support (m)THP, so that further
> >> > > performance improvements can be implemented.
> >> >
> >> > This is only referring to the results of case 2, right?
> >> >
> >> > Honestly, I wouldn't want to merge mTHP swapout support on its own
> >> > just because it enables further performance improvements without
> >> > having actual patches for them. But I don't think this captures the
> >> > results accurately as it dismisses case 1 results (which I think are
> >> > more reasonable).
> >> >
> >> > Thanks

Thread overview: 34+ messages
2024-08-29 21:27 Kanchana P Sridhar
2024-08-29 21:27 ` [PATCH v6 1/3] mm: Define obj_cgroup_get() if CONFIG_MEMCG is not defined Kanchana P Sridhar
2024-08-29 21:27 ` [PATCH v6 2/3] mm: zswap: zswap_store() extended to handle mTHP folios Kanchana P Sridhar
2024-08-29 23:06   ` Yosry Ahmed
2024-09-20  1:57     ` Sridhar, Kanchana P
2024-09-02 11:37   ` Chengming Zhou
2024-09-20  2:43     ` Sridhar, Kanchana P
2024-09-16  5:55   ` Barry Song
2024-09-20 20:53     ` Sridhar, Kanchana P
2024-08-29 21:27 ` [PATCH v6 3/3] mm: swap: Count successful mTHP ZSWAP stores in sysfs mTHP zswpout stats Kanchana P Sridhar
2024-08-30  0:19   ` Nhat Pham
2024-09-20  2:32     ` Sridhar, Kanchana P
2024-09-20 22:57   ` Yosry Ahmed
2024-09-20 23:28     ` Sridhar, Kanchana P
2024-08-29 22:48 ` [PATCH v6 0/3] mm: ZSWAP swap-out of mTHP folios Yosry Ahmed
2024-08-29 23:45   ` Nhat Pham
2024-08-29 23:54     ` Yosry Ahmed
2024-08-30  0:06       ` Nhat Pham
2024-08-30  0:14         ` Yosry Ahmed
2024-09-20  2:30           ` Sridhar, Kanchana P
2024-09-20  2:26         ` Sridhar, Kanchana P
2024-09-20  2:22       ` Sridhar, Kanchana P
2024-09-20  2:16     ` Sridhar, Kanchana P
2024-09-20  9:12       ` Huang, Ying
2024-09-20 16:53         ` Sridhar, Kanchana P [this message]
2024-08-30  9:27   ` Huang, Ying
2024-09-20  2:41     ` Sridhar, Kanchana P
2024-09-20  1:41   ` Sridhar, Kanchana P
2024-09-20  9:29     ` Huang, Ying
2024-09-20 17:57       ` Sridhar, Kanchana P
2024-09-20 23:15     ` Yosry Ahmed
2024-09-20 23:45       ` Sridhar, Kanchana P
2024-09-02 14:40 ` Usama Arif
2024-09-20 19:31   ` Sridhar, Kanchana P
