From: SeongJae Park <sj@kernel.org>
To: Piyush Sachdeva <piyushs@linux.ibm.com>
Cc: sjpark@amazon.de, damon@lists.linux.dev, linux-mm@kvack.org,
	aneesh.kumar@linux.ibm.com
Subject: Re: DAMON testing and benchmarking
Date: Wed, 14 Jun 2023 17:27:00 +0000	[thread overview]
Message-ID: <20230614172700.82480-1-sj@kernel.org> (raw)
In-Reply-To: <e6e8c252-f7b4-e71f-0315-f6cc774354d2@linux.ibm.com>

Hi Piyush,

On Tue, 13 Jun 2023 12:18:48 +0530 Piyush Sachdeva <piyushs@linux.ibm.com> wrote:

> Dear Mr. SeongJae Park,
> I hope this email finds you well.

It did, thank you for this email :)

> 
> For the last few months, I have been looking at DAMON from both an end-user's
> perspective and a developer's PoV. Most recently, I have been focusing on the
> `lru_sort.c` module, which uses the `lru_prio` and `lru_deprio` operations
> to achieve more precise reclaim. In my understanding, the `lru_sort.c`
> module would make intelligent decisions based on the access frequency of
> the pages, preventing hot pages from being swapped out. Hence, when
> integrated with an LRU algorithm, it should improve it.
> 
> Could you share any test/benchmark that you might have run to verify
> the above assumption?

Yes, of course.  I will share those soon.
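
In case it is useful while I prepare those, below is a rough sketch of how I
do a quick sanity check of DAMON_LRU_SORT on a test machine.  This assumes a
kernel built with CONFIG_DAMON_LRU_SORT and the module parameters interface
described in Documentation/admin-guide/mm/damon/lru_sort.rst; the threshold
values are only illustrative, not tuning recommendations.

    # Mark regions accessed in >=50% (500 permil) of sampling events as hot,
    # and regions unaccessed for >=120 seconds (value is in microseconds) as cold.
    echo 500 > /sys/module/damon_lru_sort/parameters/hot_thres_access_freq
    echo 120000000 > /sys/module/damon_lru_sort/parameters/cold_min_age

    # Start DAMON_LRU_SORT with the parameters above.
    echo y > /sys/module/damon_lru_sort/parameters/enabled

    # Confirm the worker kdamond is running (prints -1 if it is not).
    cat /sys/module/damon_lru_sort/parameters/kdamond_pid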

> I did find the result numbers you posted (link below), but they don't
> mention the "plrus-*" scheme numbers. They also don't have numbers for
> running the `pageout` operation on the entire physical address space (paddr),
> i.e., the `pprcl` scheme. So, if you could link those too, it would be amazing.

We run an automated test[1] every day against the latest damon/next tree, and
the page you linked is the output of that test.  The latest version contains the
results from `pprcl`[2] and `plrus`[3], but I have been too lazy to update the
document, sorry.  I will try to clean up the mess as soon as possible.

> 
> Can you also share any real-world (memory-management specific) workload
> results that you may have used with DAMON in your experiments? For example,
> either MongoDB or memcached over Parsec3.0 (including splash2x), which, in
> my understanding, is less memory intensive and more architecture-inclined.

On my personal testing setup, I'm using only parsec3 and splash2x at the
moment.  We have heard that a production DB system is using DAMON_RECLAIM and
achieved about a 20% memory footprint reduction, though.

> 
> I also had a question regarding schemes. A scheme is highly tweakable,
> and it is what the efficiency of DAMON rests upon: the more precise the
> scheme, the more efficient DAMON will be. Hence, I'd be thankful if you
> could help me derive a config that would provide the best results.

Very good point.  Unfortunately, repeated experimentation and adjustment is the
only way as of now, as with other tuning practices.  Nevertheless, DAMOS supports
some safeguards such as quotas[4], watermarks[5], and filters[6].  Because the
quotas feature provides prioritization, setting the access pattern a little
wide and focusing more on tuning the quotas might be a good practice.
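
To make that a bit more concrete, below is a rough sketch of the kind of quota
tuning I mean, assuming a DAMOS scheme has already been installed as scheme 0
of the first kdamond/context via the sysfs interface described in
Documentation/admin-guide/mm/damon/usage.rst.  The numbers are placeholders
for illustration only.

    SCHEME=/sys/kernel/mm/damon/admin/kdamonds/0/contexts/0/schemes/0

    # Let the scheme consume at most 10 ms of CPU time and apply its action to
    # at most 128 MiB of memory per 1 second quota charging window.
    echo 10 > $SCHEME/quotas/ms
    echo $((128 * 1024 * 1024)) > $SCHEME/quotas/bytes
    echo 1000 > $SCHEME/quotas/reset_interval_ms

    # Under that budget, prioritize regions mostly by access frequency, then
    # by age, and least by size (weights are in permil).
    echo 200 > $SCHEME/quotas/weights/sz_permil
    echo 500 > $SCHEME/quotas/weights/nr_accesses_permil
    echo 300 > $SCHEME/quotas/weights/age_permil

    # If the kdamond is already running, ask it to re-read the updated values.
    echo commit > /sys/kernel/mm/damon/admin/kdamonds/0/state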

I'm trying to add some more easy-to-use, intuitive tuning knobs, including
feedback-based quota auto-tuning, for which I shared the rough idea at LSFMM[7].

> Hope to hear from you soon.

Thank you again for these great questions.  Please feel free to ask any questions
or for help :)

[1] https://github.com/awslabs/damon-tests/blob/next/perf/full_run.sh
[2] https://github.com/awslabs/damon-tests/blob/next/perf/schemes/pdarc_v4_2_2.json
[3] https://github.com/awslabs/damon-tests/blob/next/perf/schemes/plrus-2.json
[4] https://www.kernel.org/doc/html/next/mm/damon/design.html#quotas
[5] https://www.kernel.org/doc/html/next/mm/damon/design.html#watermarks
[6] https://www.kernel.org/doc/html/next/mm/damon/design.html#filters
[7] https://lwn.net/Articles/931769/


Thanks,
SJ

> 
> Test results: https://damonitor.github.io/test/result/perf/latest/html/
> 
> --
> Regards,
> Piyush Sachdeva
> 

