From: SeongJae Park <sj@kernel.org>
To: Vinay Banakar <vny@google.com>
Cc: SeongJae Park <sj@kernel.org>, Bharata B Rao <bharata@amd.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
akpm@linux-foundation.org, willy@infradead.org, mgorman@suse.de,
Wei Xu <weixugc@google.com>, Greg Thelen <gthelen@google.com>
Subject: Re: [PATCH] mm: Optimize TLB flushes during page reclaim
Date: Thu, 23 Jan 2025 09:23:38 -0800 [thread overview]
Message-ID: <20250123172338.53472-1-sj@kernel.org> (raw)
In-Reply-To: <CALf+9Ye+yuntf0V7SN03kYExdGBkUNRVuLKxY83oB-AKAcJ90w@mail.gmail.com>
On Thu, 23 Jan 2025 11:11:13 -0600 Vinay Banakar <vny@google.com> wrote:
> On Wed, Jan 22, 2025 at 2:05 PM SeongJae Park <sj@kernel.org> wrote:
> > damon_pa_pageout() from mm/damon/paddr.c also calls shrink_folio_list() similar
> > to madvise.c, but it doesn't aware such batching behavior. Have you checked
> > that path?
>
> Thanks for catching this path. In damon_pa_pageout(),
> shrink_folio_list() processes all pages from a single NUMA node that
> were collected (filtered) from a single DAMON region (r->ar.start to
> r->ar.end). This means it could be processing anywhere from 1 page up
> to ULONG_MAX pages from a single node at once.
Thank you Vinay. That's same to my understanding, except that it is not
limited to a single NUMA node. A region can have any start and end physical
addresses, so it could cover memory of different NUMA nodes.
> With the patch, we'll
> send a single IPI for TLB flush for the entire region, reducing IPIs
> by a factor equal to the number of pages being reclaimed by DAMON at
> once (decided by damon_reclaim_quota).
I guess the fact that the pages could belong to differnt NUMA nodes doesn't
make difference here?
>
> My only concern here would be the overhead of maintaining the
> temporary pageout_list for batching. However, during BIO submission,
> the patch checks if the folio was reactivated, so submitting to BIO in
> bulk should be safe.
>
> Another option would be to modify shrink_folio_list() to force batch
> flushes for up to N pages (512) at a time, rather than relying on
> callers to do the batching via folio_list.
Both sounds good to me :)
Thanks,
SJ
>
> Thanks!
> Vinay
next prev parent reply other threads:[~2025-01-23 17:23 UTC|newest]
Thread overview: 16+ messages / expand[flat|nested] mbox.gz Atom feed top
2025-01-20 22:47 Vinay Banakar
2025-01-21 0:05 ` Vinay Banakar
2025-01-22 8:59 ` Bharata B Rao
2025-01-22 11:09 ` Vinay Banakar
2025-01-22 11:31 ` Bharata B Rao
2025-01-22 13:28 ` Vinay Banakar
2025-01-22 20:05 ` SeongJae Park
2025-01-23 17:11 ` Vinay Banakar
2025-01-23 17:23 ` SeongJae Park [this message]
2025-01-23 18:17 ` Matthew Wilcox
2025-01-21 1:43 ` Byungchul Park
2025-01-21 18:03 ` Vinay Banakar
2025-01-23 4:17 ` Rik van Riel
2025-01-23 19:16 ` Vinay Banakar
2025-01-28 22:01 ` Rik van Riel
2025-03-17 19:20 ` Rik van Riel
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=20250123172338.53472-1-sj@kernel.org \
--to=sj@kernel.org \
--cc=akpm@linux-foundation.org \
--cc=bharata@amd.com \
--cc=gthelen@google.com \
--cc=linux-kernel@vger.kernel.org \
--cc=linux-mm@kvack.org \
--cc=mgorman@suse.de \
--cc=vny@google.com \
--cc=weixugc@google.com \
--cc=willy@infradead.org \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox