From: Mike Rapoport <rppt@kernel.org>
To: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: lsf-pc@lists.linux-foundation.org, linux-mm@kvack.org,
Aaron Lu <aaron.lu@intel.com>,
"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: Re: [LSF/MM/BPF TOPIC] reducing direct map fragmentation
Date: Fri, 24 Feb 2023 16:45:21 +0200 [thread overview]
Message-ID: <Y/jNgaPoAiF0Cp+K@kernel.org> (raw)
In-Reply-To: <Y/OG9+ShCfiGNUfE@localhost>
On Mon, Feb 20, 2023 at 02:43:03PM +0000, Hyeonggon Yoo wrote:
> On Sun, Feb 19, 2023 at 08:09:07PM +0200, Mike Rapoport wrote:
> > On Sun, Feb 19, 2023 at 08:07:59AM +0000, Hyeonggon Yoo wrote:
>
> > > > My current proposal is to have a cache of 2M pages close to the page
> > > > allocator and use a GFP flag to make allocation requests use that cache.
> > > > On the free() path, the pages that are mapped at PTE level will be put
> > > > into that cache.
> > >
> > > I would like to discuss not only having a cache layer of pages but also
> > > how the direct map could be merged correctly and efficiently.
> > >
> > > I vaguely recall that Aaron Lu sent an RFC series about this and that
> > > Kirill A. Shutemov's feedback was to batch merge operations. [1]
> > >
> > > Also, a CPA API called by the cache layer to merge fragmented
> > > mappings would work for merging 4K pages into a 2M mapping [2], but
> > > won't work for merging 2M mappings into 1G mappings.
> >
> > One possible way is to make CPA scan all the PMDs in the 1G page after
> > merging a 2M page. Not sure how efficient that would be, though.
>
> That seems similar to what Kirill A. Shutemov has tried.
> He may have opinions about that?
>
> [3] https://lore.kernel.org/lkml/20200416213229.19174-1-kirill.shutemov@linux.intel.com
Kirill's patch attempted to restore a 1G page on each cpa_flush(), so it
scanned a lot of page tables without any guarantee that collapsing the
small mappings into a large page was actually possible.

It would be more efficient to call a function that collapses a 2M range
only when we know for sure that the collapse is possible.
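
For illustration, a very rough sketch of such a helper (x86; everything
here is hypothetical rather than an existing kernel API, and locking,
splitting back on failure, and freeing of the old PTE table are glossed
over):

#include <linux/mm.h>
#include <asm/tlbflush.h>

/*
 * Hypothetical helper, called e.g. from the free() path of the proposed
 * 2M page cache once an entire 2M-aligned, PTE-mapped region of the
 * direct map is known to be uniform: fold the 512 PTEs back into a
 * single PMD mapping.
 */
static int collapse_direct_map_2m(unsigned long addr)
{
	pgd_t *pgd = pgd_offset_k(addr & PMD_MASK);
	p4d_t *p4d = p4d_offset(pgd, addr);
	pud_t *pud = pud_offset(p4d, addr);
	pmd_t *pmd = pmd_offset(pud, addr);
	pte_t *pte = pte_offset_kernel(pmd, addr & PMD_MASK);
	unsigned long pfn = pte_pfn(*pte);
	pgprot_t prot = pte_pgprot(*pte);
	int i;

	/* The caller believes the collapse is possible; verify anyway. */
	for (i = 0; i < PTRS_PER_PTE; i++, pte++)
		if (!pte_present(*pte) || pte_pfn(*pte) != pfn + i ||
		    pgprot_val(pte_pgprot(*pte)) != pgprot_val(prot))
			return -EINVAL;

	/*
	 * Install the large mapping. A real implementation would also
	 * move the PAT bit to its large-page position (see
	 * pgprot_4k_2_large()) and free the now unused PTE page.
	 */
	set_pmd(pmd, pfn_pmd(pfn, prot));
	__flush_tlb_all();
	return 0;
}

After a successful 2M collapse, the same kind of check could be run over
the 512 PMDs of the enclosing 1G region to decide whether a PUD-level
collapse is possible as well, instead of rescanning unconditionally as
in [3].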
--
Sincerely yours,
Mike.
Thread overview: 8+ messages
2023-02-01 18:06 Mike Rapoport
2023-02-19 8:07 ` Hyeonggon Yoo
2023-02-19 18:09 ` Mike Rapoport
2023-02-20 14:43 ` Hyeonggon Yoo
2023-02-24 14:45 ` Mike Rapoport [this message]
2023-04-21 9:05 ` [Lsf-pc] " Michal Hocko
2023-04-21 9:47 ` Mike Rapoport
2023-04-21 12:41 ` Michal Hocko