From: David Hildenbrand <david@redhat.com>
To: Ryan Roberts <ryan.roberts@arm.com>, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org,
Andrew Morton <akpm@linux-foundation.org>,
Jonathan Corbet <corbet@lwn.net>,
Mike Kravetz <mike.kravetz@oracle.com>,
Hugh Dickins <hughd@google.com>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>,
Yin Fengwei <fengwei.yin@intel.com>,
Yang Shi <shy828301@gmail.com>, Zi Yan <ziy@nvidia.com>
Subject: Re: [PATCH mm-unstable v1] mm: add a total mapcount for large folios
Date: Wed, 9 Aug 2023 21:17:31 +0200
Message-ID: <60b5b2a2-1d1d-661c-d61e-855178fff44d@redhat.com>
In-Reply-To: <181fcc79-b1c6-412f-9ca1-d1f21ef33e32@arm.com>
On 09.08.23 21:07, Ryan Roberts wrote:
> On 09/08/2023 09:32, David Hildenbrand wrote:
>> Let's track the total mapcount for all large folios in the first subpage.
>>
>> The total mapcount is what we actually want to know in folio_mapcount()
>> and it is also sufficient for implementing folio_mapped(). This also
>> gets rid of any "raciness" concerns as expressed in
>> folio_total_mapcount().
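
For reference, the readers then become roughly the following -- a minimal
sketch of the idea, not the exact hunks from the patch:

static inline int folio_mapcount(struct folio *folio)
{
        if (likely(!folio_test_large(folio)))
                return atomic_read(&folio->_mapcount) + 1;
        return atomic_read(&folio->_total_mapcount) + 1;
}

static inline bool folio_mapped(struct folio *folio)
{
        return folio_mapcount(folio) >= 1;
}

A single atomic read of _total_mapcount replaces looping over all subpages.
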
>>
>> With sub-PMD THP becoming more important and things looking promising
>> that we will soon get support for such anon THP, we want to avoid looping
>> over all pages of a folio just to calculate the total mapcount. Further,
>> we may soon want to use the total mapcount in other contexts more
>> frequently, so prepare for reading it efficiently and atomically.
>>
>> Make room for the total mapcount in page[1] of the folio by moving
>> _nr_pages_mapped to page[2] of the folio: it is not applicable to hugetlb
>> -- and with the total mapcount in place probably also not desirable even
>> if PMD-mappable hugetlb pages could get PTE-mapped at some point -- so we
>> can overlay the hugetlb fields.
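
As a sketch, the relevant fields then look roughly like this (surrounding
fields and exact placement elided; see the patch for the real layout):

struct folio {
        /* ... page[0] fields ... */

        /* page[1]: */
        atomic_t _entire_mapcount;
        atomic_t _total_mapcount;       /* new: -1 means "no mappings" */
        atomic_t _pincount;

        /* page[2], overlaying hugetlb-only fields: */
        atomic_t _nr_pages_mapped;      /* now rmap-internal only */
        /* ... */
};
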
>>
>> Note that we currently don't expect any order-1 compound pages / THP in
>> rmap code, and that such support is not planned. If ever desired, we could
>> move the compound mapcount to another page, because it only applies to
>> PMD-mappable folios that are definitely larger. Let's avoid consuming
>> more space elsewhere for now -- we might need more space for a different
>> purpose in some subpages soon.
>>
>> Maintain the total mapcount also for hugetlb pages. Use the total mapcount
>> to implement folio_mapcount(), total_mapcount(), folio_mapped() and
>> page_mapped().
>>
>> We can now get rid of folio_total_mapcount() and
>> folio_large_is_mapped(), by just inlining reading of the total mapcount.
>>
>> _nr_pages_mapped is now only used in rmap code, so it can no longer
>> accidentally be used externally on arbitrary order-1 pages. The remaining
>> usage is:
>>
>> (1) Detect how to adjust stats: NR_ANON_MAPPED and NR_FILE_MAPPED
>>     -> If we accounted the total folio as mapped when mapping a
>>        page (based on the total mapcount), we could remove that usage.
>>
>> (2) Detect when to add a folio to the deferred split queue
>>     -> If we applied a different heuristic, or scanned via the rmap on
>>        the memory reclaim path for partially mapped anon folios to
>>        split them, we could remove that usage as well.
>>
>> (Both usages are sketched after this list.)
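
Hypothetical, heavily simplified sketches of those two uses (the real rmap
code tracks this via COMPOUND_MAPPED bits and is considerably more careful):

/* (1) Only a page that newly became mapped contributes to the stats. */
if (atomic_inc_and_test(&page->_mapcount)) {
        atomic_inc(&folio->_nr_pages_mapped);
        __lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, 1);
}

/* (2) Mapped, but not fully mapped -> candidate for deferred split. */
if (folio_test_anon(folio) &&
    atomic_read(&folio->_nr_pages_mapped) < folio_nr_pages(folio))
        deferred_split_folio(folio);
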
>>
>> So maybe we can simplify things in the future and remove
>> _nr_pages_mapped. For now, leave these things as they are; they need more
>> thought. Hugh really did a nice job implementing that precise tracking
>> after all.
>>
>> Note: Not adding order-1 sanity checks to the file_rmap functions for
>> now. For anon pages, they are certainly not required.
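
(Such a sanity check would be a one-liner along these lines -- hypothetical,
not part of the patch:

        VM_WARN_ON_ONCE(folio_test_large(folio) && folio_order(folio) == 1);

i.e., catch order-1 compound pages before they ever reach the rmap code.)
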
>>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: Jonathan Corbet <corbet@lwn.net>
>> Cc: Mike Kravetz <mike.kravetz@oracle.com>
>> Cc: Hugh Dickins <hughd@google.com>
>> Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>> Cc: Yin Fengwei <fengwei.yin@intel.com>
>> Cc: Yang Shi <shy828301@gmail.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>
> Other than the nits and query on zeroing _total_mapcount below, LGTM. If zeroing
> is correct:
>
> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Thanks for the review!
[...]
>>
>> static inline int total_mapcount(struct page *page)
>
> nit: couldn't total_mapcount() just be implemented as a wrapper around
> folio_mapcount()? What's the benefit of the PageCompound() check over just
> getting the folio and checking whether it's large? i.e.:
Good point, let me take a look tomorrow at whether the compiler can
optimize both cases equally well.
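
For reference, the wrapper Ryan is suggesting would be something like:

static inline int total_mapcount(struct page *page)
{
        return folio_mapcount(page_folio(page));
}

versus open-coding the small-page/compound distinction with PageCompound().
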
[...]
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 5f498e8025cc..6a614c559ccf 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -1479,7 +1479,7 @@ static void __destroy_compound_gigantic_folio(struct folio *folio,
>> struct page *p;
>>
>> atomic_set(&folio->_entire_mapcount, 0);
>> - atomic_set(&folio->_nr_pages_mapped, 0);
>> + atomic_set(&folio->_total_mapcount, 0);
>
> Just checking this is definitely what you intended? _total_mapcount is -1 when
> it means "no pages mapped", so 0 means 1 page mapped?
I was blindly doing what _entire_mapcount is doing: zeroing out the
values. ;)

But let's look at the details: in __destroy_compound_gigantic_folio(),
we're manually dissolving the whole compound page. So instead of
actually returning a compound page to the buddy (where we would make
sure the mapcounts are -1, to then zero them out!), we simply zero out
the fields we use and then dissolve the compound page, to be left with a
bunch of order-0 pages whose memmap is in a clean state.

(The buddy doesn't handle that page order, so we have to do things
manually to get to order-0 pages we can reuse or free.)
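
Compressed, the dissolution looks roughly like this (a sketch; details such
as flag clearing are elided):

atomic_set(&folio->_entire_mapcount, 0);
atomic_set(&folio->_total_mapcount, 0);
atomic_set(&folio->_pincount, 0);

for (i = 1; i < nr_pages; i++) {
        p = folio_page(folio, i);
        p->mapping = NULL;
        clear_compound_head(p);
        if (!demote)
                set_page_refcounted(p);
}
__folio_clear_head(folio);
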
--
Cheers,
David / dhildenb