From: Zi Yan <ziy@nvidia.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org, Vishal Moola <vishal.moola@gmail.com>,
Hugh Dickins <hughd@google.com>, Rik van Riel <riel@surriel.com>,
David Hildenbrand <david@redhat.com>,
"Yin, Fengwei" <fengwei.yin@intel.com>
Subject: Re: Folio mapcount
Date: Fri, 10 Feb 2023 10:15:00 -0500
Message-ID: <4AACFAC5-635D-4A7C-AEB7-59BCD68E7943@nvidia.com>
In-Reply-To: <Y+P+DFz3flRsiq9c@casper.infradead.org>
On 8 Feb 2023, at 14:54, Matthew Wilcox wrote:
> On Wed, Feb 08, 2023 at 02:36:41PM -0500, Zi Yan wrote:
>> On 7 Feb 2023, at 11:51, Matthew Wilcox wrote:
>>
>>> On Tue, Feb 07, 2023 at 11:23:31AM -0500, Zi Yan wrote:
>>>> On 24 Jan 2023, at 13:13, Matthew Wilcox wrote:
>>>>
>>>>> Once we get to the part of the folio journey where we have
>>>>> one-pointer-per-page, we can't afford to maintain per-page state.
>>>>> Currently we maintain a per-page mapcount, and that will have to go.
>>>>> We can maintain extra state for a multi-page folio, but it has to be a
>>>>> constant amount of extra state no matter how many pages are in the folio.
>>>>>
>>>>> My proposal is that we maintain a single mapcount per folio, and its
>>>>> definition is the number of (vma, page table) tuples which have a
>>>>> reference to any pages in this folio.
>>>>
>>>> How about having two, full_folio_mapcount and partial_folio_mapcount?
>>>> If partial_folio_mapcount is 0, we can have a fast path without doing
>>>> anything at page level.
>>>
>>> A fast path for what? I don't understand your vision; can you spell it
>>> out for me? My current proposal is here:
>>
>> A fast code path for handling folios only as a whole. For cases where
>> subpages of a folio are mapped individually, traversing through the
>> subpages might be needed and will be slow. Separating the two code paths
>> might be cleaner and would make the whole-folio handling quicker.
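
(To make the idea above a bit more concrete, a toy sketch in plain C;
the structure and helpers below are made up for illustration and are not
actual kernel code:)

struct toy_folio {
	int full_mapcount;	/* mappings that cover the whole folio */
	int partial_mapcount;	/* mappings that cover only some subpages */
};

/* Slow path: would have to walk and unmap individual subpages. */
static void toy_unmap_subpages(struct toy_folio *folio)
{
	folio->partial_mapcount--;
}

/* Hypothetical unmap path: take the fast path when no subpage is mapped
 * individually, otherwise fall back to per-subpage handling. */
static void toy_unmap(struct toy_folio *folio)
{
	if (folio->partial_mapcount == 0) {
		folio->full_mapcount--;		/* whole-folio fast path */
		return;
	}
	toy_unmap_subpages(folio);
}
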
>
> To be clear, in this proposal, there is no subpage mapcount. I've got
> my eye on one struct folio per allocation, so there will be no more
> tail pages. The proposal has one mapcount, and that's it. I'd be
> open to saying "OK, we need two mapcounts", but not to anything that
> needs to scale per number of pages in the folio.
>
>> For your proposal, answering "how many VMAs have one or more pages of
>> this folio mapped" should be the responsibility of rmap. We could add a
>> counter to rmap instead. It seems that you are mixing page table mappings
>> with virtual address space (VMA) mappings.
>
> rmap tells you how many VMAs cover this folio. It doesn't tell you
> how many of those VMAs have actually got any pages from it mapped.
> It's also rather slower than a simple atomic_read(), so I think
> you'll have an uphill battle trying to convince people to use rmap
> for this purpose.
>
> I'm not sure what you mean by "add a counter to rmap"? One count
> per mapped page in the vma?
>
>>>
>>> https://lore.kernel.org/linux-mm/Y+FkV4fBxHlp6FTH@casper.infradead.org/
>>>
>>> The three questions we need to be able to answer (in my current
>>> understanding) are laid out here:
>>>
>>> https://lore.kernel.org/linux-mm/Y+HblAN5bM1uYD2f@casper.infradead.org/
>>
>> I think we probably need to clarify the definition of "map" in your
>> questions. Does it mean mapped by page tables or by VMAs? When a page
>> is mapped into a VMA, it can be mapped by one or more page table entries,
>> but not the other way around, right? Or has shared page table support
>> been merged now, so that more than one VMA can use a single page table
>> entry to map a folio?
>
> Mapped by page tables, just like today. It'd be quite the change to
> figure out the mapcount of a page newly brought into the page cache;
> we'd have to do an rmap walk to see how many mapcounts to give it.
> I don't think this is a great idea.
>
> As far as I know, shared page tables are only supported by hugetlbfs,
> and I prefer to stick cheese in my ears and pretend they don't exist.
>
> To be absolutely concrete about this, my proposal is:
>
> Folio brought into page cache has mapcount 0 (whether or not there are any VMAs
> that cover it)
> When we take a page fault on one of the pages in it, its mapcount
> increases from 0 to 1.
> When we take another page fault on a page in it, we do a pvmw to
> determine if any pages from this folio are already mapped by this VMA;
> we see that there is one and we do not increment the mapcount.
> We partially munmap() so that we need to unmap one of the pages.
> We remove it from the page tables and call page_remove_rmap().
> That does another pvmw and sees there's still a page in this folio
> mapped by this VMA, so it does not decrement the mapcount.
> We truncate() the file smaller than the position of the folio, which
> causes us to unmap the rest of the folio. The pvmw walk detects no
> more pages from this folio mapped and we decrement the mapcount.
>
> Clear enough?
Yes. Thanks.
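
Just to restate the transitions above in code for my own notes, here is
a toy model in standalone C (not kernel code; the pvmw walk is reduced
to a hypothetical "does this VMA still map any other page of the folio?"
check):

#include <stdbool.h>
#include <stdio.h>

struct toy_folio {
	int mapcount;	/* number of (vma, page table) tuples mapping any page */
};

/* Hypothetical stand-in for the pvmw walk described above. */
static bool other_page_mapped(int pages_mapped_in_vma)
{
	return pages_mapped_in_vma > 0;
}

/* Page fault maps one page of the folio into a VMA. */
static void toy_map_page(struct toy_folio *folio, int *pages_mapped_in_vma)
{
	if (!other_page_mapped(*pages_mapped_in_vma))
		folio->mapcount++;	/* first page of the folio in this VMA */
	(*pages_mapped_in_vma)++;
}

/* munmap()/truncate() unmaps one page of the folio from a VMA. */
static void toy_unmap_page(struct toy_folio *folio, int *pages_mapped_in_vma)
{
	(*pages_mapped_in_vma)--;
	if (!other_page_mapped(*pages_mapped_in_vma))
		folio->mapcount--;	/* last page of the folio in this VMA gone */
}

int main(void)
{
	struct toy_folio folio = { .mapcount = 0 };	/* fresh in the page cache */
	int mapped_in_vma = 0;

	toy_map_page(&folio, &mapped_in_vma);	/* first fault: 0 -> 1 */
	toy_map_page(&folio, &mapped_in_vma);	/* second fault: stays 1 */
	toy_unmap_page(&folio, &mapped_in_vma);	/* partial munmap: stays 1 */
	toy_unmap_page(&folio, &mapped_in_vma);	/* truncate: 1 -> 0 */
	printf("final mapcount: %d\n", folio.mapcount);
	return 0;
}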
--
Best Regards,
Yan, Zi