From: Matthew Wilcox <willy@infradead.org>
To: linux-mm@kvack.org
Cc: Vishal Moola <vishal.moola@gmail.com>,
Hugh Dickins <hughd@google.com>, Rik van Riel <riel@surriel.com>,
David Hildenbrand <david@redhat.com>,
"Yin, Fengwei" <fengwei.yin@intel.com>
Subject: Re: Folio mapcount
Date: Mon, 6 Feb 2023 20:34:31 +0000
Message-ID: <Y+FkV4fBxHlp6FTH@casper.infradead.org>
In-Reply-To: <Y9Afwds/Jl39UjEp@casper.infradead.org>
On Tue, Jan 24, 2023 at 06:13:21PM +0000, Matthew Wilcox wrote:
> Once we get to the part of the folio journey where we have
> one-pointer-per-page, we can't afford to maintain per-page state.
> Currently we maintain a per-page mapcount, and that will have to go.
> We can maintain extra state for a multi-page folio, but it has to be a
> constant amount of extra state no matter how many pages are in the folio.
>
> My proposal is that we maintain a single mapcount per folio, and its
> definition is the number of (vma, page table) tuples which have a
> reference to any pages in this folio.
I've been thinking about this a lot more, and I have changed my
mind. That definition works fine for answering the question "Is any
page in this folio mapped?", but it makes it hard to answer the
question "I have it mapped; does anybody else?"  That question is
asked, for example, in madvise_cold_or_pageout_pte_range().
With this definition, if the mapcount is 1, the folio is definitely
mapped only by us. If it's more than 2, it's definitely mapped by
somebody else (*). If it's exactly 2, either we have the folio mapped
twice, or we have it mapped once and somebody else has it mapped once,
and only consulting the rmap can tell us which. Not fun times.
(*) If we support folios larger than PMD size, then the answer is more
complex.
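
To make that concrete, here is roughly what the check would have to
look like under the tuple definition. This is purely illustrative:
folio_mapped_elsewhere() and folio_mapped_twice_in() are made-up
names, and it assumes folio_mapcount() would return the tuple count
for a folio no larger than PMD size.

/*
 * Illustrative only, nothing here exists: "does anybody else have
 * this folio mapped?" under the (vma, page table) tuple definition,
 * for a folio no larger than PMD size.
 */
static bool folio_mapped_elsewhere(struct folio *folio,
		struct vm_area_struct *vma)
{
	int mapcount = folio_mapcount(folio);	/* assumed tuple count */

	if (mapcount <= 1)
		return false;	/* only our own tuple */
	if (mapcount > 2)
		return true;	/* one VMA can hold at most two tuples */
	/*
	 * mapcount == 2: either this VMA maps the folio across two
	 * page tables, or somebody else also maps it, and only an
	 * rmap walk can tell those two cases apart.
	 */
	return !folio_mapped_twice_in(folio, vma);	/* hypothetical */
}
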
I now think the mapcount has to be defined as "How many VMAs have
one or more pages of this folio mapped".
That means our future folio_add_file_rmap_range() looks a bit like
this (the signature is a guess; only the body matters here):
void folio_add_file_rmap_range(struct folio *folio, unsigned int nr,
		struct vm_area_struct *vma)
{
	bool add_mapcount = true;

	/*
	 * If we're only mapping part of the folio, this VMA may
	 * already map some other part of it, in which case the VMA
	 * is already counted.
	 */
	if (nr < folio_nr_pages(folio))
		add_mapcount = !folio_has_ptes(folio, vma);
	if (add_mapcount)
		atomic_inc(&folio->_mapcount);

	__lruvec_stat_mod_folio(folio, NR_FILE_MAPPED, nr);
	if (nr == HPAGE_PMD_NR)
		__lruvec_stat_mod_folio(folio, folio_test_swapbacked(folio) ?
				NR_SHMEM_PMDMAPPED : NR_FILE_PMDMAPPED, nr);
	mlock_vma_folio(folio, vma, nr == HPAGE_PMD_NR);
}
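
The unmap side would presumably mirror this, only decrementing once
the VMA's last PTE for the folio is gone; a speculative sketch (the
name and shape are guesses, and it assumes the PTEs for the range
have already been cleared before we check):

void folio_remove_file_rmap_range(struct folio *folio, unsigned int nr,
		struct vm_area_struct *vma)
{
	bool drop_mapcount = true;

	/* Keep the count while the VMA still maps any page of the folio */
	if (nr < folio_nr_pages(folio))
		drop_mapcount = !folio_has_ptes(folio, vma);
	if (drop_mapcount)
		atomic_dec(&folio->_mapcount);

	__lruvec_stat_mod_folio(folio, NR_FILE_MAPPED, -nr);
	if (nr == HPAGE_PMD_NR)
		__lruvec_stat_mod_folio(folio, folio_test_swapbacked(folio) ?
				NR_SHMEM_PMDMAPPED : NR_FILE_PMDMAPPED, -nr);
}

Either way, folio_has_ptes() needs to answer "does this VMA map any
page of this folio?", which is roughly what folio_mapped_in_vma()
below does:
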
bool folio_mapped_in_vma(struct folio *folio, struct vm_area_struct *vma)
{
	unsigned long address = vma_address(&folio->page, vma);
	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);

	/* Finding any mapped page at all is enough */
	if (!page_vma_mapped_walk(&pvmw))
		return false;
	page_vma_mapped_walk_done(&pvmw);
	return true;
}
... some details to be fixed here; in particular, this will currently
deadlock on the PTL, so we'd need not only to exclude the current
PMD from being examined, but also to avoid a deadly embrace between
two threads (do we currently have a locking order defined for
page table locks at the same height of the tree?).
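
One possible shape for the first half of that fix, purely as a
strawman: teach the walk to skip the page table whose lock the caller
already holds. Neither PVMW_SKIP_LOCKED nor the locked_pmd field
exists today; both are invented for illustration, and the two-thread
ordering question remains open.

/*
 * Strawman, nothing here exists: the caller already holds the PTL
 * for @locked_pmd, so ask the walk to step over that page table
 * instead of trying to take its lock again.
 */
bool folio_mapped_in_vma_except(struct folio *folio,
		struct vm_area_struct *vma, pmd_t *locked_pmd)
{
	unsigned long address = vma_address(&folio->page, vma);
	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_SKIP_LOCKED);

	pvmw.locked_pmd = locked_pmd;	/* invented field */
	if (!page_vma_mapped_walk(&pvmw))
		return false;
	page_vma_mapped_walk_done(&pvmw);
	return true;
}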