From: Hugh Dickins <hughd@google.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: Hugh Dickins <hughd@google.com>,
Andrew Morton <akpm@linux-foundation.org>,
Michal Hocko <mhocko@suse.com>, Vlastimil Babka <vbabka@suse.cz>,
"Kirill A. Shutemov" <kirill@shutemov.name>,
David Hildenbrand <david@redhat.com>,
Alistair Popple <apopple@nvidia.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Rik van Riel <riel@surriel.com>,
Suren Baghdasaryan <surenb@google.com>,
Yu Zhao <yuzhao@google.com>, Greg Thelen <gthelen@google.com>,
Shakeel Butt <shakeelb@google.com>,
Yang Li <yang.lee@linux.alibaba.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v2 04/13] mm/munlock: rmap call mlock_vma_page() munlock_vma_page()
Date: Tue, 15 Feb 2022 13:38:20 -0800 (PST)
Message-ID: <3c6097a7-df8c-f39c-36e8-8b5410e76c8a@google.com>
In-Reply-To: <YgvFMjWPITbD1o64@casper.infradead.org>

On Tue, 15 Feb 2022, Matthew Wilcox wrote:
> On Mon, Feb 14, 2022 at 06:26:39PM -0800, Hugh Dickins wrote:
> > Add vma argument to mlock_vma_page() and munlock_vma_page(), make them
> > inline functions which check (vma->vm_flags & VM_LOCKED) before calling
> > mlock_page() and munlock_page() in mm/mlock.c.
> >
> > Add bool compound to mlock_vma_page() and munlock_vma_page(): this is
> > because we have understandable difficulty in accounting pte maps of THPs,
> > and if passed a PageHead page, mlock_page() and munlock_page() cannot
> > tell whether it's a pmd map to be counted or a pte map to be ignored.
> >
> [...]
> >
> > Mlock accounting on THPs has been hard to define, differed between anon
> > and file, involved PageDoubleMap in some places and not others, required
> > clear_page_mlock() at some points. Keep it simple now: just count the
> > pmds and ignore the ptes, there is no reason for ptes to undo pmd mlocks.
>
> How would you suggest we handle the accounting for folios which are
> intermediate in size between PMDs and PTEs? eg, an order-4 page?
> Would it make sense to increment mlock_count by HPAGE_PMD_NR for
> each PMD mapping and by 1 for each PTE mapping?
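For context, the change described in the quoted text boils down to inline
wrappers of roughly this shape (a sketch reconstructed from the description
above, not the exact code of the patch):

static inline void mlock_vma_page(struct page *page,
                struct vm_area_struct *vma, bool compound)
{
        /*
         * Only pages in a VM_LOCKED vma are of interest; and when the
         * page is compound, only act on the pmd map (compound == true):
         * a pte map of a THP is the "to be ignored" case above.
         */
        if (unlikely(vma->vm_flags & VM_LOCKED) &&
            (compound || !PageTransCompound(page)))
                mlock_page(page);
}

static inline void munlock_vma_page(struct page *page,
                struct vm_area_struct *vma, bool compound)
{
        /* same gating as above, undoing the mlock side */
        if (unlikely(vma->vm_flags & VM_LOCKED) &&
            (compound || !PageTransCompound(page)))
                munlock_page(page);
}
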
I think you're asking the wrong question here, but perhaps you've
already decided there's only one satisfactory answer to the right question.

To answer what you've asked: it doesn't matter at all how you count them
in mlock_count, just so long as they are counted up and down consistently.
Since it's simplest just to count 1 in mlock_count for each pmd or pte,
I prefer that (as I did with THPs); but if you prefer to count pmds up
and down by HPAGE_PMD_NR, that works too.
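To spell that out with a purely illustrative sketch (invented helper names,
nothing from this series): either convention is fine, provided the munlock
side mirrors the mlock side exactly:

static void folio_count_mlock(struct folio *folio, bool pmd_mapped)
{
        /* convention preferred above: 1 per mlocked map, pte or pmd */
        folio->mlock_count += 1;
        /* or: folio->mlock_count += pmd_mapped ? HPAGE_PMD_NR : 1; */
}

static void folio_uncount_mlock(struct folio *folio, bool pmd_mapped)
{
        /* whichever unit was added above must be subtracted here */
        folio->mlock_count -= 1;
}

All that matters is that mlock_count comes back to zero exactly when the
last mlocking map of the folio is gone.
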
Though, reading again, you're asking about a PMD mapping of an order-4
page? I don't understand how that could be allowed (except on some
non-x86 architecture where the page table fits only 16 pages).

The question I thought you should be asking is about how to count them
in Mlocked. That's tough; but I take it for granted that you would not
want per-subpage flags and counts involved (or not unless forced to do
so by some regression that turns out to matter). And I think the only
satisfactory answer is to count the whole compound_nr() as Mlocked
when any part of it (a single pte, a series of ptes, a pmd) is mlocked;
and (try to) move folio to Unevictable whenever any part of it is mlocked.
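
Sketched very roughly (hypothetical helpers again, ignoring the locking
and the mlock pagevec batching elsewhere in the series), that policy
would be:

/*
 * Hypothetical sketch of the policy argued for above, not code from the
 * series: the first mlocked map of any part of the folio, pte or pmd,
 * accounts the whole folio as Mlocked and marks it unevictable; that is
 * only undone when the last mlocking map is gone.
 */
static void folio_account_mlock(struct folio *folio)
{
        if (folio->mlock_count++ == 0) {
                mod_zone_page_state(folio_zone(folio), NR_MLOCK,
                                    folio_nr_pages(folio));
                folio_set_unevictable(folio);
                /* and move the folio to the unevictable LRU list */
        }
}

static void folio_unaccount_mlock(struct folio *folio)
{
        if (--folio->mlock_count == 0) {
                mod_zone_page_state(folio_zone(folio), NR_MLOCK,
                                    -folio_nr_pages(folio));
                folio_clear_unevictable(folio);
                /* and let vmscan make it evictable again */
        }
}

That does mean a single mlocked pte keeps the whole folio accounted as
Mlocked, which is where the larger numbers mentioned below come from.
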
That differs from what Kirill decided for THPs (which I cannot
confidently describe, but it was something like: count the pmd as Mlocked,
don't count the ptes, but uncount the pmd if there are any ptes), and from
what I simplified it to in the mm/munlock series (count the pmd as Mlocked,
ignore the ptes); it will tend to show larger numbers for Mlocked than
before, but the alternatives seem unworkable to me.

Hugh