From: Yu Zhao <yuzhao@google.com>
To: Ryan Roberts <ryan.roberts@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>,
"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
Yin Fengwei <fengwei.yin@intel.com>,
David Hildenbrand <david@redhat.com>,
Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>,
Geert Uytterhoeven <geert@linux-m68k.org>,
Christian Borntraeger <borntraeger@linux.ibm.com>,
Sven Schnelle <svens@linux.ibm.com>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
Dave Hansen <dave.hansen@linux.intel.com>,
"H. Peter Anvin" <hpa@zytor.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
linux-alpha@vger.kernel.org,
linux-arm-kernel@lists.infradead.org,
linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
linux-s390@vger.kernel.org
Subject: Re: [PATCH v1 01/10] mm: Expose clear_huge_page() unconditionally
Date: Tue, 27 Jun 2023 12:26:24 -0600
Message-ID: <CAOUHufaEwY=cm8mBi4HSbxYBvAr_x4_vyZZM2NYHEt-U7KaFhA@mail.gmail.com>
In-Reply-To: <91e3364f-1d1b-f959-636b-4f60bf5a577b@arm.com>
On Tue, Jun 27, 2023 at 3:41 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> On 27/06/2023 09:29, Yu Zhao wrote:
> > On Tue, Jun 27, 2023 at 1:21 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
> >>
> >> On 27/06/2023 02:55, Yu Zhao wrote:
> >>> On Mon, Jun 26, 2023 at 11:14 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
> >>>>
> >>>> In preparation for extending vma_alloc_zeroed_movable_folio() to
> >>>> allocate an arbitrary-order folio, expose clear_huge_page()
> >>>> unconditionally, so that it can be used to zero the allocated folio in
> >>>> the generic implementation of vma_alloc_zeroed_movable_folio().
> >>>>
> >>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> >>>> ---
> >>>> include/linux/mm.h | 3 ++-
> >>>> mm/memory.c | 2 +-
> >>>> 2 files changed, 3 insertions(+), 2 deletions(-)
> >>>>
> >>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
> >>>> index 7f1741bd870a..7e3bf45e6491 100644
> >>>> --- a/include/linux/mm.h
> >>>> +++ b/include/linux/mm.h
> >>>> @@ -3684,10 +3684,11 @@ enum mf_action_page_type {
> >>>> */
> >>>> extern const struct attribute_group memory_failure_attr_group;
> >>>>
> >>>> -#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
> >>>> extern void clear_huge_page(struct page *page,
> >>>> unsigned long addr_hint,
> >>>> unsigned int pages_per_huge_page);
> >>>> +
> >>>> +#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
> >>>
> >>> We might not want to depend on THP eventually. Right now, we still
> >>> have to, unless splitting is optional, which seems to contradict
> >>> 06/10. (deferred_split_folio() is a nop without THP.)
> >>
> >> Yes, I agree - for large anon folios to work, we depend on THP. But I don't
> >> think that helps us here.
> >>
> >> In the next patch, I give vma_alloc_zeroed_movable_folio() an extra `order`
> >> parameter. So the generic/default version of the function now needs a way to
> >> clear a compound page.
> >>
> >> I guess I could do something like:
> >>
> >> static inline
> >> struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
> >> unsigned long vaddr, gfp_t gfp, int order)
> >> {
> >> struct folio *folio;
> >>
> >> folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE | gfp,
> >> order, vma, vaddr, false);
> >> if (folio) {
> >> #ifdef CONFIG_LARGE_FOLIO
> >> clear_huge_page(&folio->page, vaddr, 1U << order);
> >> #else
> >> BUG_ON(order != 0);
> >> clear_user_highpage(&folio->page, vaddr);
> >> #endif
> >> }
> >>
> >> return folio;
> >> }
> >>
> >> But that's pretty messy, and there's nothing to stop other users from coming
> >> along, passing order != 0 and being surprised by the BUG_ON.
> >
> > #ifdef CONFIG_LARGE_ANON_FOLIO // depends on CONFIG_TRANSPARENT_HUGEPAGE
> > struct folio *alloc_anon_folio(struct vm_area_struct *vma, unsigned
> > long vaddr, int order)
> > {
> > // how do_huge_pmd_anonymous_page() allocs and clears
> > vma_alloc_folio(..., *true*);
>
> This controls the mem allocation policy (see mempolicy.c::vma_alloc_folio()),
> not clearing. Clearing is done in __do_huge_pmd_anonymous_page():
>
> clear_huge_page(page, vmf->address, HPAGE_PMD_NR);
Sorry for rushing this previously. This is what I meant. The #ifdef
makes it safe to use clear_huge_page() without 01/10. I highlighted
the last parameter to vma_alloc_folio() only because it's different
from what you chose (not implying it clears the folio).
> > }
> > #else
> > #define alloc_anon_folio(vma, addr, order) \
> >         vma_alloc_zeroed_movable_folio(vma, addr)
> > #endif
>
> Sorry I don't get this at all... If you are suggesting to bypass
> vma_alloc_zeroed_movable_folio() entirely for the LARGE_ANON_FOLIO case
Correct.
> I don't
> think that works because the arch code adds its own gfp flags there. For
> example, arm64 adds __GFP_ZEROTAGS for VM_MTE VMAs.
I think it's the opposite: it should be safer to reuse the THP code because
1. It's an existing case that has been working for PMD_ORDER folios
mapped by PTEs, and it's an arch-independent API, which would be
easier to review.
2. Using vma_alloc_zeroed_movable_folio() for large folios is a *new*
case. It's an arch-*dependent* API, I have no idea what VM_MTE does
(or should do) to large folios, and I don't plan to answer that for
now.
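To be concrete, here is a rough sketch of what I'm suggesting,
fleshing out the pseudocode above. alloc_anon_folio() and
CONFIG_LARGE_ANON_FOLIO are placeholder names from this discussion,
and GFP_TRANSHUGE is only illustrative, not a final choice of flags:

#ifdef CONFIG_LARGE_ANON_FOLIO
static struct folio *alloc_anon_folio(struct vm_area_struct *vma,
                                      unsigned long vaddr, int order)
{
        /*
         * Mirror the THP path: the last parameter (hugepage == true)
         * only selects the allocation policy; it does not clear.
         */
        struct folio *folio = vma_alloc_folio(GFP_TRANSHUGE, order,
                                              vma, vaddr, true);

        /* Clear the same way __do_huge_pmd_anonymous_page() does. */
        if (folio)
                clear_huge_page(&folio->page, vaddr, 1U << order);

        return folio;
}
#else
#define alloc_anon_folio(vma, vaddr, order) \
        vma_alloc_zeroed_movable_folio(vma, vaddr)
#endif

The #ifdef is the point: clear_huge_page() is only reachable when THP
is enabled, so this wouldn't need 01/10 at all.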
> Perhaps we can do away with an arch-owned vma_alloc_zeroed_movable_folio() and
> replace it with a new arch_get_zeroed_movable_gfp_flags(), and then have
> alloc_anon_folio() add in those flags?
>
> But I still think the cleanest, simplest change is just to unconditionally
> expose clear_huge_page(), as I've done here.
The fundamental choice there, as I see it, is whether the first step
of large anon folios should lean toward the THP code base or the base
page code base (I'm a big fan of the answer "Neither -- we should
create something entirely new instead"). My POV is that the THP code
base would allow us to move faster, since it's proven to work for a
very similar case (PMD_ORDER folios mapped by PTEs).