From: Barry Song <21cnbao@gmail.com>
To: Ryan Roberts <ryan.roberts@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>,
	 Yin Fengwei <fengwei.yin@intel.com>,
	David Hildenbrand <david@redhat.com>, Yu Zhao <yuzhao@google.com>,
	 Catalin Marinas <catalin.marinas@arm.com>,
	Anshuman Khandual <anshuman.khandual@arm.com>,
	 Yang Shi <shy828301@gmail.com>,
	"Huang, Ying" <ying.huang@intel.com>, Zi Yan <ziy@nvidia.com>,
	 Luis Chamberlain <mcgrof@kernel.org>,
	Itaru Kitayama <itaru.kitayama@gmail.com>,
	 "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	John Hubbard <jhubbard@nvidia.com>,
	 David Rientjes <rientjes@google.com>,
	Vlastimil Babka <vbabka@suse.cz>, Hugh Dickins <hughd@google.com>,
	 Kefeng Wang <wangkefeng.wang@huawei.com>,
	Alistair Popple <apopple@nvidia.com>,
	linux-mm@kvack.org,  linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v8 04/10] mm: thp: Support allocation of anonymous multi-size THP
Date: Wed, 6 Dec 2023 09:16:26 +1300	[thread overview]
Message-ID: <CAGsJ_4xHib66MP3-o9jpHGzKecmgb-omBXinazBbrCiwHkonEQ@mail.gmail.com> (raw)
In-Reply-To: <5216caaf-1fcf-4715-99c3-521e2a1cc756@arm.com>

On Tue, Dec 5, 2023 at 11:48 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> On 05/12/2023 01:24, Barry Song wrote:
> > On Tue, Dec 5, 2023 at 9:15 AM Barry Song <21cnbao@gmail.com> wrote:
> >>
> >> On Mon, Dec 4, 2023 at 6:21 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
> >>>
> >>> Introduce the logic to allow THP to be configured (through the new sysfs
> >>> interface we just added) to allocate large folios to back anonymous
> >>> memory, which are larger than the base page size but smaller than
> >>> PMD-size. We call this new THP extension "multi-size THP" (mTHP).
> >>>
> >>> mTHP continues to be PTE-mapped, but in many cases can still provide
> >>> similar benefits to traditional PMD-sized THP: Page faults are
> >>> significantly reduced (by a factor of e.g. 4, 8, 16, etc. depending on
> >>> the configured order), but latency spikes are much less prominent
> >>> because the size of each page isn't as huge as the PMD-sized variant and
> >>> there is less memory to clear in each page fault. The number of per-page
> >>> operations (e.g. ref counting, rmap management, lru list management) is
> >>> also significantly reduced since those ops now become per-folio.
> >>>
> >>> Some architectures also employ TLB compression mechanisms to squeeze
> >>> more entries in when a set of PTEs are virtually and physically
> >>> contiguous and appropriately aligned. In this case, TLB misses will
> >>> occur less often.
> >>>
> >>> The new behaviour is disabled by default, but can be enabled at runtime
> >>> by writing to /sys/kernel/mm/transparent_hugepage/hugepage-XXkb/enabled
> >>> (see documentation in previous commit). The long term aim is to change
> >>> the default to include suitable lower orders, but there are some risks
> >>> around internal fragmentation that need to be better understood first.
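
BTW, for anyone trying the series: per the documentation patch earlier in
this series, enabling one of the new sizes at runtime looks roughly like
the below. The exact per-size directory name comes from that doc, so treat
this as a sketch for a 4K-base-page kernel:

  echo always > /sys/kernel/mm/transparent_hugepage/hugepage-64kB/enabled
  cat /sys/kernel/mm/transparent_hugepage/hugepage-64kB/enabled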
> >>>
> >>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> >>> ---
> >>>  include/linux/huge_mm.h |   6 ++-
> >>>  mm/memory.c             | 106 ++++++++++++++++++++++++++++++++++++----
> >>>  2 files changed, 101 insertions(+), 11 deletions(-)
> >>>
> >>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> >>> index bd0eadd3befb..91a53b9835a4 100644
> >>> --- a/include/linux/huge_mm.h
> >>> +++ b/include/linux/huge_mm.h
> >>> @@ -68,9 +68,11 @@ extern struct kobj_attribute shmem_enabled_attr;
> >>>  #define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)
> >>>
> >>>  /*
> >>> - * Mask of all large folio orders supported for anonymous THP.
> >>> + * Mask of all large folio orders supported for anonymous THP; all orders up to
> >>> + * and including PMD_ORDER, except order-0 (which is not "huge") and order-1
> >>> + * (which is a limitation of the THP implementation).
> >>>   */
> >>> -#define THP_ORDERS_ALL_ANON    BIT(PMD_ORDER)
> >>> +#define THP_ORDERS_ALL_ANON    ((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1)))
> >>>
> >>>  /*
> >>>   * Mask of all large folio orders supported for file THP.
> >>> diff --git a/mm/memory.c b/mm/memory.c
> >>> index 3ceeb0f45bf5..bf7e93813018 100644
> >>> --- a/mm/memory.c
> >>> +++ b/mm/memory.c
> >>> @@ -4125,6 +4125,84 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >>>         return ret;
> >>>  }
> >>>
> >>> +static bool pte_range_none(pte_t *pte, int nr_pages)
> >>> +{
> >>> +       int i;
> >>> +
> >>> +       for (i = 0; i < nr_pages; i++) {
> >>> +               if (!pte_none(ptep_get_lockless(pte + i)))
> >>> +                       return false;
> >>> +       }
> >>> +
> >>> +       return true;
> >>> +}
> >>> +
> >>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> >>> +static struct folio *alloc_anon_folio(struct vm_fault *vmf)
> >>> +{
> >>> +       gfp_t gfp;
> >>> +       pte_t *pte;
> >>> +       unsigned long addr;
> >>> +       struct folio *folio;
> >>> +       struct vm_area_struct *vma = vmf->vma;
> >>> +       unsigned long orders;
> >>> +       int order;
> >>> +
> >>> +       /*
> >>> +        * If uffd is active for the vma we need per-page fault fidelity to
> >>> +        * maintain the uffd semantics.
> >>> +        */
> >>> +       if (userfaultfd_armed(vma))
> >>> +               goto fallback;
> >>> +
> >>> +       /*
> >>> +        * Get a list of all the (large) orders below PMD_ORDER that are enabled
> >>> +        * for this vma. Then filter out the orders that can't be allocated over
> >>> +        * the faulting address and still be fully contained in the vma.
> >>> +        */
> >>> +       orders = thp_vma_allowable_orders(vma, vma->vm_flags, false, true, true,
> >>> +                                         BIT(PMD_ORDER) - 1);
> >>> +       orders = thp_vma_suitable_orders(vma, vmf->address, orders);
> >>> +
> >>> +       if (!orders)
> >>> +               goto fallback;
> >>> +
> >>> +       pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK);
> >>> +       if (!pte)
> >>> +               return ERR_PTR(-EAGAIN);
> >>> +
> >>> +       order = first_order(orders);
> >>> +       while (orders) {
> >>> +               addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
> >>> +               vmf->pte = pte + pte_index(addr);
> >>> +               if (pte_range_none(vmf->pte, 1 << order))
> >>> +                       break;
> >>> +               order = next_order(&orders, order);
> >>> +       }
> >>> +
> >>> +       vmf->pte = NULL;
> >>> +       pte_unmap(pte);
> >>> +
> >>> +       gfp = vma_thp_gfp_mask(vma);
> >>> +
> >>> +       while (orders) {
> >>> +               addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
> >>> +               folio = vma_alloc_folio(gfp, order, vma, addr, true);
> >>> +               if (folio) {
> >>> +                       clear_huge_page(&folio->page, addr, 1 << order);
> >>
> >> Minor.
> >>
> >> Do we have to unconditionally clear the huge page here? Is it possible
> >> to let post_alloc_hook() finish this job by using
> >> __GFP_ZERO/__GFP_ZEROTAGS, as vma_alloc_zeroed_movable_folio() does?
>
> I'm currently following the same allocation pattern as is done for PMD-sized
> THP. In earlier versions of this patch I was trying to be smarter and use the
> __GFP_ZERO/__GFP_ZEROTAGS as you suggest, but I was advised to keep it simple
> and follow the existing pattern.
>
> I have a vague recollection that __GFP_ZERO is not preferred for large folios
> because of some issue with virtually indexed caches? (Matthew: did I see you
> mention that in some other context?)
>
> That said, I wasn't aware that Android ships with
> CONFIG_INIT_ON_ALLOC_DEFAULT_ON (I thought it was only used as a debug option),
> so I can see the potential for some overhead reduction here.
>
> Options:
>
>  1) Leave it as is and accept the duplicated clearing.
>  2) Pass __GFP_ZERO and remove clear_huge_page().
>  3) Define __GFP_SKIP_ZERO even when kasan is not enabled and pass it down so
>     clear_huge_page() is the only clear.
>  4) Make clear_huge_page() conditional on !want_init_on_alloc().
>
> I prefer option 4. What do you think?

Either 1 or 4 is OK with me if we will eventually remove this duplicated
clear_huge_page() on top; 4 is even better as it at least temporarily
resolves the problem.
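
For the record, a minimal sketch of what I'd expect option 4 to look like
in the second loop of alloc_anon_folio() (just my guess at the shape,
assuming want_init_on_alloc() is the right predicate; untested):

	folio = vma_alloc_folio(gfp, order, vma, addr, true);
	if (folio) {
		/*
		 * post_alloc_hook() has already zeroed the folio when
		 * init_on_alloc (or __GFP_ZERO) is in effect, so only
		 * clear explicitly otherwise.
		 */
		if (!want_init_on_alloc(gfp))
			clear_huge_page(&folio->page, addr, 1 << order);
		...
	}

That way kernels with CONFIG_INIT_ON_ALLOC_DEFAULT_ON=y, like Android's,
never pay for the clear twice.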

In the Android gki_defconfig,
https://android.googlesource.com/kernel/common/+/refs/heads/android14-6.1-lts/arch/arm64/configs/gki_defconfig

Android always sets the below:
CONFIG_INIT_ON_ALLOC_DEFAULT_ON=y

Here is some explanation of the reasoning:
https://source.android.com/docs/security/test/memory-safety/zero-initialized-memory
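
Note that CONFIG_INIT_ON_ALLOC_DEFAULT_ON only sets the default; the
documented init_on_alloc= kernel parameter can flip it at boot, which is
handy for measuring the cost of the duplicated clearing, e.g.:

  init_on_alloc=1    # zero all page/slab allocations (Android's default)
  init_on_alloc=0    # disable, to compare page-fault latency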

>
> As an aside, I've also noticed that clear_huge_page() should take vmf->address
> so that it clears the faulting page last to keep the cache hot. If we decide on
> an option that keeps clear_huge_page(), I'll also make that change.
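
On the aside: agreed. Passing vmf->address as the addr_hint lets
clear_huge_page()'s existing "clear the faulting subpage last" logic keep
the hot cache lines hot, i.e. (a one-line sketch against the hunk above):

	clear_huge_page(&folio->page, vmf->address, 1 << order);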
>
> Thanks,
> Ryan
>
> >>

Thanks
Barry


