From: Barry Song <21cnbao@gmail.com>
To: Ryan Roberts <ryan.roberts@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>,
	 Yin Fengwei <fengwei.yin@intel.com>,
	David Hildenbrand <david@redhat.com>, Yu Zhao <yuzhao@google.com>,
	 Catalin Marinas <catalin.marinas@arm.com>,
	Anshuman Khandual <anshuman.khandual@arm.com>,
	 Yang Shi <shy828301@gmail.com>,
	"Huang, Ying" <ying.huang@intel.com>, Zi Yan <ziy@nvidia.com>,
	 Luis Chamberlain <mcgrof@kernel.org>,
	Itaru Kitayama <itaru.kitayama@gmail.com>,
	 "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	John Hubbard <jhubbard@nvidia.com>,
	 David Rientjes <rientjes@google.com>,
	Vlastimil Babka <vbabka@suse.cz>, Hugh Dickins <hughd@google.com>,
	 Kefeng Wang <wangkefeng.wang@huawei.com>,
	Alistair Popple <apopple@nvidia.com>,
	linux-mm@kvack.org,  linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v8 04/10] mm: thp: Support allocation of anonymous multi-size THP
Date: Tue, 5 Dec 2023 09:24:26 +0800
Message-ID: <CAGsJ_4zYhJWGx1DnHTiDnP3h1m8_rr6ZT6fXt8pO=jzs9QZS-A@mail.gmail.com>
In-Reply-To: <CAGsJ_4zG6W_Z-u+3QcRDn4ByoeqUXjMusNS0RotfRMSqo8RCHg@mail.gmail.com>

On Tue, Dec 5, 2023 at 9:15 AM Barry Song <21cnbao@gmail.com> wrote:
>
> On Mon, Dec 4, 2023 at 6:21 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
> >
> > Introduce the logic to allow THP to be configured (through the new sysfs
> > interface we just added) to allocate large folios to back anonymous
> > memory, which are larger than the base page size but smaller than
> > PMD-size. We call this new THP extension "multi-size THP" (mTHP).
> >
> > mTHP continues to be PTE-mapped, but in many cases can still provide
> > similar benefits to traditional PMD-sized THP: Page faults are
> > significantly reduced (by a factor of e.g. 4, 8, 16, etc. depending on
> > the configured order), but latency spikes are much less prominent
> > because the size of each page isn't as huge as the PMD-sized variant and
> > there is less memory to clear in each page fault. The number of per-page
> > operations (e.g. ref counting, rmap management, lru list management) is
> > also significantly reduced since those ops now become per-folio.
> >
> > Some architectures also employ TLB compression mechanisms to squeeze
> > more entries in when a set of PTEs are virtually and physically
> > contiguous and appropriately aligned. In this case, TLB misses will
> > occur less often.
> >
> > The new behaviour is disabled by default, but can be enabled at runtime
> > by writing to /sys/kernel/mm/transparent_hugepage/hugepage-XXkb/enabled
> > (see documentation in previous commit). The long term aim is to change
> > the default to include suitable lower orders, but there are some risks
> > around internal fragmentation that need to be better understood first.
> >
> > Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> > ---
> >  include/linux/huge_mm.h |   6 ++-
> >  mm/memory.c             | 106 ++++++++++++++++++++++++++++++++++++----
> >  2 files changed, 101 insertions(+), 11 deletions(-)
> >
> > diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> > index bd0eadd3befb..91a53b9835a4 100644
> > --- a/include/linux/huge_mm.h
> > +++ b/include/linux/huge_mm.h
> > @@ -68,9 +68,11 @@ extern struct kobj_attribute shmem_enabled_attr;
> >  #define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)
> >
> >  /*
> > - * Mask of all large folio orders supported for anonymous THP.
> > + * Mask of all large folio orders supported for anonymous THP; all orders up to
> > + * and including PMD_ORDER, except order-0 (which is not "huge") and order-1
> > + * (which is a limitation of the THP implementation).
> >   */
> > -#define THP_ORDERS_ALL_ANON    BIT(PMD_ORDER)
> > +#define THP_ORDERS_ALL_ANON    ((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1)))
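
(As an aside, if I expand the new mask by hand: with PMD_ORDER == 9,
i.e. 4K base pages, it comes to (BIT(10) - 1) & ~(BIT(0) | BIT(1)) ==
0x3fc, i.e. orders 2..9 inclusive - which matches the comment above.)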
> >
> >  /*
> >   * Mask of all large folio orders supported for file THP.
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 3ceeb0f45bf5..bf7e93813018 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -4125,6 +4125,84 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >         return ret;
> >  }
> >
> > +static bool pte_range_none(pte_t *pte, int nr_pages)
> > +{
> > +       int i;
> > +
> > +       for (i = 0; i < nr_pages; i++) {
> > +               if (!pte_none(ptep_get_lockless(pte + i)))
> > +                       return false;
> > +       }
> > +
> > +       return true;
> > +}
> > +
> > +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> > +static struct folio *alloc_anon_folio(struct vm_fault *vmf)
> > +{
> > +       gfp_t gfp;
> > +       pte_t *pte;
> > +       unsigned long addr;
> > +       struct folio *folio;
> > +       struct vm_area_struct *vma = vmf->vma;
> > +       unsigned long orders;
> > +       int order;
> > +
> > +       /*
> > +        * If uffd is active for the vma we need per-page fault fidelity to
> > +        * maintain the uffd semantics.
> > +        */
> > +       if (userfaultfd_armed(vma))
> > +               goto fallback;
> > +
> > +       /*
> > +        * Get a list of all the (large) orders below PMD_ORDER that are enabled
> > +        * for this vma. Then filter out the orders that can't be allocated over
> > +        * the faulting address and still be fully contained in the vma.
> > +        */
> > +       orders = thp_vma_allowable_orders(vma, vma->vm_flags, false, true, true,
> > +                                         BIT(PMD_ORDER) - 1);
> > +       orders = thp_vma_suitable_orders(vma, vmf->address, orders);
> > +
> > +       if (!orders)
> > +               goto fallback;
> > +
> > +       pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK);
> > +       if (!pte)
> > +               return ERR_PTR(-EAGAIN);
> > +
> > +       order = first_order(orders);
> > +       while (orders) {
> > +               addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
> > +               vmf->pte = pte + pte_index(addr);
> > +               if (pte_range_none(vmf->pte, 1 << order))
> > +                       break;
> > +               order = next_order(&orders, order);
> > +       }
> > +
> > +       vmf->pte = NULL;
> > +       pte_unmap(pte);
> > +
> > +       gfp = vma_thp_gfp_mask(vma);
> > +
> > +       while (orders) {
> > +               addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
> > +               folio = vma_alloc_folio(gfp, order, vma, addr, true);
> > +               if (folio) {
> > +                       clear_huge_page(&folio->page, addr, 1 << order);
>
> Minor.
>
> Do we have to explicitly clear the huge page every time? Is it
> possible to let post_alloc_hook() finish this job by using
> __GFP_ZERO/__GFP_ZEROTAGS, as vma_alloc_zeroed_movable_folio() does?
>
> struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
>                                                 unsigned long vaddr)
> {
>         gfp_t flags = GFP_HIGHUSER_MOVABLE | __GFP_ZERO;
>
>         /*
>          * If the page is mapped with PROT_MTE, initialise the tags at the
>          * point of allocation and page zeroing as this is usually faster than
>          * separate DC ZVA and STGM.
>          */
>         if (vma->vm_flags & VM_MTE)
>                 flags |= __GFP_ZEROTAGS;
>
>         return vma_alloc_folio(flags, 0, vma, vaddr, false);
> }

I am asking because Android and some other kernels might always set
CONFIG_INIT_ON_ALLOC_DEFAULT_ON, in which case the explicit
clear_huge_page() above ends up doing the same job twice.

Whenever the below returns true, post_alloc_hook() has already cleared
the huge page before vma_alloc_folio() returns the folio:

static inline bool want_init_on_alloc(gfp_t flags)
{
        if (static_branch_maybe(CONFIG_INIT_ON_ALLOC_DEFAULT_ON,
                                &init_on_alloc))
                return true;
        return flags & __GFP_ZERO;
}
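
Something like the below is what I have in mind - only a sketch, not
compile-tested, and assuming vma_thp_gfp_mask() composes safely with
__GFP_ZERO/__GFP_ZEROTAGS (the VM_MTE handling just mirrors what
vma_alloc_zeroed_movable_folio() does on arm64, quoted above):

	gfp = vma_thp_gfp_mask(vma) | __GFP_ZERO;
	if (vma->vm_flags & VM_MTE)
		gfp |= __GFP_ZEROTAGS;

	while (orders) {
		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
		/* post_alloc_hook() zeroes the folio when __GFP_ZERO is set */
		folio = vma_alloc_folio(gfp, order, vma, addr, true);
		if (folio)
			return folio;	/* no clear_huge_page() needed */
		order = next_order(&orders, order);
	}

That way the folio is cleared exactly once, whether or not
init_on_alloc is enabled.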


>
> > +                       return folio;
> > +               }
> > +               order = next_order(&orders, order);
> > +       }
> > +
> > +fallback:
> > +       return vma_alloc_zeroed_movable_folio(vma, vmf->address);
> > +}
> > +#else
> > +#define alloc_anon_folio(vmf) \
> > +               vma_alloc_zeroed_movable_folio((vmf)->vma, (vmf)->address)
> > +#endif
> > +
> >  /*
> >   * We enter with non-exclusive mmap_lock (to exclude vma changes,
> >   * but allow concurrent faults), and pte mapped but not yet locked.
> > @@ -4132,6 +4210,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >   */
> >  static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
> >  {
> > +       int i;
> > +       int nr_pages = 1;
> > +       unsigned long addr = vmf->address;
> >         bool uffd_wp = vmf_orig_pte_uffd_wp(vmf);
> >         struct vm_area_struct *vma = vmf->vma;
> >         struct folio *folio;
> > @@ -4176,10 +4257,15 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
> >         /* Allocate our own private page. */
> >         if (unlikely(anon_vma_prepare(vma)))
> >                 goto oom;
> > -       folio = vma_alloc_zeroed_movable_folio(vma, vmf->address);
> > +       folio = alloc_anon_folio(vmf);
> > +       if (IS_ERR(folio))
> > +               return 0;
> >         if (!folio)
> >                 goto oom;
> >
> > +       nr_pages = folio_nr_pages(folio);
> > +       addr = ALIGN_DOWN(vmf->address, nr_pages * PAGE_SIZE);
> > +
> >         if (mem_cgroup_charge(folio, vma->vm_mm, GFP_KERNEL))
> >                 goto oom_free_page;
> >         folio_throttle_swaprate(folio, GFP_KERNEL);
> > @@ -4196,12 +4282,13 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
> >         if (vma->vm_flags & VM_WRITE)
> >                 entry = pte_mkwrite(pte_mkdirty(entry), vma);
> >
> > -       vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
> > -                       &vmf->ptl);
> > +       vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, addr, &vmf->ptl);
> >         if (!vmf->pte)
> >                 goto release;
> > -       if (vmf_pte_changed(vmf)) {
> > -               update_mmu_tlb(vma, vmf->address, vmf->pte);
> > +       if ((nr_pages == 1 && vmf_pte_changed(vmf)) ||
> > +           (nr_pages  > 1 && !pte_range_none(vmf->pte, nr_pages))) {
> > +               for (i = 0; i < nr_pages; i++)
> > +                       update_mmu_tlb(vma, addr + PAGE_SIZE * i, vmf->pte + i);
> >                 goto release;
> >         }
> >
> > @@ -4216,16 +4303,17 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
> >                 return handle_userfault(vmf, VM_UFFD_MISSING);
> >         }
> >
> > -       inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
> > -       folio_add_new_anon_rmap(folio, vma, vmf->address);
> > +       folio_ref_add(folio, nr_pages - 1);
> > +       add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
> > +       folio_add_new_anon_rmap(folio, vma, addr);
> >         folio_add_lru_vma(folio, vma);
> >  setpte:
> >         if (uffd_wp)
> >                 entry = pte_mkuffd_wp(entry);
> > -       set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);
> > +       set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr_pages);
> >
> >         /* No need to invalidate - it was non-present before */
> > -       update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
> > +       update_mmu_cache_range(vmf, vma, addr, vmf->pte, nr_pages);
> >  unlock:
> >         if (vmf->pte)
> >                 pte_unmap_unlock(vmf->pte, vmf->ptl);
> > --
> > 2.25.1
> >


Thread overview: 60+ messages
2023-12-04 10:20 [PATCH v8 00/10] Multi-size THP for anonymous memory Ryan Roberts
2023-12-04 10:20 ` [PATCH v8 01/10] mm: Allow deferred splitting of arbitrary anon large folios Ryan Roberts
2023-12-04 10:20 ` [PATCH v8 02/10] mm: Non-pmd-mappable, large folios for folio_add_new_anon_rmap() Ryan Roberts
2023-12-05  0:58   ` Barry Song
2023-12-04 10:20 ` [PATCH v8 03/10] mm: thp: Introduce multi-size THP sysfs interface Ryan Roberts
2023-12-05  4:21   ` Barry Song
2023-12-05  9:50     ` Ryan Roberts
2023-12-05  9:57       ` David Hildenbrand
2023-12-05 10:50         ` Ryan Roberts
2023-12-05 16:57   ` David Hildenbrand
2023-12-06 13:18     ` Ryan Roberts
2023-12-07 10:56       ` Ryan Roberts
2023-12-07 11:13       ` David Hildenbrand
2023-12-07 11:22         ` Ryan Roberts
2023-12-07 11:25           ` David Hildenbrand
2023-12-07 11:44             ` Ryan Roberts
2023-12-04 10:20 ` [PATCH v8 04/10] mm: thp: Support allocation of anonymous multi-size THP Ryan Roberts
2023-12-05  1:15   ` Barry Song
2023-12-05  1:24     ` Barry Song [this message]
2023-12-05 10:48       ` Ryan Roberts
2023-12-05 11:16         ` David Hildenbrand
2023-12-05 20:16         ` Barry Song
2023-12-06 10:15           ` Ryan Roberts
2023-12-06 10:25             ` Barry Song
2023-12-05 16:32   ` David Hildenbrand
2023-12-05 16:35     ` David Hildenbrand
2023-12-06 14:19     ` Ryan Roberts
2023-12-06 15:44       ` Ryan Roberts
2023-12-07 10:37         ` Ryan Roberts
2023-12-07 10:40           ` David Hildenbrand
2023-12-07 11:08       ` David Hildenbrand
2023-12-07 12:08         ` Ryan Roberts
2023-12-07 13:28           ` David Hildenbrand
2023-12-07 14:45             ` Ryan Roberts
2023-12-07 15:01               ` David Hildenbrand
2023-12-07 15:12                 ` Ryan Roberts
2023-12-04 10:20 ` [PATCH v8 05/10] selftests/mm/khugepaged: Restore thp settings at exit Ryan Roberts
2023-12-05 17:00   ` David Hildenbrand
2023-12-04 10:20 ` [PATCH v8 06/10] selftests/mm: Factor out thp settings management Ryan Roberts
2023-12-05 17:03   ` David Hildenbrand
2023-12-04 10:20 ` [PATCH v8 07/10] selftests/mm: Support multi-size THP interface in thp_settings Ryan Roberts
2023-12-04 10:20 ` [PATCH v8 08/10] selftests/mm/khugepaged: Enlighten for multi-size THP Ryan Roberts
2023-12-04 10:20 ` [PATCH v8 09/10] selftests/mm/cow: Generalize do_run_with_thp() helper Ryan Roberts
2023-12-05  9:59   ` David Hildenbrand
2023-12-04 10:20 ` [PATCH v8 10/10] selftests/mm/cow: Add tests for anonymous multi-size THP Ryan Roberts
2023-12-05 16:00   ` David Hildenbrand
2023-12-04 19:30 ` [PATCH v8 00/10] Multi-size THP for anonymous memory Andrew Morton
2023-12-05  9:34   ` Ryan Roberts
2023-12-05  3:28 ` Barry Song
2023-12-05 11:05   ` Ryan Roberts
2023-12-05  3:37 ` John Hubbard
2023-12-05 11:13   ` Ryan Roberts
2023-12-05 18:58     ` John Hubbard
2023-12-05 14:19 ` Kefeng Wang
2023-12-06 10:08   ` Ryan Roberts
2023-12-07 15:50     ` Kefeng Wang
2023-12-05 17:21 ` David Hildenbrand
2023-12-06 10:13   ` Ryan Roberts
2023-12-06 10:22     ` David Hildenbrand
2023-12-06 14:22       ` Ryan Roberts
