From: Yosry Ahmed <yosryahmed@google.com>
To: alexei.starovoitov@gmail.com
Cc: bpf@vger.kernel.org, andrii@kernel.org, memxor@gmail.com,
	 akpm@linux-foundation.org, peterz@infradead.org, vbabka@suse.cz,
	 bigeasy@linutronix.de, rostedt@goodmis.org, houtao1@huawei.com,
	 hannes@cmpxchg.org, shakeel.butt@linux.dev, mhocko@suse.com,
	 willy@infradead.org, tglx@linutronix.de, jannh@google.com,
	tj@kernel.org,  linux-mm@kvack.org, kernel-team@fb.com
Subject: Re: [PATCH bpf-next v3 2/6] mm, bpf: Introduce free_pages_nolock()
Date: Tue, 17 Dec 2024 20:58:43 -0800
Message-ID: <CAJD7tkYOfBepXDeUFj6mM1evRoDdaS_THwmhp9a4pHeM4bgsFA@mail.gmail.com>
In-Reply-To: <20241218030720.1602449-3-alexei.starovoitov@gmail.com>

On Tue, Dec 17, 2024 at 7:07 PM <alexei.starovoitov@gmail.com> wrote:
>
> From: Alexei Starovoitov <ast@kernel.org>
>
> Introduce free_pages_nolock() that can free pages without taking locks.
> It relies on trylock and can be called from any context.
> Since spin_trylock() cannot be used from hard IRQ or NMI context on RT,
> it uses a lockless linked list to stash the pages, which are then freed
> by a subsequent free_pages() call from a safe context.
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> ---
>  include/linux/gfp.h      |  1 +
>  include/linux/mm_types.h |  4 ++
>  include/linux/mmzone.h   |  3 ++
>  mm/page_alloc.c          | 79 ++++++++++++++++++++++++++++++++++++----
>  4 files changed, 79 insertions(+), 8 deletions(-)
>
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index 65b8df1db26a..ff9060af6295 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -372,6 +372,7 @@ __meminit void *alloc_pages_exact_nid_noprof(int nid, size_t size, gfp_t gfp_mas
>         __get_free_pages((gfp_mask) | GFP_DMA, (order))
>
>  extern void __free_pages(struct page *page, unsigned int order);
> +extern void free_pages_nolock(struct page *page, unsigned int order);
>  extern void free_pages(unsigned long addr, unsigned int order);
>
>  #define __free_page(page) __free_pages((page), 0)
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 7361a8f3ab68..52547b3e5fd8 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -99,6 +99,10 @@ struct page {
>                                 /* Or, free page */
>                                 struct list_head buddy_list;
>                                 struct list_head pcp_list;
> +                               struct {
> +                                       struct llist_node pcp_llist;
> +                                       unsigned int order;
> +                               };
>                         };
>                         /* See page-flags.h for PAGE_MAPPING_FLAGS */
>                         struct address_space *mapping;
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index b36124145a16..1a854e0a9e3b 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -953,6 +953,9 @@ struct zone {
>         /* Primarily protects free_area */
>         spinlock_t              lock;
>
> +       /* Pages to be freed when next trylock succeeds */
> +       struct llist_head       trylock_free_pages;
> +
>         /* Write-intensive fields used by compaction and vmstats. */
>         CACHELINE_PADDING(_pad2_);
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index d23545057b6e..10918bfc6734 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -88,6 +88,9 @@ typedef int __bitwise fpi_t;
>   */
>  #define FPI_TO_TAIL            ((__force fpi_t)BIT(1))
>
> +/* Free the page without taking locks. Rely on trylock only. */
> +#define FPI_TRYLOCK            ((__force fpi_t)BIT(2))
> +

The comment above the definition of fpi_t says that these flags are for
non-pcp variants of free_pages(), so that comment needs to be updated by
this patch.

More importantly, I think the comment states this mainly because the
existing flags won't be properly handled when freeing pages to the
pcplist. The flags will be lost once the pages are added to the
pcplist, and won't be propagated when the pages are eventually freed
to the buddy allocator (e.g. through free_pcppages_bulk()).

So I think we need to at least explicitly check which flags are allowed
when freeing pages to the pcplists, or something along those lines; a
rough sketch of what I mean follows.
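
For illustration, a rough sketch of that idea; FPI_PCP_ALLOWED and the
placement of the check are made up, not something in this patch:

/*
 * Hypothetical: fpi_flags are not stored with the page once it sits
 * on a pcplist, so only flags that are safe to drop may be passed
 * down this path. FPI_TRYLOCK qualifies because it only affects the
 * current call.
 */
#define FPI_PCP_ALLOWED		(FPI_TRYLOCK)

static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
				   struct page *page, int migratetype,
				   unsigned int order, fpi_t fpi_flags)
{
	/* Anything else would be silently lost before the page reaches
	 * the buddy allocator via free_pcppages_bulk(). */
	VM_WARN_ON_ONCE(fpi_flags & ~FPI_PCP_ALLOWED);
	...
}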

>  /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
>  static DEFINE_MUTEX(pcp_batch_high_lock);
>  #define MIN_PERCPU_PAGELIST_HIGH_FRACTION (8)
> @@ -1247,13 +1250,44 @@ static void split_large_buddy(struct zone *zone, struct page *page,
>         }
>  }
>
> +static void add_page_to_zone_llist(struct zone *zone, struct page *page,
> +                                  unsigned int order)
> +{
> +       /* Remember the order */
> +       page->order = order;
> +       /* Add the page to the free list */
> +       llist_add(&page->pcp_llist, &zone->trylock_free_pages);
> +}
> +
>  static void free_one_page(struct zone *zone, struct page *page,
>                           unsigned long pfn, unsigned int order,
>                           fpi_t fpi_flags)
>  {
> +       struct llist_head *llhead;
>         unsigned long flags;
>
> -       spin_lock_irqsave(&zone->lock, flags);
> +       if (!spin_trylock_irqsave(&zone->lock, flags)) {
> +               if (unlikely(fpi_flags & FPI_TRYLOCK)) {
> +                       add_page_to_zone_llist(zone, page, order);
> +                       return;
> +               }
> +               spin_lock_irqsave(&zone->lock, flags);
> +       }
> +
> +       /* The lock succeeded. Process deferred pages. */
> +       llhead = &zone->trylock_free_pages;
> +       if (unlikely(!llist_empty(llhead) && !(fpi_flags & FPI_TRYLOCK))) {
> +               struct llist_node *llnode;
> +               struct page *p, *tmp;
> +
> +               llnode = llist_del_all(llhead);
> +               llist_for_each_entry_safe(p, tmp, llnode, pcp_llist) {
> +                       unsigned int p_order = p->order;
> +
> +                       split_large_buddy(zone, p, page_to_pfn(p), p_order, fpi_flags);
> +                       __count_vm_events(PGFREE, 1 << p_order);
> +               }
> +       }
>         split_large_buddy(zone, page, pfn, order, fpi_flags);
>         spin_unlock_irqrestore(&zone->lock, flags);
>
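
As an aside for readers following along: the stash is safe from any
context, including NMI, because llist_add() boils down to a single
lock-free cmpxchg loop, roughly like this (simplified from
include/linux/llist.h, where the real work is in llist_add_batch()):

static inline bool llist_add_sketch(struct llist_node *new,
				    struct llist_head *head)
{
	struct llist_node *first = READ_ONCE(head->first);

	do {
		new->next = first;
	} while (!try_cmpxchg(&head->first, &first, new));

	return !first;	/* true if the list was previously empty */
}

This is also why the order has to be stashed in page->order before
llist_add(): an llist_node carries no payload of its own.
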
> @@ -2596,7 +2630,7 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
>
>  static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
>                                    struct page *page, int migratetype,
> -                                  unsigned int order)
> +                                  unsigned int order, fpi_t fpi_flags)
>  {
>         int high, batch;
>         int pindex;
> @@ -2631,6 +2665,14 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
>         }
>         if (pcp->free_count < (batch << CONFIG_PCP_BATCH_SCALE_MAX))
>                 pcp->free_count += (1 << order);
> +
> +       if (unlikely(fpi_flags & FPI_TRYLOCK)) {
> +               /*
> +                * Do not attempt to take a zone lock. Let pcp->count get
> +                * over high mark temporarily.
> +                */
> +               return;
> +       }
>         high = nr_pcp_high(pcp, zone, batch, free_high);
>         if (pcp->count >= high) {
>                 free_pcppages_bulk(zone, nr_pcp_free(pcp, batch, high, free_high),
> @@ -2645,7 +2687,8 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
>  /*
>   * Free a pcp page
>   */
> -void free_unref_page(struct page *page, unsigned int order)
> +static void __free_unref_page(struct page *page, unsigned int order,
> +                             fpi_t fpi_flags)
>  {
>         unsigned long __maybe_unused UP_flags;
>         struct per_cpu_pages *pcp;
> @@ -2654,7 +2697,7 @@ void free_unref_page(struct page *page, unsigned int order)
>         int migratetype;
>
>         if (!pcp_allowed_order(order)) {
> -               __free_pages_ok(page, order, FPI_NONE);
> +               __free_pages_ok(page, order, fpi_flags);
>                 return;
>         }
>
> @@ -2671,24 +2714,33 @@ void free_unref_page(struct page *page, unsigned int order)
>         migratetype = get_pfnblock_migratetype(page, pfn);
>         if (unlikely(migratetype >= MIGRATE_PCPTYPES)) {
>                 if (unlikely(is_migrate_isolate(migratetype))) {
> -                       free_one_page(page_zone(page), page, pfn, order, FPI_NONE);
> +                       free_one_page(page_zone(page), page, pfn, order, fpi_flags);
>                         return;
>                 }
>                 migratetype = MIGRATE_MOVABLE;
>         }
>
>         zone = page_zone(page);
> +       if (IS_ENABLED(CONFIG_PREEMPT_RT) && (in_nmi() || in_hardirq())) {
> +               add_page_to_zone_llist(zone, page, order);
> +               return;
> +       }
>         pcp_trylock_prepare(UP_flags);
>         pcp = pcp_spin_trylock(zone->per_cpu_pageset);
>         if (pcp) {
> -               free_unref_page_commit(zone, pcp, page, migratetype, order);
> +               free_unref_page_commit(zone, pcp, page, migratetype, order, fpi_flags);
>                 pcp_spin_unlock(pcp);
>         } else {
> -               free_one_page(zone, page, pfn, order, FPI_NONE);
> +               free_one_page(zone, page, pfn, order, fpi_flags);
>         }
>         pcp_trylock_finish(UP_flags);
>  }
>
> +void free_unref_page(struct page *page, unsigned int order)
> +{
> +       __free_unref_page(page, order, FPI_NONE);
> +}
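
To summarize the control flow for FPI_TRYLOCK callers as I read the
hunks above (a sketch, not text from the patch):

/*
 * __free_unref_page(page, order, FPI_TRYLOCK):
 *
 *   !pcp_allowed_order(order)
 *       -> __free_pages_ok(), which lands in free_one_page() below
 *   PREEMPT_RT && (in_nmi() || in_hardirq())
 *       -> add_page_to_zone_llist()     defer: pcp locks are sleeping
 *                                       locks on RT
 *   pcp_spin_trylock() succeeds
 *       -> free_unref_page_commit()     pcplist; high-watermark
 *                                       trimming is skipped
 *   pcp_spin_trylock() fails
 *       -> free_one_page()
 *            zone->lock trylock ok      -> buddy free (the deferred
 *                                          list is drained only by
 *                                          non-trylock callers)
 *            zone->lock trylock fails   -> add_page_to_zone_llist()
 */
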
> +
>  /*
>   * Free a batch of folios
>   */
> @@ -2777,7 +2829,7 @@ void free_unref_folios(struct folio_batch *folios)
>
>                 trace_mm_page_free_batched(&folio->page);
>                 free_unref_page_commit(zone, pcp, &folio->page, migratetype,
> -                               order);
> +                                      order, FPI_NONE);
>         }
>
>         if (pcp) {
> @@ -4854,6 +4906,17 @@ void __free_pages(struct page *page, unsigned int order)
>  }
>  EXPORT_SYMBOL(__free_pages);
>
> +/*
> + * Can be called while holding raw_spin_lock or from IRQ and NMI,
> + * but only for pages that came from try_alloc_pages():
> + * order <= 3, !folio, etc
> + */
> +void free_pages_nolock(struct page *page, unsigned int order)
> +{
> +       if (put_page_testzero(page))
> +               __free_unref_page(page, order, FPI_TRYLOCK);
> +}
> +
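
To make the comment above concrete, a hypothetical caller;
try_alloc_pages() is introduced in patch 1 of this series and I am
sketching its use from memory, so treat the details as illustrative:

/*
 * Both the allocation and the free are trylock-based, so this pair is
 * usable even from NMI, e.g. in a BPF tracing program.
 */
static struct page *grab_buffer(void)
{
	/* May fail instead of blocking; order must stay <= 3. */
	return try_alloc_pages(NUMA_NO_NODE, 2);
}

static void drop_buffer(struct page *page)
{
	/* Never spins: frees via a pcp trylock, or defers the page to
	 * zone->trylock_free_pages for a later regular free. */
	free_pages_nolock(page, 2);
}
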
>  void free_pages(unsigned long addr, unsigned int order)
>  {
>         if (addr != 0) {
> --
> 2.43.5
>
>


