From: David Hildenbrand <david@redhat.com>
To: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org, William Kucharski <william.kucharski@oracle.com>
Subject: Re: [PATCH v3 14/15] mm/mempolicy: Add alloc_frozen_pages()
Date: Fri, 29 Nov 2024 15:44:17 +0100
Message-ID: <ad26d88f-365a-4b1e-aa2f-e7e807d39c42@redhat.com>
In-Reply-To: <20241125210149.2976098-15-willy@infradead.org>

On 25.11.24 22:01, Matthew Wilcox (Oracle) wrote:
> Provide an interface to allocate pages from the page allocator without
> incrementing their refcount.  This saves an atomic operation on free,
> which may be beneficial to some users (eg slab).
> 
> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>   mm/internal.h  | 12 ++++++++++++
>   mm/mempolicy.c | 49 ++++++++++++++++++++++++++++++++-----------------
>   2 files changed, 44 insertions(+), 17 deletions(-)
> 
> diff --git a/mm/internal.h b/mm/internal.h
> index 55e03f8f41d9..74713b44bedb 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -747,6 +747,18 @@ struct page *__alloc_frozen_pages_noprof(gfp_t, unsigned int order, int nid,
>   void free_frozen_pages(struct page *page, unsigned int order);
>   void free_unref_folios(struct folio_batch *fbatch);
>   
> +#ifdef CONFIG_NUMA
> +struct page *alloc_frozen_pages_noprof(gfp_t, unsigned int order);
> +#else
> +static inline struct page *alloc_frozen_pages_noprof(gfp_t gfp, unsigned int order)
> +{
> +	return __alloc_frozen_pages_noprof(gfp, order, numa_node_id(), NULL);
> +}
> +#endif
> +
> +#define alloc_frozen_pages(...) \
> +	alloc_hooks(alloc_frozen_pages_noprof(__VA_ARGS__))
> +
>   extern void zone_pcp_reset(struct zone *zone);
>   extern void zone_pcp_disable(struct zone *zone);
>   extern void zone_pcp_enable(struct zone *zone);
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index cda5f56085e6..3682184993dd 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -2201,9 +2201,9 @@ static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order,
>   	 */
>   	preferred_gfp = gfp | __GFP_NOWARN;
>   	preferred_gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
> -	page = __alloc_pages_noprof(preferred_gfp, order, nid, nodemask);
> +	page = __alloc_frozen_pages_noprof(preferred_gfp, order, nid, nodemask);
>   	if (!page)
> -		page = __alloc_pages_noprof(gfp, order, nid, NULL);
> +		page = __alloc_frozen_pages_noprof(gfp, order, nid, NULL);
>   
>   	return page;
>   }
> @@ -2249,8 +2249,9 @@ static struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
>   			 * First, try to allocate THP only on local node, but
>   			 * don't reclaim unnecessarily, just compact.
>   			 */
> -			page = __alloc_pages_node_noprof(nid,
> -				gfp | __GFP_THISNODE | __GFP_NORETRY, order);
> +			page = __alloc_frozen_pages_noprof(
> +				gfp | __GFP_THISNODE | __GFP_NORETRY, order,
> +				nid, NULL);
>   			if (page || !(gfp & __GFP_DIRECT_RECLAIM))
>   				return page;
>   			/*
> @@ -2262,7 +2263,7 @@ static struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
>   		}
>   	}
>   
> -	page = __alloc_pages_noprof(gfp, order, nid, nodemask);
> +	page = __alloc_frozen_pages_noprof(gfp, order, nid, nodemask);
>   
>   	if (unlikely(pol->mode == MPOL_INTERLEAVE) && page) {
>   		/* skip NUMA_INTERLEAVE_HIT update if numa stats is disabled */
> @@ -2280,8 +2281,13 @@ static struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
>   struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int order,
>   		struct mempolicy *pol, pgoff_t ilx, int nid)
>   {
> -	return page_rmappable_folio(alloc_pages_mpol(gfp | __GFP_COMP,
> -							order, pol, ilx, nid));
> +	struct page *page = alloc_pages_mpol(gfp | __GFP_COMP, order, pol,
> +			ilx, nid);
> +	if (!page)
> +		return NULL;
> +
> +	set_page_refcounted(page);
> +	return page_rmappable_folio(page);

What I don't quite like is that we now have a bit of an inconsistency 
that makes it harder to understand which functions give you frozen 
pages and which don't; a caller-side sketch follows the list below.

alloc_pages(): non-frozen/refcounted
alloc_frozen_pages(): frozen
folio_alloc_mpol_noprof(): non-frozen/refcounted
alloc_pages_mpol(): ... frozen pages?
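
To spell out the difference on the caller side (just a sketch, error 
handling trimmed; the frozen variants are mm-internal per this series):

	/* Refcounted, as today: the page comes back with refcount 1. */
	page = alloc_pages(GFP_KERNEL, 0);
	if (page) {
		/* ... use the page ... */
		__free_pages(page, 0);	/* atomic refcount drop on free */
	}

	/* Frozen, with this series (mm-internal): refcount stays at 0. */
	page = alloc_frozen_pages(GFP_KERNEL, 0);
	if (page) {
		/* ... use the page ... */
		free_frozen_pages(page, 0);	/* no atomic op needed */
	}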


Ideally, it would be "alloc_pages": non-frozen, "alloc_frozen_pages": 
frozen.
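
For example (just a sketch, the "alloc_frozen_pages_mpol" name is made 
up): move the body of alloc_pages_mpol() into a frozen variant and keep 
the old name as a thin refcounted wrapper, so the name always tells you 
what you get back:

	/* Sketch only: alloc_frozen_pages_mpol() would be today's
	 * alloc_pages_mpol() body, returning a frozen page. */
	static struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
			struct mempolicy *pol, pgoff_t ilx, int nid)
	{
		struct page *page = alloc_frozen_pages_mpol(gfp, order, pol,
				ilx, nid);

		if (page)
			set_page_refcounted(page);
		return page;
	}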

The same concern applies to other internal functions, like 
"__alloc_pages_cpuset_fallback".


I'll note that that's one of the reasons I thought of having GFP_FROZEN 
instead (the function name doesn't really matter). The only ugly thing 
with GFP_FROZEN is that we would still need separate free*() functions. 
Well, or we could detect during free_*() that the refcount is 0 and 
assume that it was a frozen allocation ...
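
Roughly what I had in mind, somewhere at the end of the allocation path 
(entirely hypothetical, there is no __GFP_FROZEN flag today):

	/* Hypothetical __GFP_FROZEN: hand out a reference only when the
	 * caller did not ask for a frozen page. */
	page = __alloc_frozen_pages_noprof(gfp, order, nid, nodemask);
	if (page && !(gfp & __GFP_FROZEN))
		set_page_refcounted(page);

	/* The free side would still have to know: either keep a separate
	 * free_frozen_pages(), or let the normal free path treat
	 * refcount == 0 as "this was a frozen allocation". */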

Anyhow, if we could make this more consistent, that would help.

-- 
Cheers,

David / dhildenb



