From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
linux-mm@kvack.org, David Hildenbrand <david@redhat.com>
Subject: [PATCH v3 02/15] mm: Make alloc_pages_mpol() static
Date: Mon, 25 Nov 2024 21:01:34 +0000
Message-ID: <20241125210149.2976098-3-willy@infradead.org>
In-Reply-To: <20241125210149.2976098-1-willy@infradead.org>
All callers outside mempolicy.c now use folio_alloc_mpol() thanks to
Kefeng's cleanups, so alloc_pages_mpol() no longer needs to be a
visible symbol; make it static to mempolicy.c.
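
For illustration only (not part of this patch), a hypothetical external
caller that previously did a mempolicy-aware page allocation would switch
to the folio API along these lines; gfp, order, pol, ilx and nid are
placeholder variables:

	struct folio *folio;

	/* was: page = alloc_pages_mpol(gfp, order, pol, ilx, nid); */
	folio = folio_alloc_mpol(gfp, order, pol, ilx, nid);
	if (!folio)
		return -ENOMEM;
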
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
include/linux/gfp.h | 8 --------
mm/mempolicy.c      | 8 ++++----
2 files changed, 4 insertions(+), 12 deletions(-)
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index b0fe9f62d15b..c96d5d7f7b89 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -300,8 +300,6 @@ static inline struct page *alloc_pages_node_noprof(int nid, gfp_t gfp_mask,
#ifdef CONFIG_NUMA
struct page *alloc_pages_noprof(gfp_t gfp, unsigned int order);
-struct page *alloc_pages_mpol_noprof(gfp_t gfp, unsigned int order,
- struct mempolicy *mpol, pgoff_t ilx, int nid);
struct folio *folio_alloc_noprof(gfp_t gfp, unsigned int order);
struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int order,
struct mempolicy *mpol, pgoff_t ilx, int nid);
@@ -312,11 +310,6 @@ static inline struct page *alloc_pages_noprof(gfp_t gfp_mask, unsigned int order
{
return alloc_pages_node_noprof(numa_node_id(), gfp_mask, order);
}
-static inline struct page *alloc_pages_mpol_noprof(gfp_t gfp, unsigned int order,
- struct mempolicy *mpol, pgoff_t ilx, int nid)
-{
- return alloc_pages_noprof(gfp, order);
-}
static inline struct folio *folio_alloc_noprof(gfp_t gfp, unsigned int order)
{
return __folio_alloc_node_noprof(gfp, order, numa_node_id());
@@ -331,7 +324,6 @@ static inline struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int orde
#endif
#define alloc_pages(...) alloc_hooks(alloc_pages_noprof(__VA_ARGS__))
-#define alloc_pages_mpol(...) alloc_hooks(alloc_pages_mpol_noprof(__VA_ARGS__))
#define folio_alloc(...) alloc_hooks(folio_alloc_noprof(__VA_ARGS__))
#define folio_alloc_mpol(...) alloc_hooks(folio_alloc_mpol_noprof(__VA_ARGS__))
#define vma_alloc_folio(...) alloc_hooks(vma_alloc_folio_noprof(__VA_ARGS__))
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index bb37cd1a51d8..cda5f56085e6 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2218,7 +2218,7 @@ static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order,
*
* Return: The page on success or NULL if allocation fails.
*/
-struct page *alloc_pages_mpol_noprof(gfp_t gfp, unsigned int order,
+static struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
struct mempolicy *pol, pgoff_t ilx, int nid)
{
nodemask_t *nodemask;
@@ -2280,7 +2280,7 @@ struct page *alloc_pages_mpol_noprof(gfp_t gfp, unsigned int order,
struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int order,
struct mempolicy *pol, pgoff_t ilx, int nid)
{
- return page_rmappable_folio(alloc_pages_mpol_noprof(gfp | __GFP_COMP,
+ return page_rmappable_folio(alloc_pages_mpol(gfp | __GFP_COMP,
order, pol, ilx, nid));
}
@@ -2295,7 +2295,7 @@ struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int order,
* NUMA policy. The caller must hold the mmap_lock of the mm_struct of the
* VMA to prevent it from going away. Should be used for all allocations
* for folios that will be mapped into user space, excepting hugetlbfs, and
- * excepting where direct use of alloc_pages_mpol() is more appropriate.
+ * excepting where direct use of folio_alloc_mpol() is more appropriate.
*
* Return: The folio on success or NULL if allocation fails.
*/
@@ -2341,7 +2341,7 @@ struct page *alloc_pages_noprof(gfp_t gfp, unsigned int order)
if (!in_interrupt() && !(gfp & __GFP_THISNODE))
pol = get_task_policy(current);
- return alloc_pages_mpol_noprof(gfp, order, pol, NO_INTERLEAVE_INDEX,
+ return alloc_pages_mpol(gfp, order, pol, NO_INTERLEAVE_INDEX,
numa_node_id());
}
EXPORT_SYMBOL(alloc_pages_noprof);
--
2.45.2