* [PATCH 1/4] mm/mempolicy: Use folio_alloc_mpol_noprof() in alloc_pages_noprof()
@ 2024-08-05 16:31 Aruna Ramakrishna
From: Aruna Ramakrishna @ 2024-08-05 16:31 UTC (permalink / raw)
To: linux-mm; +Cc: willy, aruna.ramakrishna
Convert alloc_pages_noprof() to use folio_alloc_mpol_noprof() so that
alloc_pages_mpol(_noprof)() can be removed in a future commit.
Signed-off-by: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
---
mm/mempolicy.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index b3b5f376471f..2d367ef15d0f 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2332,6 +2332,7 @@ EXPORT_SYMBOL(vma_alloc_folio_noprof);
struct page *alloc_pages_noprof(gfp_t gfp, unsigned int order)
{
struct mempolicy *pol = &default_policy;
+ struct folio *folio;
/*
* No reference counting needed for current->mempolicy
@@ -2340,8 +2341,10 @@ struct page *alloc_pages_noprof(gfp_t gfp, unsigned int order)
if (!in_interrupt() && !(gfp & __GFP_THISNODE))
pol = get_task_policy(current);
- return alloc_pages_mpol_noprof(gfp, order, pol, NO_INTERLEAVE_INDEX,
- numa_node_id());
+ folio = folio_alloc_mpol_noprof(gfp, order, pol, NO_INTERLEAVE_INDEX,
+ numa_node_id());
+
+ return &folio->page;
}
EXPORT_SYMBOL(alloc_pages_noprof);
base-commit: 2b820b576dfc4aa9b65f18b68f468cb5b38ece84
--
2.43.5
* [PATCH 2/4] mm/mempolicy: Make alloc_pages_mpol_noprof() static
From: Aruna Ramakrishna @ 2024-08-05 16:31 UTC (permalink / raw)
To: linux-mm; +Cc: willy, aruna.ramakrishna
As a first step to removing/replacing alloc_pages_mpol()
and alloc_pages_mpol_noprof() with their folio equivalents,
make alloc_pages_mpol_noprof() static.
Signed-off-by: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
---
include/linux/gfp.h | 7 -------
mm/mempolicy.c | 2 +-
2 files changed, 1 insertion(+), 8 deletions(-)
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index f53f76e0b17e..f5ce91ccc954 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -300,8 +300,6 @@ static inline struct page *alloc_pages_node_noprof(int nid, gfp_t gfp_mask,
#ifdef CONFIG_NUMA
struct page *alloc_pages_noprof(gfp_t gfp, unsigned int order);
-struct page *alloc_pages_mpol_noprof(gfp_t gfp, unsigned int order,
- struct mempolicy *mpol, pgoff_t ilx, int nid);
struct folio *folio_alloc_noprof(gfp_t gfp, unsigned int order);
struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int order,
struct mempolicy *mpol, pgoff_t ilx, int nid);
@@ -312,11 +310,6 @@ static inline struct page *alloc_pages_noprof(gfp_t gfp_mask, unsigned int order
{
return alloc_pages_node_noprof(numa_node_id(), gfp_mask, order);
}
-static inline struct page *alloc_pages_mpol_noprof(gfp_t gfp, unsigned int order,
- struct mempolicy *mpol, pgoff_t ilx, int nid)
-{
- return alloc_pages_noprof(gfp, order);
-}
static inline struct folio *folio_alloc_noprof(gfp_t gfp, unsigned int order)
{
return __folio_alloc_node(gfp, order, numa_node_id());
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 2d367ef15d0f..6132a230a3b9 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2216,7 +2216,7 @@ static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order,
*
* Return: The page on success or NULL if allocation fails.
*/
-struct page *alloc_pages_mpol_noprof(gfp_t gfp, unsigned int order,
+static struct page *alloc_pages_mpol_noprof(gfp_t gfp, unsigned int order,
struct mempolicy *pol, pgoff_t ilx, int nid)
{
nodemask_t *nodemask;
--
2.43.5
* [PATCH 3/4] mm/mempolicy: Remove alloc_pages_mpol_noprof()
From: Aruna Ramakrishna @ 2024-08-05 16:31 UTC (permalink / raw)
To: linux-mm; +Cc: willy, aruna.ramakrishna
There are now no callers of either alloc_pages_mpol() or
alloc_pages_mpol_noprof(). Remove both functions, and fully
convert the body of folio_alloc_mpol_noprof() to use folios.
Signed-off-by: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
---
include/linux/gfp.h | 1 -
mm/mempolicy.c | 42 ++++++++++++++++++++----------------------
2 files changed, 20 insertions(+), 23 deletions(-)
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index f5ce91ccc954..58f23f15a71a 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -324,7 +324,6 @@ static inline struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int orde
#endif
#define alloc_pages(...) alloc_hooks(alloc_pages_noprof(__VA_ARGS__))
-#define alloc_pages_mpol(...) alloc_hooks(alloc_pages_mpol_noprof(__VA_ARGS__))
#define folio_alloc(...) alloc_hooks(folio_alloc_noprof(__VA_ARGS__))
#define folio_alloc_mpol(...) alloc_hooks(folio_alloc_mpol_noprof(__VA_ARGS__))
#define vma_alloc_folio(...) alloc_hooks(vma_alloc_folio_noprof(__VA_ARGS__))
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 6132a230a3b9..9be32c3bfff2 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2207,25 +2207,28 @@ static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order,
}
/**
- * alloc_pages_mpol - Allocate pages according to NUMA mempolicy.
+ * folio_alloc_mpol_noprof - Allocate pages according to NUMA mempolicy.
* @gfp: GFP flags.
- * @order: Order of the page allocation.
+ * @order: Order of the folio allocation.
* @pol: Pointer to the NUMA mempolicy.
* @ilx: Index for interleave mempolicy (also distinguishes alloc_pages()).
* @nid: Preferred node (usually numa_node_id() but @mpol may override it).
*
- * Return: The page on success or NULL if allocation fails.
+ * Return: The folio on success or NULL if allocation fails.
*/
-static struct page *alloc_pages_mpol_noprof(gfp_t gfp, unsigned int order,
+struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int order,
struct mempolicy *pol, pgoff_t ilx, int nid)
{
nodemask_t *nodemask;
- struct page *page;
+ struct folio *folio;
+ gfp |= __GFP_COMP;
nodemask = policy_nodemask(gfp, pol, ilx, &nid);
if (pol->mode == MPOL_PREFERRED_MANY)
- return alloc_pages_preferred_many(gfp, order, nid, nodemask);
+ return page_rmappable_folio(
+ alloc_pages_preferred_many(gfp, order,
+ nid, nodemask));
if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
/* filter "hugepage" allocation, unless from alloc_pages() */
@@ -2247,10 +2250,12 @@ static struct page *alloc_pages_mpol_noprof(gfp_t gfp, unsigned int order,
* First, try to allocate THP only on local node, but
* don't reclaim unnecessarily, just compact.
*/
- page = __alloc_pages_node_noprof(nid,
- gfp | __GFP_THISNODE | __GFP_NORETRY, order);
- if (page || !(gfp & __GFP_DIRECT_RECLAIM))
- return page;
+ folio = __folio_alloc_node_noprof(
+ gfp | __GFP_THISNODE | __GFP_NORETRY,
+ order, nid);
+
+ if (folio || !(gfp & __GFP_DIRECT_RECLAIM))
+ return folio;
/*
* If hugepage allocations are configured to always
* synchronous compact or the vma has been madvised
@@ -2260,26 +2265,19 @@ static struct page *alloc_pages_mpol_noprof(gfp_t gfp, unsigned int order,
}
}
- page = __alloc_pages_noprof(gfp, order, nid, nodemask);
+ folio = __folio_alloc_noprof(gfp, order, nid, nodemask);
- if (unlikely(pol->mode == MPOL_INTERLEAVE) && page) {
+ if (unlikely(pol->mode == MPOL_INTERLEAVE) && folio) {
/* skip NUMA_INTERLEAVE_HIT update if numa stats is disabled */
if (static_branch_likely(&vm_numa_stat_key) &&
- page_to_nid(page) == nid) {
+ folio_nid(folio) == nid) {
preempt_disable();
- __count_numa_event(page_zone(page), NUMA_INTERLEAVE_HIT);
+ __count_numa_event(folio_zone(folio), NUMA_INTERLEAVE_HIT);
preempt_enable();
}
}
- return page;
-}
-
-struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int order,
- struct mempolicy *pol, pgoff_t ilx, int nid)
-{
- return page_rmappable_folio(alloc_pages_mpol_noprof(gfp | __GFP_COMP,
- order, pol, ilx, nid));
+ return folio;
}
/**
--
2.43.5
* [PATCH 4/4] mm/mempolicy: Convert alloc_pages_preferred_many() to return a folio
From: Aruna Ramakrishna @ 2024-08-05 16:31 UTC (permalink / raw)
To: linux-mm; +Cc: willy, aruna.ramakrishna
There is only one caller of alloc_pages_preferred_many(), and it
already expects a folio. Rename the function to
folio_alloc_preferred_many() and convert its body to work with
folios too.
Signed-off-by: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
---
mm/mempolicy.c | 18 ++++++++----------
1 file changed, 8 insertions(+), 10 deletions(-)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 9be32c3bfff2..33074ffd59fe 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2185,10 +2185,10 @@ bool mempolicy_in_oom_domain(struct task_struct *tsk,
return ret;
}
-static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order,
- int nid, nodemask_t *nodemask)
+static struct folio *folio_alloc_preferred_many(gfp_t gfp, unsigned int order,
+ int nid, nodemask_t *nodemask)
{
- struct page *page;
+ struct folio *folio;
gfp_t preferred_gfp;
/*
@@ -2199,11 +2199,11 @@ static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order,
*/
preferred_gfp = gfp | __GFP_NOWARN;
preferred_gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
- page = __alloc_pages_noprof(preferred_gfp, order, nid, nodemask);
- if (!page)
- page = __alloc_pages_noprof(gfp, order, nid, NULL);
+ folio = __folio_alloc_noprof(preferred_gfp, order, nid, nodemask);
+ if (!folio)
+ folio = __folio_alloc_noprof(gfp, order, nid, NULL);
- return page;
+ return folio;
}
/**
@@ -2226,9 +2226,7 @@ struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int order,
nodemask = policy_nodemask(gfp, pol, ilx, &nid);
if (pol->mode == MPOL_PREFERRED_MANY)
- return page_rmappable_folio(
- alloc_pages_preferred_many(gfp, order,
- nid, nodemask));
+ return folio_alloc_preferred_many(gfp, order, nid, nodemask);
if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
/* filter "hugepage" allocation, unless from alloc_pages() */
--
2.43.5
* Re: [PATCH 1/4] mm/mempolicy: Use folio_alloc_mpol_noprof() in alloc_pages_noprof()
From: Kefeng Wang @ 2024-08-06 8:05 UTC (permalink / raw)
To: Aruna Ramakrishna, linux-mm; +Cc: willy
On 2024/8/6 0:31, Aruna Ramakrishna wrote:
> Convert alloc_pages_noprof() to use folio_alloc_mpol_noprof() so that
> alloc_pages_mpol(_noprof)() can be removed in a future commit.
>
> Signed-off-by: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
> ---
> mm/mempolicy.c | 7 +++++--
> 1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index b3b5f376471f..2d367ef15d0f 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -2332,6 +2332,7 @@ EXPORT_SYMBOL(vma_alloc_folio_noprof);
> struct page *alloc_pages_noprof(gfp_t gfp, unsigned int order)
> {
> struct mempolicy *pol = &default_policy;
> + struct folio *folio;
>
> /*
> * No reference counting needed for current->mempolicy
> @@ -2340,8 +2341,10 @@ struct page *alloc_pages_noprof(gfp_t gfp, unsigned int order)
> if (!in_interrupt() && !(gfp & __GFP_THISNODE))
> pol = get_task_policy(current);
>
> - return alloc_pages_mpol_noprof(gfp, order, pol, NO_INTERLEAVE_INDEX,
> - numa_node_id());
> + folio = folio_alloc_mpol_noprof(gfp, order, pol, NO_INTERLEAVE_INDEX,
> + numa_node_id());
folio_alloc_mpol_noprof() sets __GFP_COMP and marks large folios as
large_rmappable; I'm not sure whether this causes problems for
alloc_pages() callers.
> +
> + return &folio->page;
> }
> EXPORT_SYMBOL(alloc_pages_noprof);
>
>
> base-commit: 2b820b576dfc4aa9b65f18b68f468cb5b38ece84
* Re: [PATCH 1/4] mm/mempolicy: Use folio_alloc_mpol_noprof() in alloc_pages_noprof()
From: Aruna Ramakrishna @ 2024-08-20 17:58 UTC (permalink / raw)
To: Kefeng Wang; +Cc: linux-mm, Matthew Wilcox
> On Aug 6, 2024, at 1:05 AM, Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
>
>
>
> On 2024/8/6 0:31, Aruna Ramakrishna wrote:
>> Convert alloc_pages_noprof() to use folio_alloc_mpol_noprof() so that
>> alloc_pages_mpol(_noprof)() can be removed in a future commit.
>> Signed-off-by: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
>> ---
>> mm/mempolicy.c | 7 +++++--
>> 1 file changed, 5 insertions(+), 2 deletions(-)
>> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
>> index b3b5f376471f..2d367ef15d0f 100644
>> --- a/mm/mempolicy.c
>> +++ b/mm/mempolicy.c
>> @@ -2332,6 +2332,7 @@ EXPORT_SYMBOL(vma_alloc_folio_noprof);
>> struct page *alloc_pages_noprof(gfp_t gfp, unsigned int order)
>> {
>> struct mempolicy *pol = &default_policy;
>> + struct folio *folio;
>> /*
>> * No reference counting needed for current->mempolicy
>> @@ -2340,8 +2341,10 @@ struct page *alloc_pages_noprof(gfp_t gfp, unsigned int order)
>> if (!in_interrupt() && !(gfp & __GFP_THISNODE))
>> pol = get_task_policy(current);
>> - return alloc_pages_mpol_noprof(gfp, order, pol, NO_INTERLEAVE_INDEX,
>> - numa_node_id());
>> + folio = folio_alloc_mpol_noprof(gfp, order, pol, NO_INTERLEAVE_INDEX,
>> + numa_node_id());
>
> folio_alloc_mpol_noprof() sets __GFP_COMP and marks large folios as
> large_rmappable; I'm not sure whether this causes problems for
> alloc_pages() callers.
>
Hi Kefeng,
You’re right, this will force all callers of alloc_pages() to use __GFP_COMP,
which seems risky to do at this point. I was trying to find a way to separate
the compound-page users from the non-compound ones, but it makes things too
complicated. I do not think it is possible to convert alloc_pages_noprof() to
folios without addressing that first.
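For illustration, here is a hypothetical caller pattern (not an actual
in-tree user) that forcing __GFP_COMP would break: a non-compound
high-order allocation may be split and its subpages freed individually,
which is not valid for a compound (folio) allocation.

```c
/* Hypothetical alloc_pages() user, for illustration only. This is
 * legal for a non-compound high-order page, but split_page() rejects
 * compound pages, so it would break if __GFP_COMP were forced on. */
struct page *page = alloc_pages(GFP_KERNEL, 2);	/* order-2, 4 pages */

split_page(page, 2);		/* only valid for non-compound pages */
__free_page(page + 1);		/* free one subpage independently */
```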
Thank you for catching that.
Thanks,
Aruna