* [PATCH v5 1/3] mm/page_alloc/vmstat: Simplify refresh_cpu_vm_stats change detection
2025-10-14 14:50 [PATCH v5 0/3] mm/page_alloc: Batch callers of free_pcppages_bulk Joshua Hahn
@ 2025-10-14 14:50 ` Joshua Hahn
2025-10-14 14:50 ` [PATCH v5 2/3] mm/page_alloc: Batch page freeing in decay_pcp_high Joshua Hahn
` (2 subsequent siblings)
3 siblings, 0 replies; 8+ messages in thread
From: Joshua Hahn @ 2025-10-14 14:50 UTC (permalink / raw)
To: Andrew Morton
Cc: Chris Mason, Kiryl Shutsemau, Liam R. Howlett, Brendan Jackman,
David Hildenbrand, Johannes Weiner, Lorenzo Stoakes,
Michal Hocko, Mike Rapoport, Suren Baghdasaryan, Vlastimil Babka,
Zi Yan, linux-kernel, linux-mm, kernel-team
Currently, refresh_cpu_vm_stats returns an int, indicating how many
changes were made during its updates. Using this information, callers
like vmstat_update can heuristically determine if more work will be done
in the future.
However, all of refresh_cpu_vm_stats's callers either (a) ignore the
result, only caring about performing the updates, or (b) only care about
whether changes were made, but not *how many* changes were made.
Simplify the code by returning a bool instead, indicating whether any
updates were made.
In addition, simplify fold_diff and decay_pcp_high to return a bool
for the same reason.
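For context only (not part of this patch): a sketch of how a caller that
only needs the boolean answer consumes it. It roughly mirrors the shape of
vmstat_update in mm/vmstat.c; details are abbreviated.

static void vmstat_update_sketch(struct work_struct *w)
{
        if (refresh_cpu_vm_stats(true)) {
                /*
                 * Something changed, so more updates are likely soon;
                 * keep the periodic vmstat worker alive on this CPU.
                 */
                queue_delayed_work_on(smp_processor_id(), mm_percpu_wq,
                                      this_cpu_ptr(&vmstat_work),
                                      round_jiffies_relative(sysctl_stat_interval));
        }
}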
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: SeongJae Park <sj@kernel.org>
Signed-off-by: Joshua Hahn <joshua.hahnjy@gmail.com>
---
include/linux/gfp.h | 2 +-
mm/page_alloc.c | 8 ++++----
mm/vmstat.c | 28 +++++++++++++++-------------
3 files changed, 20 insertions(+), 18 deletions(-)
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 0ceb4e09306c..f46b066c7661 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -386,7 +386,7 @@ extern void free_pages(unsigned long addr, unsigned int order);
#define free_page(addr) free_pages((addr), 0)
void page_alloc_init_cpuhp(void);
-int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp);
+bool decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp);
void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp);
void drain_all_pages(struct zone *zone);
void drain_local_pages(struct zone *zone);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 600d9e981c23..bbc3282fdffc 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2557,10 +2557,10 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
* Called from the vmstat counter updater to decay the PCP high.
* Return whether there are addition works to do.
*/
-int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
+bool decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
{
int high_min, to_drain, batch;
- int todo = 0;
+ bool todo = false;
high_min = READ_ONCE(pcp->high_min);
batch = READ_ONCE(pcp->batch);
@@ -2573,7 +2573,7 @@ int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
pcp->high = max3(pcp->count - (batch << CONFIG_PCP_BATCH_SCALE_MAX),
pcp->high - (pcp->high >> 3), high_min);
if (pcp->high > high_min)
- todo++;
+ todo = true;
}
to_drain = pcp->count - pcp->high;
@@ -2581,7 +2581,7 @@ int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
spin_lock(&pcp->lock);
free_pcppages_bulk(zone, to_drain, pcp, 0);
spin_unlock(&pcp->lock);
- todo++;
+ todo = true;
}
return todo;
diff --git a/mm/vmstat.c b/mm/vmstat.c
index bb09c032eecf..98855f31294d 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -771,25 +771,25 @@ EXPORT_SYMBOL(dec_node_page_state);
/*
* Fold a differential into the global counters.
- * Returns the number of counters updated.
+ * Returns whether counters were updated.
*/
-static int fold_diff(int *zone_diff, int *node_diff)
+static bool fold_diff(int *zone_diff, int *node_diff)
{
int i;
- int changes = 0;
+ bool changed = false;
for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++)
if (zone_diff[i]) {
atomic_long_add(zone_diff[i], &vm_zone_stat[i]);
- changes++;
+ changed = true;
}
for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
if (node_diff[i]) {
atomic_long_add(node_diff[i], &vm_node_stat[i]);
- changes++;
+ changed = true;
}
- return changes;
+ return changed;
}
/*
@@ -806,16 +806,16 @@ static int fold_diff(int *zone_diff, int *node_diff)
* with the global counters. These could cause remote node cache line
* bouncing and will have to be only done when necessary.
*
- * The function returns the number of global counters updated.
+ * The function returns whether global counters were updated.
*/
-static int refresh_cpu_vm_stats(bool do_pagesets)
+static bool refresh_cpu_vm_stats(bool do_pagesets)
{
struct pglist_data *pgdat;
struct zone *zone;
int i;
int global_zone_diff[NR_VM_ZONE_STAT_ITEMS] = { 0, };
int global_node_diff[NR_VM_NODE_STAT_ITEMS] = { 0, };
- int changes = 0;
+ bool changed = false;
for_each_populated_zone(zone) {
struct per_cpu_zonestat __percpu *pzstats = zone->per_cpu_zonestats;
@@ -839,7 +839,8 @@ static int refresh_cpu_vm_stats(bool do_pagesets)
if (do_pagesets) {
cond_resched();
- changes += decay_pcp_high(zone, this_cpu_ptr(pcp));
+ if (decay_pcp_high(zone, this_cpu_ptr(pcp)))
+ changed = true;
#ifdef CONFIG_NUMA
/*
* Deal with draining the remote pageset of this
@@ -861,13 +862,13 @@ static int refresh_cpu_vm_stats(bool do_pagesets)
}
if (__this_cpu_dec_return(pcp->expire)) {
- changes++;
+ changed = true;
continue;
}
if (__this_cpu_read(pcp->count)) {
drain_zone_pages(zone, this_cpu_ptr(pcp));
- changes++;
+ changed = true;
}
#endif
}
@@ -887,8 +888,9 @@ static int refresh_cpu_vm_stats(bool do_pagesets)
}
}
- changes += fold_diff(global_zone_diff, global_node_diff);
- return changes;
+ if (fold_diff(global_zone_diff, global_node_diff))
+ changed = true;
+ return changed;
}
/*
--
2.47.3
* [PATCH v5 2/3] mm/page_alloc: Batch page freeing in decay_pcp_high
2025-10-14 14:50 [PATCH v5 0/3] mm/page_alloc: Batch callers of free_pcppages_bulk Joshua Hahn
2025-10-14 14:50 ` [PATCH v5 1/3] mm/page_alloc/vmstat: Simplify refresh_cpu_vm_stats change detection Joshua Hahn
@ 2025-10-14 14:50 ` Joshua Hahn
2025-10-14 14:50 ` [PATCH v5 3/3] mm/page_alloc: Batch page freeing in free_frozen_page_commit Joshua Hahn
2025-10-14 17:54 ` [PATCH v5 0/3] mm/page_alloc: Batch callers of free_pcppages_bulk Vlastimil Babka
3 siblings, 0 replies; 8+ messages in thread
From: Joshua Hahn @ 2025-10-14 14:50 UTC (permalink / raw)
To: Andrew Morton
Cc: Chris Mason, Kiryl Shutsemau, Brendan Jackman, Johannes Weiner,
Michal Hocko, Suren Baghdasaryan, Vlastimil Babka, Zi Yan,
linux-kernel, linux-mm, kernel-team
It is possible for pcp->count - pcp->high to exceed pcp->batch by a lot.
When this happens, we should perform batching to ensure that
free_pcppages_bulk isn't called with too many pages to free at once,
starving out other threads that need the pcp or zone lock.
Since we are still only freeing the difference between the initial
pcp->count and pcp->high values, there should be no change to how many
pages are freed.
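For illustration only (a standalone userspace toy, not kernel code; the
to_drain and batch values below are made up): splitting the drain into
chunks of min(to_drain, batch) still frees exactly the pages that a single
oversized call would have freed.

#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

int main(void)
{
        int to_drain = 2500;    /* stand-in for pcp->count - pcp->high */
        int batch = 63;         /* stand-in for pcp->batch */
        int freed = 0;

        while (to_drain > 0) {
                int chunk = MIN(to_drain, batch);

                /* stand-in for free_pcppages_bulk(zone, chunk, pcp, 0) */
                freed += chunk;
                to_drain -= chunk;
        }
        printf("freed %d pages, in chunks of at most %d\n", freed, batch);
        return 0;
}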
Suggested-by: Chris Mason <clm@fb.com>
Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Co-developed-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Joshua Hahn <joshua.hahnjy@gmail.com>
---
mm/page_alloc.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index bbc3282fdffc..8ecd48be8bdd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2559,7 +2559,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
*/
bool decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
{
- int high_min, to_drain, batch;
+ int high_min, to_drain, to_drain_batched, batch;
bool todo = false;
high_min = READ_ONCE(pcp->high_min);
@@ -2577,11 +2577,14 @@ bool decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
}
to_drain = pcp->count - pcp->high;
- if (to_drain > 0) {
+ while (to_drain > 0) {
+ to_drain_batched = min(to_drain, batch);
spin_lock(&pcp->lock);
- free_pcppages_bulk(zone, to_drain, pcp, 0);
+ free_pcppages_bulk(zone, to_drain_batched, pcp, 0);
spin_unlock(&pcp->lock);
todo = true;
+
+ to_drain -= to_drain_batched;
}
return todo;
--
2.47.3
* [PATCH v5 3/3] mm/page_alloc: Batch page freeing in free_frozen_page_commit
2025-10-14 14:50 [PATCH v5 0/3] mm/page_alloc: Batch callers of free_pcppages_bulk Joshua Hahn
2025-10-14 14:50 ` [PATCH v5 1/3] mm/page_alloc/vmstat: Simplify refresh_cpu_vm_stats change detection Joshua Hahn
2025-10-14 14:50 ` [PATCH v5 2/3] mm/page_alloc: Batch page freeing in decay_pcp_high Joshua Hahn
@ 2025-10-14 14:50 ` Joshua Hahn
2025-10-14 19:28 ` Joshua Hahn
2025-10-14 17:54 ` [PATCH v5 0/3] mm/page_alloc: Batch callers of free_pcppages_bulk Vlastimil Babka
3 siblings, 1 reply; 8+ messages in thread
From: Joshua Hahn @ 2025-10-14 14:50 UTC (permalink / raw)
To: Andrew Morton
Cc: Chris Mason, Kiryl Shutsemau, Brendan Jackman, Johannes Weiner,
Michal Hocko, Suren Baghdasaryan, Vlastimil Babka, Zi Yan,
linux-kernel, linux-mm, kernel-team
Before returning, free_frozen_page_commit calls free_pcppages_bulk using
nr_pcp_free to determine how many pages can appropriately be freed,
based on the tunable parameters stored in pcp. While this number is an
accurate representation of how many pages should be freed in total, it
is not an appropriate number of pages to free at once using
free_pcppages_bulk, since we have seen the value consistently go above
2000 in the Meta fleet on larger machines.
As such, batch the calls to free_pcppages_bulk, using pcp->batch as the
batch size. To ensure that other processes are not starved of the zone
lock, release both the zone lock and the pcp lock between batches to yield
to other threads.
Note that because free_frozen_page_commit now drops and re-takes the pcp
spinlock inside the function (and the trylock can fail), it may now return
with the pcp lock already released. To handle this, return true if the pcp
is still locked on exit and false otherwise.
In addition, since free_frozen_page_commit must now know which UP flags
were saved when the pcp lock was taken, and because it must be able to
report the new UP flags back to its callers, add a new unsigned long
*UP_flags parameter to keep track of this.
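For illustration only: a rough userspace analogue of the
batching-plus-lock-dropping idea. A pthread mutex stands in for the pcp
lock, the names and values are made up, it assumes to_free <= count, and
the kernel version additionally has to re-take the lock with a trylock and
bail out on CPU migration.

#include <pthread.h>
#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

static pthread_mutex_t pcp_lock = PTHREAD_MUTEX_INITIALIZER;
static int count = 5000;                /* stand-in for pcp->count */

/*
 * Free 'to_free' entries in chunks of at most 'batch', releasing the lock
 * between chunks so that other threads contending on pcp_lock can make
 * progress instead of waiting for the whole drain to finish.
 */
static void drain_in_batches(int to_free, int batch)
{
        while (to_free > 0 && count > 0) {
                int chunk = MIN(to_free, batch);

                pthread_mutex_lock(&pcp_lock);
                count -= chunk;         /* stand-in for free_pcppages_bulk() */
                pthread_mutex_unlock(&pcp_lock);

                to_free -= chunk;
        }
}

int main(void)
{
        drain_in_batches(3000, 63);
        printf("count after drain: %d\n", count);
        return 0;
}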
The following are a few synthetic benchmarks, run on three machines. The
first is a large machine with 754GiB of memory and 316 processors. The
second is a smaller machine with 251GiB of memory and 176 processors. The
third and final is the smallest of the three, with 62GiB of memory and 36
processors.
On all machines, I kick off a kernel build with -j$(nproc).
Negative delta is better (faster compilation)
Large machine (754GiB memory, 316 processors)
make -j$(nproc)
+------------+---------------+-----------+
| Metric (s) | Variation (%) | Delta(%) |
+------------+---------------+-----------+
| real | 0.8070 | - 1.4865 |
| user | 0.2823 | + 0.4081 |
| sys | 5.0267 | -11.8737 |
+------------+---------------+-----------+
Medium machine (251GiB memory, 176 processors)
make -j$(nproc)
+------------+---------------+----------+
| Metric (s) | Variation (%) | Delta(%) |
+------------+---------------+----------+
| real | 0.2806 | +0.0351 |
| user | 0.0994 | +0.3170 |
| sys | 0.6229 | -0.6277 |
+------------+---------------+----------+
Small machine (62GiB memory, 36 processors)
make -j$(nproc)
+------------+---------------+----------+
| Metric (s) | Variation (%) | Delta(%) |
+------------+---------------+----------+
| real | 0.1503 | -2.6585 |
| user | 0.0431 | -2.2984 |
| sys | 0.1870 | -3.2013 |
+------------+---------------+----------+
Here, variation is the coefficient of variation, i.e. standard deviation / mean.
Suggested-by: Chris Mason <clm@fb.com>
Co-developed-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Joshua Hahn <joshua.hahnjy@gmail.com>
---
mm/page_alloc.c | 65 ++++++++++++++++++++++++++++++++++++++++++-------
1 file changed, 56 insertions(+), 9 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8ecd48be8bdd..6d544521e49c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2818,12 +2818,22 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
return high;
}
-static void free_frozen_page_commit(struct zone *zone,
+/*
+ * Tune pcp alloc factor and adjust count & free_count. Free pages to bring the
+ * pcp's watermarks below high.
+ *
+ * May return a freed pcp, if during page freeing the pcp spinlock cannot be
+ * reacquired. Return true if pcp is locked, false otherwise.
+ */
+static bool free_frozen_page_commit(struct zone *zone,
struct per_cpu_pages *pcp, struct page *page, int migratetype,
- unsigned int order, fpi_t fpi_flags)
+ unsigned int order, fpi_t fpi_flags, unsigned long *UP_flags)
{
int high, batch;
+ int to_free, to_free_batched;
int pindex;
+ int cpu = smp_processor_id();
+ int ret = true;
bool free_high = false;
/*
@@ -2861,15 +2871,46 @@ static void free_frozen_page_commit(struct zone *zone,
* Do not attempt to take a zone lock. Let pcp->count get
* over high mark temporarily.
*/
- return;
+ return true;
}
high = nr_pcp_high(pcp, zone, batch, free_high);
if (pcp->count < high)
- return;
+ return true;
+
+ to_free = nr_pcp_free(pcp, batch, high, free_high);
+ while (to_free > 0 && pcp->count > 0) {
+ to_free_batched = min(to_free, batch);
+ free_pcppages_bulk(zone, to_free_batched, pcp, pindex);
+ to_free -= to_free_batched;
+
+ if (to_free <= 0 || pcp->count <= 0)
+ break;
+
+ pcp_spin_unlock(pcp);
+ pcp_trylock_finish(*UP_flags);
+
+ pcp_trylock_prepare(*UP_flags);
+ pcp = pcp_spin_trylock(zone->per_cpu_pageset);
+ if (!pcp) {
+ pcp_trylock_finish(*UP_flags);
+ ret = false;
+ break;
+ }
+
+ /*
+ * Check if this thread has been migrated to a different CPU.
+ * If that is the case, give up and indicate that the pcp is
+ * returned in an unlocked state.
+ */
+ if (smp_processor_id() != cpu) {
+ pcp_spin_unlock(pcp);
+ pcp_trylock_finish(*UP_flags);
+ ret = false;
+ break;
+ }
+ }
- free_pcppages_bulk(zone, nr_pcp_free(pcp, batch, high, free_high),
- pcp, pindex);
if (test_bit(ZONE_BELOW_HIGH, &zone->flags) &&
zone_watermark_ok(zone, 0, high_wmark_pages(zone),
ZONE_MOVABLE, 0)) {
@@ -2887,6 +2928,7 @@ static void free_frozen_page_commit(struct zone *zone,
next_memory_node(pgdat->node_id) < MAX_NUMNODES)
atomic_set(&pgdat->kswapd_failures, 0);
}
+ return ret;
}
/*
@@ -2934,7 +2976,9 @@ static void __free_frozen_pages(struct page *page, unsigned int order,
pcp_trylock_prepare(UP_flags);
pcp = pcp_spin_trylock(zone->per_cpu_pageset);
if (pcp) {
- free_frozen_page_commit(zone, pcp, page, migratetype, order, fpi_flags);
+ if (!free_frozen_page_commit(zone, pcp, page, migratetype,
+ order, fpi_flags, &UP_flags))
+ return;
pcp_spin_unlock(pcp);
} else {
free_one_page(zone, page, pfn, order, fpi_flags);
@@ -3034,8 +3078,11 @@ void free_unref_folios(struct folio_batch *folios)
migratetype = MIGRATE_MOVABLE;
trace_mm_page_free_batched(&folio->page);
- free_frozen_page_commit(zone, pcp, &folio->page, migratetype,
- order, FPI_NONE);
+ if (!free_frozen_page_commit(zone, pcp, &folio->page,
+ migratetype, order, FPI_NONE, &UP_flags)) {
+ pcp = NULL;
+ locked_zone = NULL;
+ }
}
if (pcp) {
--
2.47.3
* Re: [PATCH v5 3/3] mm/page_alloc: Batch page freeing in free_frozen_page_commit
2025-10-14 14:50 ` [PATCH v5 3/3] mm/page_alloc: Batch page freeing in free_frozen_page_commit Joshua Hahn
@ 2025-10-14 19:28 ` Joshua Hahn
2025-10-14 20:45 ` Vlastimil Babka
0 siblings, 1 reply; 8+ messages in thread
From: Joshua Hahn @ 2025-10-14 19:28 UTC (permalink / raw)
To: Joshua Hahn, Andrew Morton
Cc: Chris Mason, Kiryl Shutsemau, Brendan Jackman, Johannes Weiner,
Michal Hocko, Suren Baghdasaryan, Vlastimil Babka, Zi Yan,
linux-kernel, linux-mm, kernel-team
On Tue, 14 Oct 2025 07:50:10 -0700 Joshua Hahn <joshua.hahnjy@gmail.com> wrote:
> Before returning, free_frozen_page_commit calls free_pcppages_bulk using
> nr_pcp_free to determine how many pages can appropriately be freed,
> based on the tunable parameters stored in pcp. While this number is an
> accurate representation of how many pages should be freed in total, it
> is not an appropriate number of pages to free at once using
> free_pcppages_bulk, since we have seen the value consistently go above
> 2000 in the Meta fleet on larger machines.
>
> As such, batch the calls to free_pcppages_bulk, using pcp->batch as the
> batch size. To ensure that other processes are not starved of the zone
> lock, release both the zone lock and the pcp lock between batches to
> yield to other threads.
>
> Note that because free_frozen_page_commit now drops and re-takes the pcp
> spinlock inside the function (and the trylock can fail), it may now
> return with the pcp lock already released. To handle this, return true
> if the pcp is still locked on exit and false otherwise.
>
> In addition, since free_frozen_page_commit must now know which UP flags
> were saved when the pcp lock was taken, and because it must be able to
> report the new UP flags back to its callers, add a new unsigned long
> *UP_flags parameter to keep track of this.
[...snip...]
Hello Andrew, I hope you are doing well! I was wondering if you could help
by adding this as a fixlet for the patch I am replying to. Vlastimil kindly
pointed out that to_free and pcp->count can never go negative, so checking
for zero should be sufficient and more readable than the <= checks.
I think it is OK to leave the changelog in 0/3 unchanged, since it will not go
into the commit history and Vlastimil has already left a correction. But
please let me know if you would like me to add a correction for that as well.
Thank you as always, for your help! I hope you have a great day!
Joshua
...
Since to_free and pcp->count cannot become negative, turn the checks into
equality checks instead.
Signed-off-by: Joshua Hahn <joshua.hahnjy@gmail.com>
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6d544521e49c..fd46a982ce3c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2884,7 +2884,7 @@ static bool free_frozen_page_commit(struct zone *zone,
free_pcppages_bulk(zone, to_free_batched, pcp, pindex);
to_free -= to_free_batched;
- if (to_free <= 0 || pcp->count <= 0)
+ if (to_free == 0 || pcp->count == 0)
break;
pcp_spin_unlock(pcp);
* Re: [PATCH v5 3/3] mm/page_alloc: Batch page freeing in free_frozen_page_commit
2025-10-14 19:28 ` Joshua Hahn
@ 2025-10-14 20:45 ` Vlastimil Babka
0 siblings, 0 replies; 8+ messages in thread
From: Vlastimil Babka @ 2025-10-14 20:45 UTC (permalink / raw)
To: Joshua Hahn, Andrew Morton
Cc: Chris Mason, Kiryl Shutsemau, Brendan Jackman, Johannes Weiner,
Michal Hocko, Suren Baghdasaryan, Zi Yan, linux-kernel, linux-mm,
kernel-team
On 10/14/25 21:28, Joshua Hahn wrote:
> On Tue, 14 Oct 2025 07:50:10 -0700 Joshua Hahn <joshua.hahnjy@gmail.com> wrote:
>
>> Before returning, free_frozen_page_commit calls free_pcppages_bulk using
>> nr_pcp_free to determine how many pages can appropriately be freed,
>> based on the tunable parameters stored in pcp. While this number is an
>> accurate representation of how many pages should be freed in total, it
>> is not an appropriate number of pages to free at once using
>> free_pcppages_bulk, since we have seen the value consistently go above
>> 2000 in the Meta fleet on larger machines.
>>
>> As such, batch the calls to free_pcppages_bulk, using pcp->batch as the
>> batch size. To ensure that other processes are not starved of the zone
>> lock, release both the zone lock and the pcp lock between batches to
>> yield to other threads.
>>
>> Note that because free_frozen_page_commit now drops and re-takes the pcp
>> spinlock inside the function (and the trylock can fail), it may now
>> return with the pcp lock already released. To handle this, return true
>> if the pcp is still locked on exit and false otherwise.
>>
>> In addition, since free_frozen_page_commit must now know which UP flags
>> were saved when the pcp lock was taken, and because it must be able to
>> report the new UP flags back to its callers, add a new unsigned long
>> *UP_flags parameter to keep track of this.
>
> [...snip...]
>
> Hello Andrew, I hope you are doing well! I was wondering if you could help
> adding this as a fixlet for the patch I am writing this reply to. Vlastimil
> kindly pointed out that they should never go negative, so checking for
> 0-ness should be sufficient and more readable than the <= checks.
>
> I think it is OK to leave the changelog in 0/3 unchanged, since it will not go
> into the commit history and Vlastimil has already left a correction. But
> please let me know if you would like me to add a correction for that as well.
>
> Thank you as always, for your help! I hope you have a great day!
> Joshua
>
> ...
>
> Since to_free and pcp->count cannot become negative, make the checks into an
> equality check instead.
>
> Signed-off-by: Joshua Hahn <joshua.hahnjy@gmail.com>
For the patch with this fixup:
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Thanks!
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 6d544521e49c..fd46a982ce3c 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2884,7 +2884,7 @@ static bool free_frozen_page_commit(struct zone *zone,
> free_pcppages_bulk(zone, to_free_batched, pcp, pindex);
> to_free -= to_free_batched;
>
> - if (to_free <= 0 || pcp->count <= 0)
> + if (to_free == 0 || pcp->count == 0)
> break;
>
> pcp_spin_unlock(pcp);
* Re: [PATCH v5 0/3] mm/page_alloc: Batch callers of free_pcppages_bulk
2025-10-14 14:50 [PATCH v5 0/3] mm/page_alloc: Batch callers of free_pcppages_bulk Joshua Hahn
` (2 preceding siblings ...)
2025-10-14 14:50 ` [PATCH v5 3/3] mm/page_alloc: Batch page freeing in free_frozen_page_commit Joshua Hahn
@ 2025-10-14 17:54 ` Vlastimil Babka
2025-10-14 19:17 ` Joshua Hahn
3 siblings, 1 reply; 8+ messages in thread
From: Vlastimil Babka @ 2025-10-14 17:54 UTC (permalink / raw)
To: Joshua Hahn, Andrew Morton
Cc: Chris Mason, Kiryl Shutsemau, Liam R. Howlett, Brendan Jackman,
David Hildenbrand, Johannes Weiner, Lorenzo Stoakes,
Michal Hocko, Mike Rapoport, Suren Baghdasaryan, Zi Yan,
linux-kernel, linux-mm, kernel-team
On 10/14/25 16:50, Joshua Hahn wrote:
> Changelog
> =========
> v4 --> v5:
> - Wordsmithing
> - Patches 1/3 and 2/3 were left untouched.
> - Patch 3/3 no longer checks for the to_free == 0 case. It also now checks
> for pcp->count > 0 as the condition inside the while loop, and the early
> break checks for the opposite condition. Note that both to_free and
> pcp->count can become negative due to high-order pages that are freed, so
> we must check for (to_free <= 0 || pcp->count <= 0), instead of just
> checking for == 0.
I don't see how that's possible?
- to_free is decremented by to_free_batched = min(to_free, batch); so it
can't go negative.
- pcp->count is indeed decremented by nr_pages, but it should be exactly
zero once the pcp becomes empty. It's true that internally in
free_pcppages_bulk() the count parameter (where we pass to_free_batched)
can go negative, but that doesn't affect to_free_batched in the caller,
free_frozen_page_commit().
So testing for <= is unnecessary and only looks weird?
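A throwaway check of the invariant described above (illustrative only; the
batch value is arbitrary): subtracting min(to_free, batch) each iteration
lands exactly on zero, never below it.

#include <assert.h>
#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

int main(void)
{
        int batch = 63;         /* stand-in for pcp->batch */

        for (int start = 0; start <= 10000; start++) {
                int to_free = start;

                while (to_free > 0)
                        to_free -= MIN(to_free, batch);

                /* to_free never overshoots past zero */
                assert(to_free == 0);
        }
        printf("to_free always terminates at exactly 0\n");
        return 0;
}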
* Re: [PATCH v5 0/3] mm/page_alloc: Batch callers of free_pcppages_bulk
2025-10-14 17:54 ` [PATCH v5 0/3] mm/page_alloc: Batch callers of free_pcppages_bulk Vlastimil Babka
@ 2025-10-14 19:17 ` Joshua Hahn
0 siblings, 0 replies; 8+ messages in thread
From: Joshua Hahn @ 2025-10-14 19:17 UTC (permalink / raw)
To: Vlastimil Babka
Cc: Andrew Morton, Chris Mason, Kiryl Shutsemau, Liam R. Howlett,
Brendan Jackman, David Hildenbrand, Johannes Weiner,
Lorenzo Stoakes, Michal Hocko, Mike Rapoport, Suren Baghdasaryan,
Zi Yan, linux-kernel, linux-mm, kernel-team
On Tue, 14 Oct 2025 19:54:28 +0200 Vlastimil Babka <vbabka@suse.cz> wrote:
> On 10/14/25 16:50, Joshua Hahn wrote:
> > Changelog
> > =========
> > v4 --> v5:
> > - Wordsmithing
> > - Patches 1/3 and 2/3 were left untouched.
> > - Patch 3/3 no longer checks for the to_free == 0 case. It also now checks
> > for pcp->count > 0 as the condition inside the while loop, and the early
> > break checks for the opposite condition. Note that both to_free and
> > pcp->count can become negative due to high-order pages that are freed, so
> > we must check for (to_free <= 0 || pcp->count <= 0), instead of just
> > checking for == 0.
Hello Vlastimil,
> I don't see how that's possible?
> - to_free is decremented by to_free_batched = min(to_free, batch); so it
> can't go negative.
> - pcp->count indeed decrements by nr_pages but it should be exactly zero
> once pcp becomes empty. It's true that internally in free_pcppages_bulk()
> the count parameter (where we pass to_free_batched) can go negative, but
> that doesn't affect to_free_batched in the caller free_frozen_page_commit().
> So testing for <= is unnecessary and only looks weird?
You are totally right. For the first point, that must have been a slip-up
on my part; for some reason I thought it could go negative after looking at
free_pcppages_bulk, but it obviously can't, since we take the min as you
pointed out.
The same goes for pcp->count; I realize it cannot become negative. I think
I was being overly cautious without really thinking about what I was
protecting against. I'll send a fixlet for 3/3 which should hopefully be
folded in.
Sorry about these mistakes; I think they could have been avoided had I
thought more about the code. I hope you have a great day!
Joshua