* [PATCH mm-unstable v3] mm/page_alloc: keep track of free highatomic
From: Yu Zhao @ 2024-10-28 18:26 UTC
To: Andrew Morton
Cc: Vlastimil Babka, linux-mm, linux-kernel, Yu Zhao, Link Lin,
David Rientjes
OOM kills due to vastly overestimated free highatomic reserves were
observed:
... invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0 ...
Node 0 Normal free:1482936kB boost:0kB min:410416kB low:739404kB high:1068392kB reserved_highatomic:1073152KB ...
Node 0 Normal: 1292*4kB (ME) 1920*8kB (E) 383*16kB (UE) 220*32kB (ME) 340*64kB (E) 2155*128kB (UE) 3243*256kB (UE) 615*512kB (U) 1*1024kB (M) 0*2048kB 0*4096kB = 1477408kB
The second line above shows that the OOM kill was due to the following
condition:
free (1482936kB) - reserved_highatomic (1073152kB) = 409784kB < min (410416kB)
And the third line shows there were no free pages in any
MIGRATE_HIGHATOMIC pageblocks, which otherwise would show up as type
'H'. Therefore __zone_watermark_unusable_free() underestimated the
usable free memory by over 1GB, which resulted in the unnecessary OOM
kill above.
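Concretely, the failing check boils down to the arithmetic below; this
is a simplified, self-contained sketch with illustrative names (not
the kernel's code), plugging in the numbers from the report above:

	#include <stdbool.h>
	#include <stdio.h>

	/* callers without rights to reserves must stay above min */
	static bool watermark_ok(long free_kb, long unusable_kb, long min_kb)
	{
		return free_kb - unusable_kb >= min_kb;
	}

	int main(void)
	{
		/*
		 * The whole highatomic reserve is counted as unusable,
		 * even though none of it is actually free:
		 * 1482936 - 1073152 = 409784 < 410416
		 */
		printf("%d\n", watermark_ok(1482936, 1073152, 410416));	/* prints 0 */
		return 0;
	}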
The comment in __zone_watermark_unusable_free() warns about this
potential risk:
If the caller does not have rights to reserves below the min
watermark then subtract the high-atomic reserves. This will
over-estimate the size of the atomic reserve but it avoids a search.
However, it is possible to keep track of the free pages in reserved
highatomic pageblocks with a new per-zone counter, nr_free_highatomic,
protected by the zone lock, avoiding the search when calculating the
usable free memory. The cost is minimal: simple arithmetic in the
highatomic alloc/free/move paths.
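To illustrate, the accounting is nothing more than signed updates of a
single counter. The following is a self-contained userspace model, not
kernel code; it assumes, as with the existing freelist accounting, that
the move path invokes the helper once for the old migratetype and once
for the new one:

	#include <stdio.h>

	enum migratetype { MIGRATE_MOVABLE, MIGRATE_HIGHATOMIC };

	static unsigned long nr_free_highatomic;	/* protected by zone->lock */

	/* model of account_freepages(); nr_pages is signed */
	static void account_freepages(long nr_pages, enum migratetype mt)
	{
		if (mt == MIGRATE_HIGHATOMIC)
			nr_free_highatomic += nr_pages;
	}

	int main(void)
	{
		account_freepages(512, MIGRATE_HIGHATOMIC);	/* free into a highatomic block */
		account_freepages(-512, MIGRATE_HIGHATOMIC);	/* block converted: leaving highatomic... */
		account_freepages(512, MIGRATE_MOVABLE);	/* ...and entering MOVABLE */
		printf("%lu\n", nr_free_highatomic);		/* 0: the counter stays exact */
		return 0;
	}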
Note that since nr_free_highatomic can be relatively small, using a
per-cpu counter might cause too much drift and defeat its purpose,
in addition to the extra memory overhead.
Reported-by: Link Lin <linkl@google.com>
Signed-off-by: Yu Zhao <yuzhao@google.com>
Acked-by: David Rientjes <rientjes@google.com>
---
include/linux/mmzone.h | 1 +
mm/page_alloc.c | 10 +++++++---
2 files changed, 8 insertions(+), 3 deletions(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 2e8c4307c728..5e8f567753bd 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -825,6 +825,7 @@ struct zone {
unsigned long watermark_boost;
unsigned long nr_reserved_highatomic;
+ unsigned long nr_free_highatomic;
/*
* We don't know if the memory that we're going to allocate will be
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a78acaae6d9c..372a386f34f5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -635,6 +635,8 @@ compaction_capture(struct capture_control *capc, struct page *page,
static inline void account_freepages(struct zone *zone, int nr_pages,
int migratetype)
{
+ lockdep_assert_held(&zone->lock);
+
if (is_migrate_isolate(migratetype))
return;
@@ -642,6 +644,9 @@ static inline void account_freepages(struct zone *zone, int nr_pages,
if (is_migrate_cma(migratetype))
__mod_zone_page_state(zone, NR_FREE_CMA_PAGES, nr_pages);
+
+ if (is_migrate_highatomic(migratetype))
+ WRITE_ONCE(zone->nr_free_highatomic, zone->nr_free_highatomic + nr_pages);
}
/* Used for pages not on another list */
@@ -3117,11 +3122,10 @@ static inline long __zone_watermark_unusable_free(struct zone *z,
/*
* If the caller does not have rights to reserves below the min
- * watermark then subtract the high-atomic reserves. This will
- * over-estimate the size of the atomic reserve but it avoids a search.
+ * watermark then subtract the free pages reserved for highatomic.
*/
if (likely(!(alloc_flags & ALLOC_RESERVES)))
- unusable_free += z->nr_reserved_highatomic;
+ unusable_free += READ_ONCE(z->nr_free_highatomic);
#ifdef CONFIG_CMA
/* If allocation can't use CMA areas don't use free CMA pages */
--
2.47.0.163.g1226f6d8fa-goog
* Re: [PATCH mm-unstable v3] mm/page_alloc: keep track of free highatomic
From: Vlastimil Babka @ 2024-10-28 18:33 UTC
To: Yu Zhao, Andrew Morton; +Cc: linux-mm, linux-kernel, Link Lin, David Rientjes
On 10/28/24 19:26, Yu Zhao wrote:
> Reported-by: Link Lin <linkl@google.com>
> Signed-off-by: Yu Zhao <yuzhao@google.com>
> Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
For LTS benefit I'd also add:
Cc: <stable@vger.kernel.org> # v6.12+
* Re: [PATCH mm-unstable v3] mm/page_alloc: keep track of free highatomic
From: Johannes Weiner @ 2024-10-29 16:46 UTC
To: Yu Zhao
Cc: Andrew Morton, Vlastimil Babka, linux-mm, linux-kernel, Link Lin,
David Rientjes
On Mon, Oct 28, 2024 at 12:26:53PM -0600, Yu Zhao wrote:
>
> Reported-by: Link Lin <linkl@google.com>
> Signed-off-by: Yu Zhao <yuzhao@google.com>
> Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> @@ -642,6 +644,9 @@ static inline void account_freepages(struct zone *zone, int nr_pages,
>
> if (is_migrate_cma(migratetype))
> __mod_zone_page_state(zone, NR_FREE_CMA_PAGES, nr_pages);
> +
> + if (is_migrate_highatomic(migratetype))
> + WRITE_ONCE(zone->nr_free_highatomic, zone->nr_free_highatomic + nr_pages);
Minor nit, the page can only be of one migratetype, so `else if' would
be better.
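That is, something like:

	if (is_migrate_cma(migratetype))
		__mod_zone_page_state(zone, NR_FREE_CMA_PAGES, nr_pages);
	else if (is_migrate_highatomic(migratetype))
		WRITE_ONCE(zone->nr_free_highatomic, zone->nr_free_highatomic + nr_pages);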
* Re: [PATCH mm-unstable v3] mm/page_alloc: keep track of free highatomic
From: Yu Zhao @ 2024-10-29 16:49 UTC
To: Johannes Weiner, Andrew Morton
Cc: Vlastimil Babka, linux-mm, linux-kernel, Link Lin, David Rientjes
On Tue, Oct 29, 2024 at 10:46 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> On Mon, Oct 28, 2024 at 12:26:53PM -0600, Yu Zhao wrote:
> > @@ -642,6 +644,9 @@ static inline void account_freepages(struct zone *zone, int nr_pages,
> >
> > if (is_migrate_cma(migratetype))
> > __mod_zone_page_state(zone, NR_FREE_CMA_PAGES, nr_pages);
> > +
> > + if (is_migrate_highatomic(migratetype))
> > + WRITE_ONCE(zone->nr_free_highatomic, zone->nr_free_highatomic + nr_pages);
>
> Minor nit, the page can only be of one migratetype, so `else if' would
> be better.
Right (copied and pasted without thinking).
Andrew, could you please fix this up in place? Thank you!