From: Johannes Weiner <hannes@cmpxchg.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>,
Brendan Jackman <jackmanb@google.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/3] mm: page_alloc: remove remnants of unlocked migratetype updates
Date: Mon, 24 Feb 2025 19:08:25 -0500
Message-ID: <20250225001023.1494422-3-hannes@cmpxchg.org>
In-Reply-To: <20250225001023.1494422-1-hannes@cmpxchg.org>
The freelist hygiene patches made migratetype accesses fully
protected under the zone->lock. Remove the remnants of the old race
handling from the MIGRATE_HIGHATOMIC code.
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
mm/page_alloc.c | 50 ++++++++++++++++---------------------------------
1 file changed, 16 insertions(+), 34 deletions(-)
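A minimal sketch of the locking invariant this relies on (not part of
the patch; do_fallback_steal() and the simplified signature are
hypothetical stand-ins, the real code is in the diff below): with
zone->lock held across the whole fallback path, a pageblock's
migratetype cannot change underneath us, so the caller can read it
once and pass it down, and the defensive highatomic re-check in the
callee becomes dead code.

static struct page *fallback_steal_sketch(struct zone *zone, struct page *page,
					  int order, int start_type)
{
	int block_type;

	/* The whole fallback path now runs under the zone lock. */
	lockdep_assert_held(&zone->lock);

	/*
	 * Stable for the duration of the critical section: read it once
	 * here and hand it to the callee as a parameter.
	 */
	block_type = get_pageblock_migratetype(page);

	/* No is_migrate_highatomic(block_type) bail-out needed anymore. */
	return do_fallback_steal(zone, page, order, start_type, block_type);
}

The same reasoning drives the unreserve_highatomic_pageblock() hunk:
a page taken from the MIGRATE_HIGHATOMIC free list while the lock is
held is known to be highatomic, so the mt re-read and the
is_migrate_highatomic() check can go.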
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9ea14ec52449..53d315aa69c4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1991,20 +1991,10 @@ static inline bool boost_watermark(struct zone *zone)
static struct page *
try_to_steal_block(struct zone *zone, struct page *page,
int current_order, int order, int start_type,
- unsigned int alloc_flags)
+ int block_type, unsigned int alloc_flags)
{
int free_pages, movable_pages, alike_pages;
unsigned long start_pfn;
- int block_type;
-
- block_type = get_pageblock_migratetype(page);
-
- /*
- * This can happen due to races and we want to prevent broken
- * highatomic accounting.
- */
- if (is_migrate_highatomic(block_type))
- return NULL;
/* Take ownership for orders >= pageblock_order */
if (current_order >= pageblock_order) {
@@ -2179,33 +2169,22 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
spin_lock_irqsave(&zone->lock, flags);
for (order = 0; order < NR_PAGE_ORDERS; order++) {
struct free_area *area = &(zone->free_area[order]);
- int mt;
+ unsigned long size;
page = get_page_from_free_area(area, MIGRATE_HIGHATOMIC);
if (!page)
continue;
- mt = get_pageblock_migratetype(page);
/*
- * In page freeing path, migratetype change is racy so
- * we can counter several free pages in a pageblock
- * in this loop although we changed the pageblock type
- * from highatomic to ac->migratetype. So we should
- * adjust the count once.
+ * It should never happen but changes to
+ * locking could inadvertently allow a per-cpu
+ * drain to add pages to MIGRATE_HIGHATOMIC
+ * while unreserving so be safe and watch for
+ * underflows.
*/
- if (is_migrate_highatomic(mt)) {
- unsigned long size;
- /*
- * It should never happen but changes to
- * locking could inadvertently allow a per-cpu
- * drain to add pages to MIGRATE_HIGHATOMIC
- * while unreserving so be safe and watch for
- * underflows.
- */
- size = max(pageblock_nr_pages, 1UL << order);
- size = min(size, zone->nr_reserved_highatomic);
- zone->nr_reserved_highatomic -= size;
- }
+ size = max(pageblock_nr_pages, 1UL << order);
+ size = min(size, zone->nr_reserved_highatomic);
+ zone->nr_reserved_highatomic -= size;
/*
* Convert to ac->migratetype and avoid the normal
@@ -2217,10 +2196,12 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
* may increase.
*/
if (order < pageblock_order)
- ret = move_freepages_block(zone, page, mt,
+ ret = move_freepages_block(zone, page,
+ MIGRATE_HIGHATOMIC,
ac->migratetype);
else {
- move_to_free_list(page, zone, order, mt,
+ move_to_free_list(page, zone, order,
+ MIGRATE_HIGHATOMIC,
ac->migratetype);
change_pageblock_range(page, order,
ac->migratetype);
@@ -2294,7 +2275,8 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
page = get_page_from_free_area(area, fallback_mt);
page = try_to_steal_block(zone, page, current_order, order,
- start_migratetype, alloc_flags);
+ start_migratetype, fallback_mt,
+ alloc_flags);
if (page)
goto got_one;
}
--
2.48.1