From: Mel Gorman <mgorman@techsingularity.net>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Hugh Dickins <hughd@google.com>, Yu Zhao <yuzhao@google.com>,
Vlastimil Babka <vbabka@suse.cz>,
Marcelo Tosatti <mtosatti@redhat.com>,
Michal Hocko <mhocko@kernel.org>,
Marek Szyprowski <m.szyprowski@samsung.com>,
LKML <linux-kernel@vger.kernel.org>,
Linux-MM <linux-mm@kvack.org>,
Mel Gorman <mgorman@techsingularity.net>
Subject: [PATCH 2/2] mm/page_alloc: Simplify locking during free_unref_page_list
Date: Tue, 22 Nov 2022 13:12:29 +0000
Message-ID: <20221122131229.5263-3-mgorman@techsingularity.net>
In-Reply-To: <20221122131229.5263-1-mgorman@techsingularity.net>
While freeing a large list, the zone lock will be released and reacquired
to avoid long hold times since commit c24ad77d962c ("mm/page_alloc.c: avoid
excessive IRQ disabled times in free_unref_page_list()"). As suggested
by Vlastimil Babka, the lock release/reacquire logic can be simplified by
reusing the logic that acquires a different lock when changing zones.
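For illustration only, a minimal userspace sketch of the resulting control
flow follows. It is not kernel code: pthread mutexes, struct fake_zone and
BATCH_LIMIT are stand-ins for the pcp spinlock, struct zone and
SWAP_CLUSTER_MAX, and the page list is faked with a counter.

	/* Sketch of the single "zone changed or batch limit hit" test. */
	#include <pthread.h>
	#include <stdio.h>

	#define BATCH_LIMIT	8	/* stand-in for SWAP_CLUSTER_MAX */
	#define NPAGES		100

	struct fake_zone {
		pthread_mutex_t lock;	/* stand-in for the pcp lock */
		int id;
	};

	static struct fake_zone zones[2] = {
		{ PTHREAD_MUTEX_INITIALIZER, 0 },
		{ PTHREAD_MUTEX_INITIALIZER, 1 },
	};

	int main(void)
	{
		struct fake_zone *locked_zone = NULL;
		int batch_count = 0;

		for (int i = 0; i < NPAGES; i++) {
			/* Pages alternate zones to force lock switches. */
			struct fake_zone *zone = &zones[(i / 10) % 2];

			/*
			 * One test covers both reasons to cycle the lock:
			 * the page belongs to a different zone, or the lock
			 * has been held for BATCH_LIMIT frees in a row.
			 */
			if (zone != locked_zone ||
			    batch_count == BATCH_LIMIT) {
				if (locked_zone)
					pthread_mutex_unlock(&locked_zone->lock);
				batch_count = 0;
				pthread_mutex_lock(&zone->lock);
				locked_zone = zone;
			}

			/* "Free" one page under the lock. */
			batch_count++;
		}

		if (locked_zone)
			pthread_mutex_unlock(&locked_zone->lock);

		printf("freed %d fake pages\n", NPAGES);
		return 0;
	}

The point of the sketch is that the batch limit no longer needs its own
unlock/clear block at the bottom of the loop; the existing "different zone"
path at the top handles both cases.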
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
---
mm/page_alloc.c | 25 +++++++++----------------
1 file changed, 9 insertions(+), 16 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 445066617204..08e32daf0918 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3518,13 +3518,19 @@ void free_unref_page_list(struct list_head *list)
list_del(&page->lru);
migratetype = get_pcppage_migratetype(page);
- /* Different zone, different pcp lock. */
- if (zone != locked_zone) {
+ /*
+ * Either different zone requiring a different pcp lock or
+ * excessive lock hold times when freeing a large list of
+ * pages.
+ */
+ if (zone != locked_zone || batch_count == SWAP_CLUSTER_MAX) {
if (pcp) {
pcp_spin_unlock(pcp);
pcp_trylock_finish(UP_flags);
}
+ batch_count = 0;
+
/*
* trylock is necessary as pages may be getting freed
* from IRQ or SoftIRQ context after an IO completion.
@@ -3539,7 +3545,6 @@ void free_unref_page_list(struct list_head *list)
continue;
}
locked_zone = zone;
- batch_count = 0;
}
/*
@@ -3551,19 +3556,7 @@ void free_unref_page_list(struct list_head *list)
trace_mm_page_free_batched(page);
free_unref_page_commit(zone, pcp, page, migratetype, 0);
-
- /*
- * Guard against excessive lock hold times when freeing
- * a large list of pages. Lock will be reacquired if
- * necessary on the next iteration.
- */
- if (++batch_count == SWAP_CLUSTER_MAX) {
- pcp_spin_unlock(pcp);
- pcp_trylock_finish(UP_flags);
- batch_count = 0;
- pcp = NULL;
- locked_zone = NULL;
- }
+ batch_count++;
}
if (pcp) {
--
2.35.3