From: Mel Gorman <mgorman@techsingularity.net>
To: Yu Zhao <yuzhao@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Nicolas Saenz Julienne <nsaenzju@redhat.com>,
Marcelo Tosatti <mtosatti@redhat.com>,
Vlastimil Babka <vbabka@suse.cz>,
Michal Hocko <mhocko@kernel.org>, Hugh Dickins <hughd@google.com>,
Marek Szyprowski <m.szyprowski@samsung.com>,
LKML <linux-kernel@vger.kernel.org>,
Linux-MM <linux-mm@kvack.org>
Subject: [PATCH] mm/page_alloc: Replace local_lock with normal spinlock -fix
Date: Mon, 27 Jun 2022 09:46:45 +0100
Message-ID: <20220627084645.GA27531@techsingularity.net>
In-Reply-To: <CAOUHufba2KQLbFMoHusTpvEBHS_EWQ1NnOfm3Wczmykk0A1pTw@mail.gmail.com>

As noted by Yu Zhao, use pcp_spin_trylock_irqsave instead of
pcpu_spin_trylock_irqsave. This is a fix to the mm-unstable patch
mm-page_alloc-replace-local_lock-with-normal-spinlock.patch
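
For context, pcp_spin_trylock_irqsave() is the per_cpu_pages-specific
wrapper that the parent patch layers on top of the generic
pcpu_spin_trylock_irqsave() macro, so call sites do not have to repeat
the struct type and lock member. A paraphrased sketch of that
relationship (not a verbatim quote of the mm-unstable code; the exact
bodies there may differ):

/*
 * Sketch only: generic per-cpu trylock that pins the task, tries to
 * take the embedded spinlock, and returns the per-cpu pointer or NULL
 * if the lock could not be acquired.
 */
#define pcpu_spin_trylock_irqsave(type, member, ptr, flags)		\
({									\
	type *_ret;							\
	pcpu_task_pin();						\
	_ret = this_cpu_ptr(ptr);					\
	if (!spin_trylock_irqsave(&_ret->member, flags)) {		\
		pcpu_task_unpin();					\
		_ret = NULL;						\
	}								\
	_ret;								\
})

/* Per-cpu-pages variant: fixes the type and lock member for callers. */
#define pcp_spin_trylock_irqsave(ptr, flags)				\
	pcpu_spin_trylock_irqsave(struct per_cpu_pages, lock, ptr, flags)
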
Reported-by: Yu Zhao <yuzhao@google.com>
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
mm/page_alloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 71065b01827b..934d1b5a5449 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3493,7 +3493,7 @@ void free_unref_page(struct page *page, unsigned int order)
 	zone = page_zone(page);
 	pcp_trylock_prepare(UP_flags);
-	pcp = pcpu_spin_trylock_irqsave(struct per_cpu_pages, lock, zone->per_cpu_pageset, flags);
+	pcp = pcp_spin_trylock_irqsave(zone->per_cpu_pageset, flags);
 	if (pcp) {
 		free_unref_page_commit(zone, pcp, page, migratetype, order);
 		pcp_spin_unlock_irqrestore(pcp, flags);
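
A note on the surrounding pattern in free_unref_page(): the trylock is
bracketed by pcp_trylock_prepare(UP_flags) (and a matching
pcp_trylock_finish) because spin_trylock() cannot fail on !SMP builds,
so the trylock alone would not prevent re-entry from IRQ context there.
A paraphrased sketch of those helpers from the parent patch (not a
verbatim quote; the exact form in mm-unstable may differ):

#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT)
/* spin_trylock() alone is sufficient protection on SMP and PREEMPT_RT. */
#define pcp_trylock_prepare(flags)	do { } while (0)
#define pcp_trylock_finish(flags)	do { } while (0)
#else
/* UP spin_trylock() always succeeds, so disable IRQs to prevent re-entrancy. */
#define pcp_trylock_prepare(flags)	local_irq_save(flags)
#define pcp_trylock_finish(flags)	local_irq_restore(flags)
#endif
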
Thread overview: 19+ messages
2022-06-24 12:54 [PATCH v5 00/7] Drain remote per-cpu directly Mel Gorman
2022-06-24 12:54 ` [PATCH 1/7] mm/page_alloc: Add page->buddy_list and page->pcp_list Mel Gorman
2022-06-24 12:54 ` [PATCH 2/7] mm/page_alloc: Use only one PCP list for THP-sized allocations Mel Gorman
2022-06-24 12:54 ` [PATCH 3/7] mm/page_alloc: Split out buddy removal code from rmqueue into separate helper Mel Gorman
2022-06-24 12:54 ` [PATCH 4/7] mm/page_alloc: Remove mistaken page == NULL check in rmqueue Mel Gorman
2022-06-24 12:54 ` [PATCH 5/7] mm/page_alloc: Protect PCP lists with a spinlock Mel Gorman
2022-07-04 12:31 ` Vlastimil Babka
2022-07-05 7:20 ` Mel Gorman
2022-07-04 16:32 ` Nicolas Saenz Julienne
2022-06-24 12:54 ` [PATCH 6/7] mm/page_alloc: Remotely drain per-cpu lists Mel Gorman
2022-07-04 14:28 ` Vlastimil Babka
2022-06-24 12:54 ` [PATCH 7/7] mm/page_alloc: Replace local_lock with normal spinlock Mel Gorman
2022-06-24 18:59 ` Yu Zhao
2022-06-27 8:46 ` Mel Gorman [this message]
2022-07-04 14:39 ` Vlastimil Babka
2022-07-04 16:33 ` Nicolas Saenz Julienne
2022-07-03 23:28 ` [PATCH v5 00/7] Drain remote per-cpu directly Andrew Morton
2022-07-03 23:31 ` Yu Zhao
2022-07-03 23:35 ` Andrew Morton