From: "David Hildenbrand (Arm)" <david@kernel.org>
To: Vlastimil Babka <vbabka@kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>,
Brendan Jackman <jackmanb@google.com>,
Johannes Weiner <hannes@cmpxchg.org>, Zi Yan <ziy@nvidia.com>
Cc: Mel Gorman <mgorman@techsingularity.net>,
Matthew Wilcox <willy@infradead.org>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Clark Williams <clrkwllms@kernel.org>,
Steven Rostedt <rostedt@goodmis.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
linux-rt-devel@lists.linux.dev
Subject: Re: [PATCH 1/3] mm/page_alloc: effectively disable pcp with CONFIG_SMP=n
Date: Tue, 3 Mar 2026 17:09:35 +0100
Message-ID: <5d5c8927-95ad-47c4-ae8c-9b56f5534e27@kernel.org>
In-Reply-To: <20260227-b4-pcp-locking-cleanup-v1-1-f7e22e603447@kernel.org>
> - * With the UP spinlock implementation, when we spin_lock(&pcp->lock) (for i.e.
> - * a potentially remote cpu drain) and get interrupted by an operation that
> - * attempts pcp_spin_trylock(), we can't rely on the trylock failure due to UP
> - * spinlock assumptions making the trylock a no-op. So we have to turn that
> - * spin_lock() to a spin_lock_irqsave(). This works because on UP there are no
> - * remote cpu's so we can only be locking the only existing local one.
> + * On CONFIG_SMP=n the UP implementation of spin_trylock() never fails and thus
> + * is not compatible with our locking scheme. However we do not need pcp for
> + * scalability in the first place, so just make all the trylocks fail and take
> + * the slow path unconditionally.
> */
> +#else
> +#define pcp_spin_trylock(ptr) \
> + NULL
> +
> +#define pcp_spin_unlock(ptr) \
> + BUG_ON(1)

Did you try turning this into a BUILD_BUG()?

I'd assume that the compiler would optimize out all the dead code and
consequently never trigger the BUILD_BUG(), e.g. for a pattern like:
	if (pcp_spin_trylock(ptr)) {
		/* dead code: pcp_spin_trylock() is a constant NULL */
		pcp_spin_unlock(ptr);
	}
IIUC, the trylock+unlock pairs are not split across multiple functions,
so the compiler should be able to prove the unlock side dead.
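
Something like the following is what I have in mind -- a rough,
untested sketch that only swaps the BUG_ON(1) for a BUILD_BUG(), the
rest staying as in your patch:

/*
 * pcp_spin_trylock() is a constant NULL, so every unlock site should
 * be provably dead and the BUILD_BUG() optimized away; if an unlock
 * ever becomes reachable, we get a build failure instead of a
 * runtime BUG.
 */
#define pcp_spin_trylock(ptr)	NULL

#define pcp_spin_unlock(ptr)	BUILD_BUG()

That way, a future caller that can actually reach the unlock path
would be caught at build time rather than at runtime.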
--
Cheers,
David