From: Hillf Danton <hdanton@sina.com>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: Nicolas Saenz Julienne <nsaenzju@redhat.com>,
Marcelo Tosatti <mtosatti@redhat.com>,
Vlastimil Babka <vbabka@suse.cz>,
Michal Hocko <mhocko@kernel.org>,
LKML <linux-kernel@vger.kernel.org>,
Linux-MM <linux-mm@kvack.org>
Subject: Re: [PATCH 5/6] mm/page_alloc: Protect PCP lists with a spinlock
Date: Wed, 20 Apr 2022 22:02:14 +0800
Message-ID: <20220420140214.2330-1-hdanton@sina.com>
In-Reply-To: <20220420095906.27349-6-mgorman@techsingularity.net>

On Wed, 20 Apr 2022 10:59:05 +0100 Mel Gorman wrote:
> void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
> {
> - unsigned long flags;
> int to_drain, batch;
>
> - local_lock_irqsave(&pagesets.lock, flags);
> batch = READ_ONCE(pcp->batch);
> to_drain = min(pcp->count, batch);
> - if (to_drain > 0)
> + if (to_drain > 0) {
> + unsigned long flags;
> +
> + /* free_pcppages_bulk expects IRQs disabled for zone->lock */
> + local_irq_save(flags);
> +
> + spin_lock(&pcp->lock);
Nit: spin_lock_irqsave() could be used here instead of the separate
local_irq_save() and spin_lock() pair.
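For illustration, a minimal sketch of the combined form (assuming nothing
else in the series needs the IRQ-disabled window to extend beyond the
pcp->lock critical section):

	void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
	{
		int to_drain, batch;

		batch = READ_ONCE(pcp->batch);
		to_drain = min(pcp->count, batch);
		if (to_drain > 0) {
			unsigned long flags;

			/*
			 * spin_lock_irqsave() disables IRQs and takes the
			 * lock in one step; free_pcppages_bulk() still runs
			 * with IRQs off as zone->lock requires.
			 */
			spin_lock_irqsave(&pcp->lock, flags);
			free_pcppages_bulk(zone, to_drain, pcp, 0);
			spin_unlock_irqrestore(&pcp->lock, flags);
		}
	}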
> free_pcppages_bulk(zone, to_drain, pcp, 0);
> - local_unlock_irqrestore(&pagesets.lock, flags);
> + spin_unlock(&pcp->lock);
> +
> + local_irq_restore(flags);
> + }
> }
> #endif
Hillf
Thread overview: 27+ messages
2022-04-20 9:59 [RFC PATCH 0/6] Drain remote per-cpu directly Mel Gorman
2022-04-20 9:59 ` [PATCH 1/6] mm/page_alloc: Add page->buddy_list and page->pcp_list Mel Gorman
2022-04-20 20:43 ` Matthew Wilcox
2022-04-21 8:38 ` Mel Gorman
2022-04-20 9:59 ` [PATCH 2/6] mm/page_alloc: Use only one PCP list for THP-sized allocations Mel Gorman
2022-04-20 9:59 ` [PATCH 3/6] mm/page_alloc: Split out buddy removal code from rmqueue into separate helper Mel Gorman
2022-04-20 9:59 ` [PATCH 4/6] mm/page_alloc: Remove unnecessary page == NULL check in rmqueue Mel Gorman
2022-04-20 9:59 ` [PATCH 5/6] mm/page_alloc: Protect PCP lists with a spinlock Mel Gorman
2022-04-20 14:02 ` Hillf Danton [this message]
2022-04-20 14:35 ` Nicolas Saenz Julienne
2022-04-26 16:42 ` Nicolas Saenz Julienne
2022-04-26 16:48 ` Vlastimil Babka
2022-04-29 9:13 ` Mel Gorman
2022-04-26 19:24 ` Minchan Kim
2022-04-29 9:05 ` Mel Gorman
2022-04-20 9:59 ` [PATCH 6/6] mm/page_alloc: Remotely drain per-cpu lists Mel Gorman
2022-04-25 22:58 ` [RFC PATCH 0/6] Drain remote per-cpu directly Minchan Kim
2022-04-26 11:06 ` Nicolas Saenz Julienne
2022-04-27 15:21 ` Marcelo Tosatti
2022-04-26 2:49 ` Suren Baghdasaryan
2022-04-26 6:30 ` Suren Baghdasaryan
2022-05-09 13:07 [RFC PATCH 0/6] Drain remote per-cpu directly v2 Mel Gorman
2022-05-09 13:08 ` [PATCH 5/6] mm/page_alloc: Protect PCP lists with a spinlock Mel Gorman
2022-05-22 2:49 ` Hugh Dickins
2022-05-24 12:12 ` Mel Gorman
2022-05-24 12:19 ` Mel Gorman
2022-05-12 8:50 [PATCH 0/6] Drain remote per-cpu directly v3 Mel Gorman
2022-05-12 8:50 ` [PATCH 5/6] mm/page_alloc: Protect PCP lists with a spinlock Mel Gorman
2022-05-13 12:22 ` Nicolas Saenz Julienne