From: Nicolas Saenz Julienne <nsaenzju@redhat.com>
To: akpm@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
frederic@kernel.org, tglx@linutronix.de, peterz@infradead.org,
mtosatti@redhat.com, nilal@redhat.com, mgorman@suse.de,
linux-rt-users@vger.kernel.org, vbabka@suse.cz, cl@linux.com,
paulmck@kernel.org, ppandit@redhat.com,
Nicolas Saenz Julienne <nsaenzju@redhat.com>
Subject: [RFC 1/3] mm/page_alloc: Simplify __rmqueue_pcplist()'s arguments
Date: Fri, 8 Oct 2021 18:19:20 +0200
Message-ID: <20211008161922.942459-2-nsaenzju@redhat.com>
In-Reply-To: <20211008161922.942459-1-nsaenzju@redhat.com>

Both callers of __rmqueue_pcplist() select the list to allocate from in
the same way: they compute an index into the per-cpu lists from the
allocation's migratetype and order. Since both values are already passed
to __rmqueue_pcplist(), centralize the list lookup inside the function
itself.
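
For illustration only, here is a small user-space sketch of the lookup
being centralized. The constants, the list_head/per_cpu_pages definitions
and the body of order_to_pindex() below are simplified stand-ins, not the
kernel's exact definitions (the real order_to_pindex() also handles the
costly-order/THP case):

#include <stdio.h>

/* Made-up stand-ins; the kernel's real values and types differ. */
#define MIGRATE_PCPTYPES	3	/* unmovable, movable, reclaimable */
#define NR_PCP_ORDERS		4	/* orders kept on the per-cpu lists */
#define NR_PCP_LISTS		(MIGRATE_PCPTYPES * NR_PCP_ORDERS)

struct list_head { struct list_head *next, *prev; };

struct per_cpu_pages {
	struct list_head lists[NR_PCP_LISTS];
};

/* Rough shape of order_to_pindex(): one list per (order, migratetype). */
static unsigned int order_to_pindex(int migratetype, unsigned int order)
{
	return (MIGRATE_PCPTYPES * order) + migratetype;
}

int main(void)
{
	struct per_cpu_pages pcp = { 0 };
	/* e.g. an order-1, MIGRATE_MOVABLE (= 1) request */
	struct list_head *list = &pcp.lists[order_to_pindex(1, 1)];

	printf("using list index %td\n", list - pcp.lists);	/* prints 4 */
	return 0;
}

With the lookup done in one place inside __rmqueue_pcplist(), the callers
only need to hand over the pcp pointer together with migratetype and
order, as the diff below shows.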
Signed-off-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
---
mm/page_alloc.c | 15 ++++++---------
1 file changed, 6 insertions(+), 9 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b37435c274cf..dd89933503b4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3600,11 +3600,13 @@ static inline
struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
int migratetype,
unsigned int alloc_flags,
- struct per_cpu_pages *pcp,
- struct list_head *list)
+ struct per_cpu_pages *pcp)
{
+ struct list_head *list;
struct page *page;
+ list = &pcp->lists[order_to_pindex(migratetype, order)];
+
do {
if (list_empty(list)) {
int batch = READ_ONCE(pcp->batch);
@@ -3643,7 +3645,6 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
unsigned int alloc_flags)
{
struct per_cpu_pages *pcp;
- struct list_head *list;
struct page *page;
unsigned long flags;
@@ -3656,8 +3657,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
*/
pcp = this_cpu_ptr(zone->per_cpu_pageset);
pcp->free_factor >>= 1;
- list = &pcp->lists[order_to_pindex(migratetype, order)];
- page = __rmqueue_pcplist(zone, order, migratetype, alloc_flags, pcp, list);
+ page = __rmqueue_pcplist(zone, order, migratetype, alloc_flags, pcp);
local_unlock_irqrestore(&pagesets.lock, flags);
if (page) {
__count_zid_vm_events(PGALLOC, page_zonenum(page), 1);
@@ -5202,7 +5202,6 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
struct zone *zone;
struct zoneref *z;
struct per_cpu_pages *pcp;
- struct list_head *pcp_list;
struct alloc_context ac;
gfp_t alloc_gfp;
unsigned int alloc_flags = ALLOC_WMARK_LOW;
@@ -5278,7 +5277,6 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
/* Attempt the batch allocation */
local_lock_irqsave(&pagesets.lock, flags);
pcp = this_cpu_ptr(zone->per_cpu_pageset);
- pcp_list = &pcp->lists[order_to_pindex(ac.migratetype, 0)];
while (nr_populated < nr_pages) {
@@ -5288,8 +5286,7 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
continue;
}
- page = __rmqueue_pcplist(zone, 0, ac.migratetype, alloc_flags,
- pcp, pcp_list);
+ page = __rmqueue_pcplist(zone, 0, ac.migratetype, alloc_flags, pcp);
if (unlikely(!page)) {
/* Try and get at least one page */
if (!nr_populated)
--
2.31.1