linux-mm.kvack.org archive mirror
From: Hongru Zhang <zhanghongru06@gmail.com>
To: baohua@kernel.org
Cc: Liam.Howlett@oracle.com, akpm@linux-foundation.org,
	axelrasmussen@google.com, david@kernel.org, hannes@cmpxchg.org,
	jackmanb@google.com, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, lorenzo.stoakes@oracle.com, mhocko@suse.com,
	rppt@kernel.org, surenb@google.com, vbabka@suse.cz,
	weixugc@google.com, yuanchu@google.com, zhanghongru06@gmail.com,
	zhanghongru@xiaomi.com, ziy@nvidia.com
Subject: Re: [PATCH 0/3] mm: add per-migratetype counts to buddy allocator and optimize pagetypeinfo access
Date: Sun,  5 Apr 2026 17:32:24 +0800	[thread overview]
Message-ID: <20260405093224.3892321-1-zhanghongru@xiaomi.com> (raw)
In-Reply-To: <20260402071808.41828-1-baohua@kernel.org>

> On Fri, Nov 28, 2025 at 11:11 AM Hongru Zhang <zhanghongru06@gmail.com> wrote:
> >
> > On mobile devices, some user-space memory management components check
> > memory pressure and fragmentation status periodically or via PSI, and
> > take actions such as killing processes or performing memory compaction
> > based on this information.
> >
> > Under high load scenarios, reading /proc/pagetypeinfo causes memory
> > management components or memory allocation/free paths to be blocked
> > for extended periods waiting for the zone lock, leading to the following
> > issues:
> > 1. Long interrupt-disabled spinlock hold times - occasionally exceeding
> >    10 ms on Qcom 8750 platforms - degrading system real-time performance
> > 2. Memory management components being blocked for extended periods,
> >    preventing rapid acquisition of memory fragmentation information for
> >    critical memory management decisions and actions
> > 3. Increased latency in memory allocation and free paths due to prolonged
> >    zone lock contention
>
> Do you have an idea how long each seq_printf call takes?
>
> Assuming seq_printf is costly, printing while holding
> zone->lock may be suboptimal. A further optimization might be:
>
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 2370c6fb1fcd..f501ca2840a6 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1570,6 +1570,7 @@ static int frag_show(struct seq_file *m, void *arg)
>  	return 0;
>  }
> 
> +#if 0
>  static void pagetypeinfo_showfree_print(struct seq_file *m,
>  					pg_data_t *pgdat, struct zone *zone)
>  {
> @@ -1611,6 +1612,63 @@ static void pagetypeinfo_showfree_print(struct seq_file *m,
>  		seq_putc(m, '\n');
>  	}
>  }
> +#endif
> +
> +static void pagetypeinfo_showfree_print(struct seq_file *m,
> +					pg_data_t *pgdat,
> +					struct zone *zone)
> +{
> +	unsigned long freecounts[MIGRATE_TYPES][NR_PAGE_ORDERS] = { 0 };
> +	bool overflow[MIGRATE_TYPES][NR_PAGE_ORDERS] = { 0 };
> +	int order, mtype;
> +
> +	for (mtype = 0; mtype < MIGRATE_TYPES; mtype++) {
> +		for (order = 0; order < NR_PAGE_ORDERS; ++order) {
> +			struct free_area *area;
> +			struct list_head *curr;
> +			unsigned long freecount = 0;
> +
> +			area = &zone->free_area[order];
> +
> +			list_for_each(curr, &area->free_list[mtype]) {
> +				/*
> +				 * Cap the free_list iteration because it might
> +				 * be really large and we are under a spinlock
> +				 * so a long time spent here could trigger a
> +				 * hard lockup detector. Anyway this is a
> +				 * debugging tool so knowing there is a handful
> +				 * of pages of this order should be more than
> +				 * sufficient.
> +				 */
> +				if (++freecount >= 100000) {
> +					overflow[mtype][order] = true;
> +					break;
> +				}
> +			}
> +			freecounts[mtype][order] = freecount;
> +			spin_unlock_irq(&zone->lock);
> +			cond_resched();
> +			spin_lock_irq(&zone->lock);
> +		}
> +	}
> +
> +	/* printing completely outside the lock */
> +	spin_unlock_irq(&zone->lock);
> +	for (mtype = 0; mtype < MIGRATE_TYPES; mtype++) {
> +		seq_printf(m, "Node %4d, zone %8s, type %12s ",
> +			   pgdat->node_id,
> +			   zone->name,
> +			   migratetype_names[mtype]);
> +
> +		for (order = 0; order < NR_PAGE_ORDERS; ++order) {
> +			seq_printf(m, "%s%6lu ",
> +				   overflow[mtype][order] ? ">" : "",
> +				   freecounts[mtype][order]);
> +		}
> +		seq_putc(m, '\n');
> +	}
> +	spin_lock_irq(&zone->lock);
> +}
> 
>  /* Print out the free pages at each order for each migratetype */
>  static void pagetypeinfo_showfree(struct seq_file *m, void *arg)

Yes, I included the data in [1]: the seq_printf loop costs about 5 us per
call on my machine. I'll look into moving the printing outside the lock in
the next revision.

[1] https://lore.kernel.org/linux-mm/20251201122912.348142-1-zhanghongru@xiaomi.com/
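For reference, the core idea of the series (keep a per-(migratetype, order)
count updated on every free-list add/remove, so a reader never has to walk
the lists under zone->lock) can be modeled in a few lines of userspace C.
The names and array sizes below are illustrative stand-ins, not the kernel's
actual definitions:

```c
#include <assert.h>

/* Stand-ins for the kernel's MIGRATE_TYPES and NR_PAGE_ORDERS. */
#define MT      6
#define ORDERS  11

/*
 * Per-(migratetype, order) free-page count, updated at add/remove time
 * so reading it is O(1) instead of an O(n) free-list walk.
 */
static unsigned long nr_free[MT][ORDERS];

static void add_to_free_list(int mt, int order)
{
	/* in the kernel this would sit next to the list_add() */
	nr_free[mt][order]++;
}

static void del_from_free_list(int mt, int order)
{
	/* ...and this next to the matching list_del() */
	nr_free[mt][order]--;
}
```

With counters like these maintained at add/remove time,
pagetypeinfo_showfree_print() could report the counts directly and the
capped free_list walk in the sketch above would no longer be needed at all.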



Thread overview: 25+ messages
2025-11-28  3:10 Hongru Zhang
2025-11-28  3:11 ` [PATCH 1/3] mm/page_alloc: add per-migratetype counts to buddy allocator Hongru Zhang
2025-11-29  0:34   ` Barry Song
2025-11-28  3:12 ` [PATCH 2/3] mm/vmstat: get fragmentation statistics from per-migratetype count Hongru Zhang
2025-11-28 12:03   ` zhongjinji
2025-11-29  0:00     ` Barry Song
2025-11-29  7:55       ` Barry Song
2025-12-01 12:29       ` Hongru Zhang
2025-12-01 18:54         ` Barry Song
2025-11-28  3:12 ` [PATCH 3/3] mm: optimize free_area_empty() check using per-migratetype counts Hongru Zhang
2025-11-29  0:04   ` Barry Song
2025-11-29  9:24     ` Barry Song
2026-03-03  8:04       ` Hongru Zhang
2026-03-03  8:29     ` Hongru Zhang
2026-04-02  7:28       ` Barry Song
2025-11-28  7:49 ` [PATCH 0/3] mm: add per-migratetype counts to buddy allocator and optimize pagetypeinfo access Lorenzo Stoakes
2025-11-28  8:34   ` Hongru Zhang
2025-11-28  8:40     ` Lorenzo Stoakes
2025-11-28  9:24 ` Vlastimil Babka
2025-11-28 13:08   ` Johannes Weiner
2025-12-01  2:36   ` Hongru Zhang
2025-12-01 17:01     ` Zi Yan
2025-12-02  2:42       ` Hongru Zhang
2026-04-02  7:18 ` Barry Song
2026-04-05  9:32   ` Hongru Zhang [this message]
