From: akash.tyagi <akash.tyagi@mediatek.com>
To: <david@redhat.com>
Cc: <akash.tyagi@mediatek.com>, <akpm@linux-foundation.org>,
	<vbabka@suse.cz>, <matthias.bgg@gmail.com>,
	<angelogioacchino.delregno@collabora.com>, <surenb@google.com>,
	<mhocko@suse.com>, <jackmanb@google.com>, <hannes@cmpxchg.org>,
	<ziy@nvidia.com>, <linux-mm@kvack.org>,
	<linux-kernel@vger.kernel.org>,
	<linux-arm-kernel@lists.infradead.org>,
	<linux-mediatek@lists.infradead.org>, <wsd_upstream@mediatek.com>,
	<chinwen.chang@mediatek.com>
Subject: Re: [RFC PATCH] mm/page_alloc: Add PCP list for THP CMA
Date: Fri, 25 Jul 2025 10:38:10 +0530
Message-ID: <20250725050810.1164957-1-akash.tyagi@mediatek.com>
In-Reply-To: <67a54f31-e568-427a-8fc8-9791fd34e11b@redhat.com>

Hi David,

Thank you for your feedback.

We encountered this issue in the Android Common Kernel (version 6.12), which uses PCP lists for CMA pages.

page_owner trace:
Page allocated via order 9, mask 0x52dc0(GFP_KERNEL|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_ZERO), pid 1, tgid 1 (swapper/0), ts 1065952310 ns
PFN 0x23d200 type Unmovable Block 4585 type CMA Flags 0x4000000000000040(head|zone=1|kasantag=0x0)
 post_alloc_hook+0x228/0x230
 prep_new_page+0x28/0x148
 get_page_from_freelist+0x19d0/0x1a38
 __alloc_pages_noprof+0x1b0/0x440
 ___kmalloc_large_node+0xb4/0x1ec
 __kmalloc_large_node_noprof+0x2c/0xec
 __kmalloc_node_noprof+0x39c/0x548
 __kvmalloc_node_noprof+0xd8/0x18c
 nf_ct_alloc_hashtable+0x64/0x108
 nf_nat_init+0x3c/0xf8
 do_one_initcall+0x150/0x3c0
 do_initcall_level+0xa4/0x15c
 do_initcalls+0x70/0xc0
 do_basic_setup+0x1c/0x28
 kernel_init_freeable+0xcc/0x130
 kernel_init+0x20/0x1ac
 
This UNMOVABLE page was allocated from a CMA pageblock, but it could not be migrated, so the CMA allocation failed.
At first, we fixed this by adding CMA THP pages to the movable THP PCP list.
This fixed the issue of CMA pages being put on the wrong list, but now any movable allocation can use these CMA pages.
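For context, in mainline 6.12 the THP-order PCP bucket is chosen only by movable vs. non-movable, so once a CMA THP page has been demoted to MIGRATE_MOVABLE it shares the movable THP list with every other movable page. Roughly (a simplified sketch of order_to_pindex() in mm/page_alloc.c, details trimmed):

	/*
	 * Simplified sketch: THP-order pages get only two PCP buckets,
	 * movable and non-movable. A CMA page that has already been
	 * demoted to MIGRATE_MOVABLE therefore lands on the movable THP
	 * bucket and can be handed out to any movable allocation.
	 */
	if (order > PAGE_ALLOC_COSTLY_ORDER) {
		/* order == HPAGE_PMD_ORDER here */
		return NR_LOWORDER_PCP_LISTS + (migratetype == MIGRATE_MOVABLE);
	}
	return (MIGRATE_PCPTYPES * order) + migratetype;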

Later, we saw a movable allocation use a CMA page that was then pinned via __filemap_get_folio(). The page stayed pinned for too long, and eventually CMA allocation failed.

page_owner trace:
Page allocated via order 0, mask 0x140c48(GFP_NOFS|__GFP_COMP|__GFP_HARDWALL|__GFP_MOVABLE), pid 1198, tgid 1194 (ccci_mdinit), ts 17918751965 ns
PFN 0x207233 type Movable Block 4153 type CMA Flags 0x4020000000008224(referenced|lru|workingset|private|zone=1|kasantag=0x0)
 post_alloc_hook+0x23c/0x254
 prep_new_page+0x28/0x148
 get_page_from_freelist+0x19d8/0x1a40
 __alloc_pages_noprof+0x1a8/0x430
 __folio_alloc_noprof+0x14/0x5c
 __filemap_get_folio+0x1bc/0x430
 bdev_getblk+0xd4/0x294
 __read_extent_tree_block+0x6c/0x260
 ext4_find_extent+0x22c/0x3dc
 ext4_ext_map_blocks+0x88/0x173c
 ext4_map_query_blocks+0x54/0xe0
 ext4_map_blocks+0xf8/0x518
 _ext4_get_block+0x70/0x188
 ext4_get_block+0x18/0x24
 ext4_block_write_begin+0x154/0x62c
 ext4_write_begin+0x20c/0x630
Page has been migrated, last migrate reason: compaction
Charged to memcg /


Currently, free_unref_page() treats CMA pages as movable, so some MOVABLE allocations may use these CMA pages and pin them. Later, when CMA needs these pages back, they fail to migrate.
free_unref_page()/free_unref_folios():
	migratetype = get_pfnblock_migratetype(page, pfn);
	if (unlikely(migratetype >= MIGRATE_PCPTYPES)) {
		/* MIGRATE_CMA ends up here and is demoted to MIGRATE_MOVABLE */
		migratetype = MIGRATE_MOVABLE;
	}
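For comparison, one simpler direction (a minimal sketch against the mainline 6.12 free path, not what the RFC patch does) would be to stop demoting MIGRATE_CMA here and free such pages straight back to the buddy allocator, so they never sit on a movable PCP list:

	migratetype = get_pfnblock_migratetype(page, pfn);
	if (unlikely(migratetype >= MIGRATE_PCPTYPES)) {
		if (is_migrate_isolate(migratetype) || is_migrate_cma(migratetype)) {
			/* sketch: bypass the PCP lists for isolated and CMA pages */
			free_one_page(page_zone(page), page, pfn, order, FPI_NONE);
			return;
		}
		migratetype = MIGRATE_MOVABLE;
	}

The obvious cost is losing PCP batching for CMA frees, which is what a dedicated CMA THP PCP list, as in the RFC, would keep.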


Best Regards,
Akash Tyagi

