* [PATCH] mm/mempool: fix poisoning order>0 pages with HIGHMEM
From: Vlastimil Babka @ 2025-11-13 18:54 UTC (permalink / raw)
  To: Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin,
	Harry Yoo
  Cc: Jens Axboe, linux-block, linux-mm, linux-kernel,
	kernel test robot, Christoph Hellwig, Vlastimil Babka

The kernel test robot has reported:

  BUG: unable to handle page fault for address: fffba000
  #PF: supervisor write access in kernel mode
  #PF: error_code(0x0002) - not-present page
  *pde = 03171067 *pte = 00000000
  Oops: Oops: 0002 [#1]
  CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Tainted: G                T   6.18.0-rc2-00031-gec7f31b2a2d3 #1 NONE  a1d066dfe789f54bc7645c7989957d2bdee593ca
  Tainted: [T]=RANDSTRUCT
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
  EIP: memset (arch/x86/include/asm/string_32.h:168 arch/x86/lib/memcpy_32.c:17)
  Code: a5 8b 4d f4 83 e1 03 74 02 f3 a4 83 c4 04 5e 5f 5d 2e e9 73 41 01 00 90 90 90 3e 8d 74 26 00 55 89 e5 57 56 89 c6 89 d0 89 f7 <f3> aa 89 f0 5e 5f 5d 2e e9 53 41 01 00 cc cc cc 55 89 e5 53 57 56
  EAX: 0000006b EBX: 00000015 ECX: 001fefff EDX: 0000006b
  ESI: fffb9000 EDI: fffba000 EBP: c611fbf0 ESP: c611fbe8
  DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068 EFLAGS: 00010287
  CR0: 80050033 CR2: fffba000 CR3: 0316e000 CR4: 00040690
  Call Trace:
   poison_element (mm/mempool.c:83 mm/mempool.c:102)
   mempool_init_node (mm/mempool.c:142 mm/mempool.c:226)
   mempool_init_noprof (mm/mempool.c:250 (discriminator 1))
   ? mempool_alloc_pages (mm/mempool.c:640)
   bio_integrity_initfn (block/bio-integrity.c:483 (discriminator 8))
   ? mempool_alloc_pages (mm/mempool.c:640)
   do_one_initcall (init/main.c:1283)

Christoph found out this is due to the poisoning code not dealing
properly with CONFIG_HIGHMEM: only the first page is mapped, but then
the whole potentially high-order page is accessed.
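
For illustration, the faulting pattern was roughly the following (a
simplified sketch with an illustrative function name, not the exact
removed mm/mempool.c code):

  #include <linux/highmem.h>
  #include <linux/poison.h>
  #include <linux/string.h>

  /*
   * On HIGHMEM, kmap_local_page() maps a single PAGE_SIZE window even
   * for a high-order page, but the memset() spans the whole element,
   * so the write runs past the end of the temporary mapping.
   */
  static void broken_poison(struct page *page, int order)
  {
          void *addr = kmap_local_page(page);     /* maps only the first page */

          memset(addr, POISON_FREE, PAGE_SIZE << order);  /* oops for order > 0 */
          kunmap_local(addr);
  }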

This went unnoticed for years, probably because nobody had used a
mempool for order>0 pages before the new block code in -next.

We could give up on HIGHMEM here, but it's straightforward to fix this
with a loop that maps, poisons or checks, and unmaps the individual
pages.

Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202511111411.9ebfa1ba-lkp@intel.com
Analyzed-by: Christoph Hellwig <hch@lst.de>
Fixes: bdfedb76f4f5 ("mm, mempool: poison elements backed by slab allocator")
Tested-by: kernel test robot <oliver.sang@intel.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
Will push to slab/for-next-fixes 
---
 mm/mempool.c | 32 ++++++++++++++++++++++++++------
 1 file changed, 26 insertions(+), 6 deletions(-)

diff --git a/mm/mempool.c b/mm/mempool.c
index 1c38e873e546..d7bbf1189db9 100644
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -68,10 +68,20 @@ static void check_element(mempool_t *pool, void *element)
 	} else if (pool->free == mempool_free_pages) {
 		/* Mempools backed by page allocator */
 		int order = (int)(long)pool->pool_data;
-		void *addr = kmap_local_page((struct page *)element);
 
-		__check_element(pool, addr, 1UL << (PAGE_SHIFT + order));
-		kunmap_local(addr);
+#ifdef CONFIG_HIGHMEM
+		for (int i = 0; i < (1 << order); i++) {
+			struct page *page = (struct page *)element;
+			void *addr = kmap_local_page(page + i);
+
+			__check_element(pool, addr, PAGE_SIZE);
+			kunmap_local(addr);
+		}
+#else
+		void *addr = page_address((struct page *)element);
+
+		__check_element(pool, addr, PAGE_SIZE << order);
+#endif
 	}
 }
 
@@ -97,10 +107,20 @@ static void poison_element(mempool_t *pool, void *element)
 	} else if (pool->alloc == mempool_alloc_pages) {
 		/* Mempools backed by page allocator */
 		int order = (int)(long)pool->pool_data;
-		void *addr = kmap_local_page((struct page *)element);
 
-		__poison_element(addr, 1UL << (PAGE_SHIFT + order));
-		kunmap_local(addr);
+#ifdef CONFIG_HIGHMEM
+		for (int i = 0; i < (1 << order); i++) {
+			struct page *page = (struct page *)element;
+			void *addr = kmap_local_page(page + i);
+
+			__poison_element(addr, PAGE_SIZE);
+			kunmap_local(addr);
+		}
+#else
+		void *addr = page_address((struct page *)element);
+
+		__poison_element(addr, PAGE_SIZE << order);
+#endif
 	}
 }
 #else /* CONFIG_SLUB_DEBUG_ON */

---
base-commit: e9a6fb0bcdd7609be6969112f3fbfcce3b1d4a7c
change-id: 20251113-mempool-poison-c081a82b9d54

Best regards,
-- 
Vlastimil Babka <vbabka@suse.cz>




* Re: [PATCH] mm/mempool: fix poisoning order>0 pages with HIGHMEM
From: Christoph Hellwig @ 2025-11-14  5:04 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin,
	Harry Yoo, Jens Axboe, linux-block, linux-mm, linux-kernel,
	kernel test robot, Christoph Hellwig

On Thu, Nov 13, 2025 at 07:54:35PM +0100, Vlastimil Babka wrote:
> Christoph found out this is due to the poisoning code not dealing
> properly with CONFIG_HIGHMEM: only the first page is mapped, but then
> the whole potentially high-order page is accessed.
> 
> This went unnoticed for years, probably because nobody had used a
> mempool for order>0 pages before the new block code in -next.

I did a quick audit: bcache, dm-integrity (config dependent) and the
KASAN unit tests create page-based mempools with order > 0.  It looks
like none of those ever got much testing on highmem systems.
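
For reference, such a pool is created along these lines (a minimal
sketch; the names, min_nr and order values are made up for illustration,
not taken from bcache or dm-integrity):

  #include <linux/mempool.h>
  #include <linux/gfp.h>
  #include <linux/errno.h>

  static mempool_t *example_pool;

  static int example_init(void)
  {
          /* guarantee at least 4 order-2 elements (16K each with 4K pages) */
          example_pool = mempool_create_page_pool(4, 2);
          if (!example_pool)
                  return -ENOMEM;
          return 0;
  }

  static void example_use(void)
  {
          struct page *page = mempool_alloc(example_pool, GFP_NOIO);

          /* ... use the 1 << 2 pages backing the element ... */
          mempool_free(page, example_pool);
  }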

The fix looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>



* Re: [PATCH] mm/mempool: fix poisoning order>0 pages with HIGHMEM
From: Vlastimil Babka @ 2025-11-14 16:51 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin,
	Harry Yoo, Jens Axboe, linux-block, linux-mm, linux-kernel,
	kernel test robot

On 11/14/25 06:04, Christoph Hellwig wrote:
> On Thu, Nov 13, 2025 at 07:54:35PM +0100, Vlastimil Babka wrote:
>> Christoph found out this is due to the poisoning code not dealing
>> properly with CONFIG_HIGHMEM: only the first page is mapped, but then
>> the whole potentially high-order page is accessed.
>> 
>> This went unnoticed for years, probably because nobody had used a
>> mempool for order>0 pages before the new block code in -next.
> 
> I did a quick audit: bcache, dm-integrity (config dependent) and the
> KASAN unit tests create page-based mempools with order > 0.  It looks
> like none of those ever got much testing on highmem systems.

Thanks,

The KASAN unit tests sound like something that would have been flagged
by automated testing long ago. However, the mempool-specific poisoning
isn't compatible with KASAN, so poison_element() and check_element()
both return immediately when kasan_enabled() is true.
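
I.e. both helpers bail out early, roughly like this (a sketch of the
guard only, not the exact mm/mempool.c code; kasan_enabled() comes from
<linux/kasan-enabled.h>):

  static void poison_element(mempool_t *pool, void *element)
  {
          /* mempool's pattern poisoning would conflict with KASAN's own checks */
          if (kasan_enabled())
                  return;

          /* ... poison the slab object or the backing pages ... */
  }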

The bcache and dm-integrity setups are perhaps too complex for the test
robots, but I'll add cc: stable then.

> The fix looks good:
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>



