From: Vlastimil Babka <vbabka@suse.cz>
To: Oliver Sang <oliver.sang@intel.com>
Cc: Christoph Hellwig <hch@lst.de>,
Andrew Morton <akpm@linux-foundation.org>,
Christoph Lameter <cl@gentwo.org>,
David Rientjes <rientjes@google.com>,
Roman Gushchin <roman.gushchin@linux.dev>,
Harry Yoo <harry.yoo@oracle.com>,
linux-mm@kvack.org, oe-lkp@lists.linux.dev, lkp@intel.com,
Jens Axboe <axboe@kernel.dk>,
"Martin K. Petersen" <martin.petersen@oracle.com>,
Johannes Thumshirn <johannes.thumshirn@wdc.com>,
Anuj Gupta <anuj20.g@samsung.com>,
Kanchan Joshi <joshi.k@samsung.com>,
linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: poison_element vs highmem, was Re: [linux-next:master] [block] ec7f31b2a2: BUG:unable_to_handle_page_fault_for_address
Date: Thu, 13 Nov 2025 14:48:06 +0100
Message-ID: <e88f9909-cb5e-4205-b4cb-461fdd71120a@suse.cz>
In-Reply-To: <aRWMT6DTNhAdudn+@xsang-OptiPlex-9020>

On 11/13/25 08:44, Oliver Sang wrote:
> hi, Vlastimil Babka,
>
> On Wed, Nov 12, 2025 at 10:33:32AM +0100, Vlastimil Babka wrote:
>> On 11/11/25 08:48, Christoph Hellwig wrote:
>> > Looks like this is due to the code in poison_element, which tries
>> > to memset more than PAGE_SIZE for a single page. This probably
>> > implies we are the first users of the mempool page helpers for order > 0,
>> > or at least the first one tested by anyone on 32-bit with highmem :)
>> >
>> > That code seems to come from
>> >
>> > commit bdfedb76f4f5aa5e37380e3b71adee4a39f30fc6
>> > Author: David Rientjes <rientjes@google.com>
>> > Date: Wed Apr 15 16:14:17 2015 -0700
>> >
>> > mm, mempool: poison elements backed by slab allocator
>> >
>> > originally. The easiest fix would be to just skip poisoning for this
>> > case, although that would reduce the usefulness of the poisoning.
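
Right - to spell out the mechanism: kmap_local_page() establishes a
temporary mapping of exactly one page, while the poisoning code then
writes 1UL << (PAGE_SHIFT + order) bytes through it. Simplified from
the lines the patch below removes:

	int order = (int)(long)pool->pool_data;
	/* maps a single PAGE_SIZE window, even for order > 0 */
	void *addr = kmap_local_page((struct page *)element);

	/* spans 1 << order pages - overruns the mapping when order > 0 */
	__poison_element(addr, 1UL << (PAGE_SHIFT + order));
	kunmap_local(addr);

On !HIGHMEM configurations this happens to work because
kmap_local_page() is then just page_address() and the 1 << order pages
of the element are contiguous in the linear map, which is presumably
why this was never hit before.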
>>
>> #syz test
>
> we applied the patch below on top of ec7f31b2a2 directly, and confirmed
> that the issue we reported is now gone with the patch.
>
> Tested-by: kernel test robot <oliver.sang@intel.com>

Thanks!
> BTW, we are the kernel test robot, not syzbot :) thanks

Yeah, I realized that only after sending...

I'll make this a full patch then. How urgent is it, Christoph? I suppose
this is related to the bulk mempool changes, and we discussed that the
users will target the 6.20 (7.0?) merge window? So landing this fix in
6.19 is enough?
>> ----8<----
>> From 4d97b55c208c611cb01062e0fbf9dbda9f5617d5 Mon Sep 17 00:00:00 2001
>> From: Vlastimil Babka <vbabka@suse.cz>
>> Date: Wed, 12 Nov 2025 10:29:52 +0100
>> Subject: [PATCH] mm/mempool: fix poisoning order>0 pages with HIGHMEM
>>
>> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
>> ---
>> mm/mempool.c | 28 ++++++++++++++++++++++------
>> 1 file changed, 22 insertions(+), 6 deletions(-)
>>
>> diff --git a/mm/mempool.c b/mm/mempool.c
>> index 1c38e873e546..75fea9441b93 100644
>> --- a/mm/mempool.c
>> +++ b/mm/mempool.c
>> @@ -68,10 +68,18 @@ static void check_element(mempool_t *pool, void *element)
>>  	} else if (pool->free == mempool_free_pages) {
>>  		/* Mempools backed by page allocator */
>>  		int order = (int)(long)pool->pool_data;
>> -		void *addr = kmap_local_page((struct page *)element);
>> +#ifdef CONFIG_HIGHMEM
>> +		for (int i = 0; i < (1 << order); i++) {
>> +			struct page *page = (struct page *)element;
>> +			void *addr = kmap_local_page(page + i);
>>
>> -		__check_element(pool, addr, 1UL << (PAGE_SHIFT + order));
>> -		kunmap_local(addr);
>> +			__check_element(pool, addr, PAGE_SIZE);
>> +			kunmap_local(addr);
>> +		}
>> +#else
>> +		void *addr = page_address((struct page *)element);
>> +		__check_element(pool, addr, PAGE_SIZE << order);
>> +#endif
>>  	}
>>  }
>>
>> @@ -97,10 +105,18 @@ static void poison_element(mempool_t *pool, void *element)
>>  	} else if (pool->alloc == mempool_alloc_pages) {
>>  		/* Mempools backed by page allocator */
>>  		int order = (int)(long)pool->pool_data;
>> -		void *addr = kmap_local_page((struct page *)element);
>> +#ifdef CONFIG_HIGHMEM
>> +		for (int i = 0; i < (1 << order); i++) {
>> +			struct page *page = (struct page *)element;
>> +			void *addr = kmap_local_page(page + i);
>>
>> -		__poison_element(addr, 1UL << (PAGE_SHIFT + order));
>> -		kunmap_local(addr);
>> +			__poison_element(addr, PAGE_SIZE);
>> +			kunmap_local(addr);
>> +		}
>> +#else
>> +		void *addr = page_address((struct page *)element);
>> +		__poison_element(addr, PAGE_SIZE << order);
>> +#endif
>>  	}
>>  }
>>  #else /* CONFIG_SLUB_DEBUG_ON */
>> --
>> 2.51.1
>>
>>
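
For completeness, this path is only reached by mempools backed by the
page allocator (mempool_alloc_pages/mempool_free_pages) with poisoning
enabled, e.g. via CONFIG_SLUB_DEBUG_ON. A minimal, hypothetical sketch
of a user that exercises it, using the existing
mempool_create_page_pool() helper (error handling mostly omitted):

	#include <linux/mempool.h>

	static int demo_page_pool(void)
	{
		/*
		 * Pre-fills the pool with eight order-2 (4-page) elements;
		 * with poisoning enabled each element is poisoned as it is
		 * added, so on 32-bit HIGHMEM creating the pool alone used
		 * to fault past the first mapped page.
		 */
		mempool_t *pool = mempool_create_page_pool(8, 2);
		struct page *pages;

		if (!pool)
			return -ENOMEM;

		pages = mempool_alloc(pool, GFP_KERNEL);

		/*
		 * An element returned to a depleted pool is poisoned on the
		 * way back in and checked again when handed out later.
		 */
		mempool_free(pages, pool);
		mempool_destroy(pool);
		return 0;
	}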