From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrey Ryabinin <ryabinin.a.a@gmail.com>,
Alexander Potapenko <glider@google.com>,
Andrey Konovalov <andreyknvl@gmail.com>,
Dmitry Vyukov <dvyukov@google.com>,
Vincenzo Frascino <vincenzo.frascino@arm.com>,
Andrew Morton <akpm@linux-foundation.org>,
Uladzislau Rezki <urezki@gmail.com>,
Christoph Hellwig <hch@infradead.org>,
Lorenzo Stoakes <lstoakes@gmail.com>,
<kasan-dev@googlegroups.com>, <linux-mm@kvack.org>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH -rfc 1/3] mm: kasan: shadow: add cond_resched() in kasan_populate_vmalloc_pte()
Date: Wed, 6 Sep 2023 20:42:32 +0800
Message-ID: <20230906124234.134200-2-wangkefeng.wang@huawei.com>
In-Reply-To: <20230906124234.134200-1-wangkefeng.wang@huawei.com>
kasan_populate_vmalloc() can take a long time when it populates the
shadow for a large vmalloc area, which triggers a soft lockup:
watchdog: BUG: soft lockup - CPU#3 stuck for 26s! [insmod:458]
_raw_spin_unlock_irqrestore+0x50/0xb8
rmqueue_bulk+0x434/0x6b8
get_page_from_freelist+0xdd4/0x1680
__alloc_pages+0x244/0x508
alloc_pages+0xf0/0x218
__get_free_pages+0x1c/0x50
kasan_populate_vmalloc_pte+0x30/0x188
__apply_to_page_range+0x3ec/0x650
apply_to_page_range+0x1c/0x30
kasan_populate_vmalloc+0x60/0x70
alloc_vmap_area.part.67+0x328/0xe50
alloc_vmap_area+0x4c/0x78
__get_vm_area_node.constprop.76+0x130/0x240
__vmalloc_node_range+0x12c/0x340
__vmalloc_node+0x8c/0xb0
vmalloc+0x2c/0x40
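The populate path iterates once per shadow PTE with no scheduling
point in between. As a rough sketch of the call shape seen in the
backtrace (simplified, not the verbatim mm/kasan/shadow.c code; the
function name below and the shadow address math are illustrative
only, and error handling is elided):

/*
 * Simplified sketch: apply_to_page_range() invokes the callback once
 * for every PTE in the shadow range, so a large vmalloc area means a
 * very long loop with no opportunity to reschedule.
 */
static int populate_shadow_sketch(unsigned long shadow_start,
				  unsigned long shadow_end)
{
	return apply_to_page_range(&init_mm, shadow_start,
				   shadow_end - shadow_start,
				   kasan_populate_vmalloc_pte, NULL);
}

Since apply_to_page_range() does not hold the PTE lock around the
callback for init_mm mappings, the callback may sleep, so a plain
cond_resched() there suffices (the depopulate path handled by patch 3
of this series uses cond_resched_lock() instead).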
Fix it by adding a cond_resched() in kasan_populate_vmalloc_pte().
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
mm/kasan/shadow.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index dd772f9d0f08..fd15e38ff80e 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -317,6 +317,8 @@ static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
 	unsigned long page;
 	pte_t pte;
 
+	cond_resched();
+
 	if (likely(!pte_none(ptep_get(ptep))))
 		return 0;
 
--
2.41.0
Thread overview: 14+ messages
2023-09-06 12:42 [PATCH -rfc 0/3] mm: kasan: fix softlock when populate or depopulate pte Kefeng Wang
2023-09-06 12:42 ` Kefeng Wang [this message]
2023-09-06 12:42 ` [PATCH -rfc 2/3] mm: kasan: shadow: move free_page() out of page table lock Kefeng Wang
2023-09-06 12:42 ` [PATCH -rfc 3/3] mm: kasan: shadow: HACK: add cond_resched_lock() in kasan_depopulate_vmalloc_pte() Kefeng Wang
2023-09-13 8:48 ` kernel test robot
2023-09-13 11:21 ` Kefeng Wang
2023-09-15 0:58 ` [PATCH -rfc 0/3] mm: kasan: fix softlock when populate or depopulate pte Kefeng Wang
2023-10-18 14:16 ` Kefeng Wang
2023-10-18 16:37 ` Marco Elver
2023-10-19 1:40 ` Kefeng Wang
2023-10-19 6:17 ` Uladzislau Rezki
2023-10-19 7:26 ` Kefeng Wang
2023-10-19 8:53 ` Uladzislau Rezki
2023-10-19 9:47 ` Kefeng Wang