From: Alexander Gordeev
To: Andrew Morton, Andrey Ryabinin, Daniel Axtens, Harry Yoo
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, kasan-dev@googlegroups.com, linux-s390@vger.kernel.org, stable@vger.kernel.org
Subject: [PATCH v7 1/1] kasan: Avoid sleepable page allocation from atomic context
Date: Mon, 12 May 2025 16:27:06 +0200
X-Mailer: git-send-email 2.45.2
apply_to_pte_range() enters the lazy MMU mode and then invokes the
kasan_populate_vmalloc_pte() callback on each page table walk iteration.
However, the callback can sleep when trying to allocate a single page,
e.g. if an architecture disables preemption on lazy MMU mode enter.
On s390, if one makes arch_enter_lazy_mmu_mode() -> preempt_disable() and
arch_leave_lazy_mmu_mode() -> preempt_enable(), such a crash occurs:

[ 0.663336] BUG: sleeping function called from invalid context at ./include/linux/sched/mm.h:321
[ 0.663348] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 2, name: kthreadd
[ 0.663358] preempt_count: 1, expected: 0
[ 0.663366] RCU nest depth: 0, expected: 0
[ 0.663375] no locks held by kthreadd/2.
[ 0.663383] Preemption disabled at:
[ 0.663386] [<0002f3284cbb4eda>] apply_to_pte_range+0xfa/0x4a0
[ 0.663405] CPU: 0 UID: 0 PID: 2 Comm: kthreadd Not tainted 6.15.0-rc5-gcc-kasan-00043-gd76bb1ebb558-dirty #162 PREEMPT
[ 0.663408] Hardware name: IBM 3931 A01 701 (KVM/Linux)
[ 0.663409] Call Trace:
[ 0.663410] [<0002f3284c385f58>] dump_stack_lvl+0xe8/0x140
[ 0.663413] [<0002f3284c507b9e>] __might_resched+0x66e/0x700
[ 0.663415] [<0002f3284cc4f6c0>] __alloc_frozen_pages_noprof+0x370/0x4b0
[ 0.663419] [<0002f3284ccc73c0>] alloc_pages_mpol+0x1a0/0x4a0
[ 0.663421] [<0002f3284ccc8518>] alloc_frozen_pages_noprof+0x88/0xc0
[ 0.663424] [<0002f3284ccc8572>] alloc_pages_noprof+0x22/0x120
[ 0.663427] [<0002f3284cc341ac>] get_free_pages_noprof+0x2c/0xc0
[ 0.663429] [<0002f3284cceba70>] kasan_populate_vmalloc_pte+0x50/0x120
[ 0.663433] [<0002f3284cbb4ef8>] apply_to_pte_range+0x118/0x4a0
[ 0.663435] [<0002f3284cbc7c14>] apply_to_pmd_range+0x194/0x3e0
[ 0.663437] [<0002f3284cbc99be>] __apply_to_page_range+0x2fe/0x7a0
[ 0.663440] [<0002f3284cbc9e88>] apply_to_page_range+0x28/0x40
[ 0.663442] [<0002f3284ccebf12>] kasan_populate_vmalloc+0x82/0xa0
[ 0.663445] [<0002f3284cc1578c>] alloc_vmap_area+0x34c/0xc10
[ 0.663448] [<0002f3284cc1c2a6>] __get_vm_area_node+0x186/0x2a0
[ 0.663451] [<0002f3284cc1e696>] __vmalloc_node_range_noprof+0x116/0x310
[ 0.663454] [<0002f3284cc1d950>] __vmalloc_node_noprof+0xd0/0x110
[ 0.663457] [<0002f3284c454b88>] alloc_thread_stack_node+0xf8/0x330
[ 0.663460] [<0002f3284c458d56>] dup_task_struct+0x66/0x4d0
[ 0.663463] [<0002f3284c45be90>] copy_process+0x280/0x4b90
[ 0.663465] [<0002f3284c460940>] kernel_clone+0xd0/0x4b0
[ 0.663467] [<0002f3284c46115e>] kernel_thread+0xbe/0xe0
[ 0.663469] [<0002f3284c4e440e>] kthreadd+0x50e/0x7f0
[ 0.663472] [<0002f3284c38c04a>] __ret_from_fork+0x8a/0xf0
[ 0.663475] [<0002f3284ed57ff2>] ret_from_fork+0xa/0x38

Instead of allocating single pages per-PTE, bulk-allocate the shadow
memory prior to applying the kasan_populate_vmalloc_pte() callback on
a page range.

Suggested-by: Andrey Ryabinin
Cc: stable@vger.kernel.org
Fixes: 3c5c3cfb9ef4 ("kasan: support backing vmalloc space with real shadow memory")
Signed-off-by: Alexander Gordeev
---
 mm/kasan/shadow.c | 76 ++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 62 insertions(+), 14 deletions(-)

diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 88d1c9dcb507..2bf00bf7e545 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -292,33 +292,83 @@ void __init __weak kasan_populate_early_vm_area_shadow(void *start,
 {
 }
 
+struct vmalloc_populate_data {
+	unsigned long start;
+	struct page **pages;
+};
+
 static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
-				      void *unused)
+				      void *_data)
 {
-	unsigned long page;
+	struct vmalloc_populate_data *data = _data;
+	struct page *page;
 	pte_t pte;
+	int index;
 
 	if (likely(!pte_none(ptep_get(ptep))))
 		return 0;
 
-	page = __get_free_page(GFP_KERNEL);
-	if (!page)
-		return -ENOMEM;
-
-	__memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
-	pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);
+	index = PFN_DOWN(addr - data->start);
+	page = data->pages[index];
+	__memset(page_to_virt(page), KASAN_VMALLOC_INVALID, PAGE_SIZE);
+	pte = pfn_pte(page_to_pfn(page), PAGE_KERNEL);
 
 	spin_lock(&init_mm.page_table_lock);
 	if (likely(pte_none(ptep_get(ptep)))) {
 		set_pte_at(&init_mm, addr, ptep, pte);
-		page = 0;
+		data->pages[index] = NULL;
 	}
 	spin_unlock(&init_mm.page_table_lock);
-	if (page)
-		free_page(page);
+
 	return 0;
 }
 
+static inline void
+free_pages_bulk(struct page **pages, int nr_pages)
+{
+	int i;
+
+	for (i = 0; i < nr_pages; i++) {
+		if (pages[i]) {
+			__free_pages(pages[i], 0);
+			pages[i] = NULL;
+		}
+	}
+}
+
+static int __kasan_populate_vmalloc(unsigned long start, unsigned long end)
+{
+	unsigned long nr_pages, nr_populated = 0, nr_total = PFN_UP(end - start);
+	struct vmalloc_populate_data data;
+	int ret = 0;
+
+	data.pages = (struct page **)__get_free_page(GFP_KERNEL | __GFP_ZERO);
+	if (!data.pages)
+		return -ENOMEM;
+
+	while (nr_total) {
+		nr_pages = min(nr_total, PAGE_SIZE / sizeof(data.pages[0]));
+		nr_populated = alloc_pages_bulk(GFP_KERNEL, nr_pages, data.pages);
+		if (nr_populated != nr_pages) {
+			ret = -ENOMEM;
+			break;
+		}
+
+		data.start = start;
+		ret = apply_to_page_range(&init_mm, start, nr_pages * PAGE_SIZE,
+					  kasan_populate_vmalloc_pte, &data);
+		if (ret)
+			break;
+
+		start += nr_pages * PAGE_SIZE;
+		nr_total -= nr_pages;
+	}
+
+	free_pages_bulk(data.pages, nr_populated);
+	free_page((unsigned long)data.pages);
+
+	return ret;
+}
+
 int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
 {
 	unsigned long shadow_start, shadow_end;
@@ -348,9 +398,7 @@ int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
 	shadow_start = PAGE_ALIGN_DOWN(shadow_start);
 	shadow_end = PAGE_ALIGN(shadow_end);
 
-	ret = apply_to_page_range(&init_mm, shadow_start,
-				  shadow_end - shadow_start,
-				  kasan_populate_vmalloc_pte, NULL);
+	ret = __kasan_populate_vmalloc(shadow_start, shadow_end);
 	if (ret)
 		return ret;
-- 
2.45.2