From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
To: linux-mm@kvack.org, Andrew Morton
Cc: Michal Hocko, LKML, Baoquan He, Uladzislau Rezki, Andrey Ryabinin, Alexander Potapenko
Subject: [RFC 4/7] mm/kasan, mm/vmalloc: Respect GFP flags in kasan_populate_vmalloc()
Date: Fri, 4 Jul 2025 17:25:34 +0200
Message-Id: <20250704152537.55724-5-urezki@gmail.com>
X-Mailer: git-send-email 2.39.5
In-Reply-To: <20250704152537.55724-1-urezki@gmail.com>
References: <20250704152537.55724-1-urezki@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
The function kasan_populate_vmalloc() internally allocates a page using a
hardcoded GFP_KERNEL flag.
This is not safe in contexts where non-blocking allocation flags are
required, such as GFP_ATOMIC or GFP_NOWAIT, for example in atomic vmalloc
paths.

Modify kasan_populate_vmalloc() and its helpers to accept a gfp_mask
argument and use it for the page allocations, so that the caller can
specify the correct allocation context. Also, when non-blocking flags are
used, wrap apply_to_page_range() in memalloc_noreclaim_save()/restore()
to suppress reclaim behavior that could otherwise violate atomic
constraints.

Cc: Andrey Ryabinin
Cc: Alexander Potapenko
Signed-off-by: Uladzislau Rezki (Sony)
---
 include/linux/kasan.h |  6 +++---
 mm/kasan/shadow.c     | 22 +++++++++++++++-------
 mm/vmalloc.c          |  4 ++--
 3 files changed, 20 insertions(+), 12 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 890011071f2b..fe5ce9215821 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -562,7 +562,7 @@ static inline void kasan_init_hw_tags(void) { }
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 
 void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
-int kasan_populate_vmalloc(unsigned long addr, unsigned long size);
+int kasan_populate_vmalloc(unsigned long addr, unsigned long size, gfp_t gfp_mask);
 void kasan_release_vmalloc(unsigned long start, unsigned long end,
 			   unsigned long free_region_start,
 			   unsigned long free_region_end,
@@ -574,7 +574,7 @@ static inline void kasan_populate_early_vm_area_shadow(void *start,
 						       unsigned long size)
 { }
 static inline int kasan_populate_vmalloc(unsigned long start,
-					 unsigned long size)
+					 unsigned long size, gfp_t gfp_mask)
 {
 	return 0;
 }
@@ -610,7 +610,7 @@ static __always_inline void kasan_poison_vmalloc(const void *start,
 static inline void kasan_populate_early_vm_area_shadow(void *start,
 						       unsigned long size) { }
 static inline int kasan_populate_vmalloc(unsigned long start,
-					 unsigned long size)
+					 unsigned long size, gfp_t gfp_mask)
 {
 	return 0;
 }
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index d2c70cd2afb1..5edfc1f6b53e 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -335,13 +335,13 @@ static void ___free_pages_bulk(struct page **pages, int nr_pages)
 	}
 }
 
-static int ___alloc_pages_bulk(struct page **pages, int nr_pages)
+static int ___alloc_pages_bulk(struct page **pages, int nr_pages, gfp_t gfp_mask)
 {
 	unsigned long nr_populated, nr_total = nr_pages;
 	struct page **page_array = pages;
 
 	while (nr_pages) {
-		nr_populated = alloc_pages_bulk(GFP_KERNEL, nr_pages, pages);
+		nr_populated = alloc_pages_bulk(gfp_mask, nr_pages, pages);
 		if (!nr_populated) {
 			___free_pages_bulk(page_array, nr_total - nr_pages);
 			return -ENOMEM;
@@ -353,25 +353,33 @@ static int ___alloc_pages_bulk(struct page **pages, int nr_pages)
 	return 0;
 }
 
-static int __kasan_populate_vmalloc(unsigned long start, unsigned long end)
+static int __kasan_populate_vmalloc(unsigned long start, unsigned long end, gfp_t gfp_mask)
 {
 	unsigned long nr_pages, nr_total = PFN_UP(end - start);
+	bool noblock = !gfpflags_allow_blocking(gfp_mask);
 	struct vmalloc_populate_data data;
+	unsigned int flags;
 	int ret = 0;
 
-	data.pages = (struct page **)__get_free_page(GFP_KERNEL | __GFP_ZERO);
+	data.pages = (struct page **)__get_free_page(gfp_mask | __GFP_ZERO);
 	if (!data.pages)
 		return -ENOMEM;
 
 	while (nr_total) {
 		nr_pages = min(nr_total, PAGE_SIZE / sizeof(data.pages[0]));
-		ret = ___alloc_pages_bulk(data.pages, nr_pages);
+		ret = ___alloc_pages_bulk(data.pages, nr_pages, gfp_mask);
 		if (ret)
 			break;
 
 		data.start = start;
+		if (noblock)
+			flags = memalloc_noreclaim_save();
+
 		ret = apply_to_page_range(&init_mm, start, nr_pages * PAGE_SIZE,
 					  kasan_populate_vmalloc_pte, &data);
+		if (noblock)
+			memalloc_noreclaim_restore(flags);
+
 		___free_pages_bulk(data.pages, nr_pages);
 		if (ret)
 			break;
@@ -385,7 +393,7 @@ static int __kasan_populate_vmalloc(unsigned long start, unsigned long end)
 	return ret;
 }
 
-int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
+int kasan_populate_vmalloc(unsigned long addr, unsigned long size, gfp_t gfp_mask)
 {
 	unsigned long shadow_start, shadow_end;
 	int ret;
@@ -414,7 +422,7 @@ int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
 	shadow_start = PAGE_ALIGN_DOWN(shadow_start);
 	shadow_end = PAGE_ALIGN(shadow_end);
 
-	ret = __kasan_populate_vmalloc(shadow_start, shadow_end);
+	ret = __kasan_populate_vmalloc(shadow_start, shadow_end, gfp_mask);
 	if (ret)
 		return ret;
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 25d09f753239..5bac15b09b03 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2091,7 +2091,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	BUG_ON(va->va_start < vstart);
 	BUG_ON(va->va_end > vend);
 
-	ret = kasan_populate_vmalloc(addr, size);
+	ret = kasan_populate_vmalloc(addr, size, gfp_mask);
 	if (ret) {
 		free_vmap_area(va);
 		return ERR_PTR(ret);
@@ -4832,7 +4832,7 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 
 	/* populate the kasan shadow space */
 	for (area = 0; area < nr_vms; area++) {
-		if (kasan_populate_vmalloc(vas[area]->va_start, sizes[area]))
+		if (kasan_populate_vmalloc(vas[area]->va_start, sizes[area], GFP_KERNEL))
 			goto err_free_shadow;
 	}
-- 
2.39.5