From: Andrey Ryabinin <aryabinin@virtuozzo.com>
To: Andrew Morton
Cc: Alexander Potapenko, Dmitry Vyukov, kasan-dev@googlegroups.com,
    Daniel Axtens, Qian Cai, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Andrey Ryabinin,
    syzbot+82e323920b78d54aaed5@syzkaller.appspotmail.com
Subject: [PATCH 1/2] kasan: fix crashes on access to memory mapped by vm_map_ram()
Date: Wed, 4 Dec 2019 23:45:33 +0300
Message-Id: <20191204204534.32202-1-aryabinin@virtuozzo.com>
X-Mailer: git-send-email 2.23.0

With CONFIG_KASAN_VMALLOC=y any use of memory obtained via vm_map_ram()
will crash because there is no shadow backing that memory.

Instead of sprinkling additional kasan_populate_vmalloc() calls all over
the vmalloc code, move the shadow population into alloc_vmap_area().
This fixes vm_map_ram() and simplifies the code a bit.
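For illustration, a minimal sketch of the failure mode (a hypothetical
demo, not the syzbot reproducer; the function name is made up): any
kernel code that touches a vm_map_ram() mapping under
CONFIG_KASAN_VMALLOC=y dereferences unpopulated shadow and crashes.

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/string.h>
#include <linux/vmalloc.h>

/* Hypothetical demo, not from the original report. */
static int vm_map_ram_shadow_demo(void)
{
	struct page *page = alloc_page(GFP_KERNEL);
	void *mem;

	if (!page)
		return -ENOMEM;

	/*
	 * vm_map_ram() sets up its mapping without going through
	 * __get_vm_area_node(), so before this patch nothing
	 * populates KASAN shadow for the returned address.
	 */
	mem = vm_map_ram(&page, 1, -1 /* any node */, PAGE_KERNEL);
	if (!mem) {
		__free_page(page);
		return -ENOMEM;
	}

	memset(mem, 0, PAGE_SIZE);	/* shadow access here crashes */

	vm_unmap_ram(mem, 1);
	__free_page(page);
	return 0;
}

With this patch alloc_vmap_area() populates the shadow for every vmap
area it hands out, so the vm_map_ram() path above is covered as well.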
Fixes: 3c5c3cfb9ef4 ("kasan: support backing vmalloc space with real shadow memory")
Reported-by: Dmitry Vyukov
Reported-by: syzbot+82e323920b78d54aaed5@syzkaller.appspotmail.com
Signed-off-by: Andrey Ryabinin
---
 include/linux/kasan.h | 15 +++++++++------
 mm/kasan/common.c     | 27 +++++++++++++++++---------
 mm/vmalloc.c          | 45 ++++++++++++++++++-------------------
 3 files changed, 46 insertions(+), 41 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 4f404c565db1..e18fe54969e9 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -205,20 +205,23 @@ static inline void *kasan_reset_tag(const void *addr)
 #endif /* CONFIG_KASAN_SW_TAGS */
 
 #ifdef CONFIG_KASAN_VMALLOC
-int kasan_populate_vmalloc(unsigned long requested_size,
-			   struct vm_struct *area);
-void kasan_poison_vmalloc(void *start, unsigned long size);
+int kasan_populate_vmalloc(unsigned long addr, unsigned long size);
+void kasan_poison_vmalloc(const void *start, unsigned long size);
+void kasan_unpoison_vmalloc(const void *start, unsigned long size);
 void kasan_release_vmalloc(unsigned long start, unsigned long end,
 			   unsigned long free_region_start,
 			   unsigned long free_region_end);
 #else
-static inline int kasan_populate_vmalloc(unsigned long requested_size,
-					 struct vm_struct *area)
+static inline int kasan_populate_vmalloc(unsigned long start,
+					 unsigned long size)
 {
 	return 0;
 }
 
-static inline void kasan_poison_vmalloc(void *start, unsigned long size) {}
+static inline void kasan_poison_vmalloc(const void *start, unsigned long size)
+{ }
+static inline void kasan_unpoison_vmalloc(const void *start, unsigned long size)
+{ }
 static inline void kasan_release_vmalloc(unsigned long start,
 					 unsigned long end,
 					 unsigned long free_region_start,
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index df3371d5c572..a1e6273be8c3 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -777,15 +777,17 @@ static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
 	return 0;
 }
 
-int kasan_populate_vmalloc(unsigned long requested_size, struct vm_struct *area)
+int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
 {
 	unsigned long shadow_start, shadow_end;
 	int ret;
 
-	shadow_start = (unsigned long)kasan_mem_to_shadow(area->addr);
+	if (!is_vmalloc_or_module_addr((void *)addr))
+		return 0;
+
+	shadow_start = (unsigned long)kasan_mem_to_shadow((void *)addr);
 	shadow_start = ALIGN_DOWN(shadow_start, PAGE_SIZE);
-	shadow_end = (unsigned long)kasan_mem_to_shadow(area->addr +
-							area->size);
+	shadow_end = (unsigned long)kasan_mem_to_shadow((void *)addr + size);
 	shadow_end = ALIGN(shadow_end, PAGE_SIZE);
 
 	ret = apply_to_page_range(&init_mm, shadow_start,
@@ -796,10 +798,6 @@ int kasan_populate_vmalloc(unsigned long requested_size, struct vm_struct *area)
 
 	flush_cache_vmap(shadow_start, shadow_end);
 
-	kasan_unpoison_shadow(area->addr, requested_size);
-
-	area->flags |= VM_KASAN;
-
 	/*
 	 * We need to be careful about inter-cpu effects here. Consider:
 	 *
@@ -842,12 +840,23 @@ int kasan_populate_vmalloc(unsigned long requested_size, struct vm_struct *area)
  * Poison the shadow for a vmalloc region. Called as part of the
  * freeing process at the time the region is freed.
  */
-void kasan_poison_vmalloc(void *start, unsigned long size)
+void kasan_poison_vmalloc(const void *start, unsigned long size)
 {
+	if (!is_vmalloc_or_module_addr(start))
+		return;
+
 	size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
 	kasan_poison_shadow(start, size, KASAN_VMALLOC_INVALID);
 }
 
+void kasan_unpoison_vmalloc(const void *start, unsigned long size)
+{
+	if (!is_vmalloc_or_module_addr(start))
+		return;
+
+	kasan_unpoison_shadow(start, size);
+}
+
 static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
 					void *unused)
 {
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 4d3b3d60d893..a5412f14f57f 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1073,6 +1073,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	struct vmap_area *va, *pva;
 	unsigned long addr;
 	int purged = 0;
+	int ret = -EBUSY;
 
 	BUG_ON(!size);
 	BUG_ON(offset_in_page(size));
@@ -1139,6 +1140,10 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	va->va_end = addr + size;
 	va->vm = NULL;
 
+	ret = kasan_populate_vmalloc(addr, size);
+	if (ret)
+		goto out;
+
 	spin_lock(&vmap_area_lock);
 	insert_vmap_area(va, &vmap_area_root, &vmap_area_list);
 	spin_unlock(&vmap_area_lock);
@@ -1169,8 +1174,9 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 		pr_warn("vmap allocation for size %lu failed: use vmalloc=<size> to increase size\n",
 			size);
 
+out:
 	kmem_cache_free(vmap_area_cachep, va);
-	return ERR_PTR(-EBUSY);
+	return ERR_PTR(ret);
 }
 
 int register_vmap_purge_notifier(struct notifier_block *nb)
@@ -1771,6 +1777,8 @@ void vm_unmap_ram(const void *mem, unsigned int count)
 	BUG_ON(addr > VMALLOC_END);
 	BUG_ON(!PAGE_ALIGNED(addr));
 
+	kasan_poison_vmalloc(mem, size);
+
 	if (likely(count <= VMAP_MAX_ALLOC)) {
 		debug_check_no_locks_freed(mem, size);
 		vb_free(mem, size);
@@ -1821,6 +1829,9 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node, pgprot_t pro
 		addr = va->va_start;
 		mem = (void *)addr;
 	}
+
+	kasan_unpoison_vmalloc(mem, size);
+
 	if (vmap_page_range(addr, addr + size, prot, pages) < 0) {
 		vm_unmap_ram(mem, count);
 		return NULL;
@@ -2075,6 +2086,7 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
 {
 	struct vmap_area *va;
 	struct vm_struct *area;
+	unsigned long requested_size = size;
 
 	BUG_ON(in_interrupt());
 	size = PAGE_ALIGN(size);
@@ -2098,23 +2110,9 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
 		return NULL;
 	}
 
-	setup_vmalloc_vm(area, va, flags, caller);
+	kasan_unpoison_vmalloc((void *)va->va_start, requested_size);
 
-	/*
-	 * For KASAN, if we are in vmalloc space, we need to cover the shadow
-	 * area with real memory. If we come here through VM_ALLOC, this is
-	 * done by a higher level function that has access to the true size,
-	 * which might not be a full page.
-	 *
-	 * We assume module space comes via VM_ALLOC path.
-	 */
-	if (is_vmalloc_addr(area->addr) && !(area->flags & VM_ALLOC)) {
-		if (kasan_populate_vmalloc(area->size, area)) {
-			unmap_vmap_area(va);
-			kfree(area);
-			return NULL;
-		}
-	}
+	setup_vmalloc_vm(area, va, flags, caller);
 
 	return area;
 }
@@ -2293,8 +2291,7 @@ static void __vunmap(const void *addr, int deallocate_pages)
 	debug_check_no_locks_freed(area->addr, get_vm_area_size(area));
 	debug_check_no_obj_freed(area->addr, get_vm_area_size(area));
 
-	if (area->flags & VM_KASAN)
-		kasan_poison_vmalloc(area->addr, area->size);
+	kasan_poison_vmalloc(area->addr, area->size);
 
 	vm_remove_mappings(area, deallocate_pages);
 
@@ -2539,7 +2536,7 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	if (!size || (size >> PAGE_SHIFT) > totalram_pages())
 		goto fail;
 
-	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
+	area = __get_vm_area_node(real_size, align, VM_ALLOC | VM_UNINITIALIZED |
 				vm_flags, start, end, node, gfp_mask, caller);
 	if (!area)
 		goto fail;
@@ -2548,11 +2545,6 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	if (!addr)
 		return NULL;
 
-	if (is_vmalloc_or_module_addr(area->addr)) {
-		if (kasan_populate_vmalloc(real_size, area))
-			return NULL;
-	}
-
 	/*
 	 * In this function, newly allocated vm_struct has VM_UNINITIALIZED
 	 * flag. It means that vm_struct is not fully initialized.
@@ -3437,7 +3429,8 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 	/* populate the shadow space outside of the lock */
 	for (area = 0; area < nr_vms; area++) {
 		/* assume success here */
-		kasan_populate_vmalloc(sizes[area], vms[area]);
+		kasan_populate_vmalloc(vas[area]->va_start, sizes[area]);
+		kasan_unpoison_vmalloc((void *)vms[area]->addr, sizes[area]);
 	}
 
 	kfree(vas);
-- 
2.23.0