From: Uladzislau Rezki
Date: Fri, 24 Jun 2022 12:27:41 +0200
To: Zhaoyang Huang
Cc: Uladzislau Rezki, "zhaoyang.huang", Andrew Morton, "open list:MEMORY MANAGEMENT", LKML, Ke Wang, Christoph Hellwig
Subject: Re: [PATCH] mm: fix racing of vb->va when kasan enabled

> On Wed, Jun 22, 2022 at 11:15 AM Zhaoyang Huang wrote:
> >
> > On Tue, Jun 21, 2022 at 10:29 PM Uladzislau Rezki wrote:
> > >
> > > > On Tue, Jun 21, 2022 at 5:27 PM Uladzislau Rezki wrote:
> > > > >
> > > > > > On Mon, Jun 20, 2022 at 6:44 PM Uladzislau Rezki wrote:
> > > > > > > > >
> > > > > > > > > Is it easy to reproduce? If so, could you please describe the steps? As i see,
> > > > > > > > > the freeing of the "vb" is RCU safe whereas vb->va is not. But from the first
> > > > > > > > > glance i do not see how it can be accessed twice. Hm..
> > > > > > > > It was raised from a monkey test on an A13_k515 system and got 1/20 pcs
> > > > > > > > failed. IMO, vb->va, which is outside of vmap_purge_lock protection, could race
> > > > > > > > with a concurrent va freeing within __purge_vmap_area_lazy.
> > > > > > > >
> > > > > > > Do you have the exact steps for how you run the "monkey" test?
> > > > > > There are about 30+ kos inserted during startup, which could be a
> > > > > > specific criterion for reproduction. Do you have doubts about the test
> > > > > > result or the solution?
> > > > > >
> > > > > I do not have any doubt about your test results, so if you can trigger it
> > > > > then there is an issue at least on the 5.4.161-android12 kernel.
> > > > >
> > > > > 1. With your fix we get an expanded mutex range, thus the worst case of a vmalloc
> > > > > allocation can get longer when it fails and retries, because it also invokes
> > > > > purge_vmap_area_lazy(), which takes the same mutex.
> > > > I am not sure I get your point. _vm_unmap_aliases calls
> > > > _purge_vmap_area_lazy instead of purge_vmap_area_lazy. Do you have any
> > > > other solutions?
> > > > I really don't think my patch is the best way, as I
> > > > don't have a full view of the vmalloc mechanism.
> > > >
> > > Yep, but it holds the mutex:
> I still don't get how _purge_vmap_area_lazy holds vmap_purge_lock?
The caller has to take the mutex if it invokes the __purge_vmap_area_lazy() function:
> > >
> > >     mutex_lock(&vmap_purge_lock);
> > >     purge_fragmented_blocks_allcpus();
> > >     if (!__purge_vmap_area_lazy(start, end) && flush)
> > >             flush_tlb_kernel_range(start, end);
> > >     mutex_unlock(&vmap_purge_lock);
> > >
> > > I do not have a solution yet. I am still trying to figure out how you can
> > > trigger it.
> > >
> > >     rcu_read_lock();
> > >     list_for_each_entry_rcu(vb, &vbq->free, free_list) {
> > >             spin_lock(&vb->lock);
> > >             if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
> > >                     unsigned long va_start = vb->va->va_start;
> > >
> > > so you say that "vb->va->va_start" can be accessed twice. I do not see
> > > how it can happen. The purge_fragmented_blocks() removes the "vb" from the
> > > free_list and sets vb->dirty to VMAP_BBMAP_BITS to prevent purging it
> > > again. It is protected by the spin_lock(&vb->lock):
> > >
> > >     spin_lock(&vb->lock);
> > >     if (vb->free + vb->dirty == VMAP_BBMAP_BITS && vb->dirty != VMAP_BBMAP_BITS) {
> > >             vb->free = 0; /* prevent further allocs after releasing lock */
> > >             vb->dirty = VMAP_BBMAP_BITS; /* prevent purging it again */
> > >             vb->dirty_min = 0;
> > >             vb->dirty_max = VMAP_BBMAP_BITS;
> > >
> > > so VMAP_BBMAP_BITS is set under the spinlock. _vm_unmap_aliases() then checks it:
> > >
> > >     list_for_each_entry_rcu(vb, &vbq->free, free_list) {
> > >             spin_lock(&vb->lock);
> > >             if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
> > >                     unsigned long va_start = vb->va->va_start;
> > >                     unsigned long s, e;
> > >
> > > and only touches the block if "vb->dirty != VMAP_BBMAP_BITS". Am I missing your point here?
> >
> > Could the race be like the scenario below? The vb->va accessed at [2]
> > has been freed at [1].
>
> reformat the racing graph:
>
>   _vm_unmap_aliases                                        _vm_unmap_aliases
>   {                                                        {
>     list_for_each_entry_rcu(vb, &vbq->free, free_list) {
>                                                              __purge_vmap_area_lazy
>       spin_lock(&vb->lock);
>                                                                merge_or_add_vmap_area
>       if (vb->dirty) {
>                                                                  kmem_cache_free(vmap_area_cachep, va)   [1]
>         unsigned long va_start = vb->va->va_start;   [2]
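To restate the scenario above in code form: the RCU walk and vb->lock pin the vmap_block
itself, but nothing pins the vmap_area that vb->va points at while a lazy purge runs. Below
is a condensed sketch; the reader_side()/purge_side() wrappers are made-up names for
illustration only, and their bodies paraphrase the 5.4 code quoted earlier rather than
copying it:

    /*
     * Reader, as in _vm_unmap_aliases(): RCU protects the free_list walk and
     * vb->lock protects the block's own fields, but neither protects the
     * vmap_area behind vb->va.
     */
    static void reader_side(struct vmap_block_queue *vbq)
    {
            struct vmap_block *vb;

            rcu_read_lock();
            list_for_each_entry_rcu(vb, &vbq->free, free_list) {
                    spin_lock(&vb->lock);
                    if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
                            unsigned long va_start = vb->va->va_start;   /* [2] */

                            (void)va_start;
                    }
                    spin_unlock(&vb->lock);
            }
            rcu_read_unlock();
    }

    /*
     * Purger, as in __purge_vmap_area_lazy() -> merge_or_add_vmap_area():
     * the vmap_area is released without vb->lock ever being taken.
     */
    static void purge_side(struct vmap_area *va)
    {
            kmem_cache_free(vmap_area_cachep, va);                       /* [1] */
    }

In that interleaving, the load at [2] would dereference the object freed at [1].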
> > > > >
> > > > > 2. You run the 5.4.161-android12 kernel, which is quite old. Could you please
> > > > > retest with the latest kernel? I am asking because on the latest kernel with
> > > > > CONFIG_KASAN i am not able to reproduce it.
> > > > >
> > > > > I do a lot of vm_map_ram()/vm_unmap_ram()/vmalloc()/vfree() in parallel
> > > > > by 64 kthreads on my 64-CPU test system.
> > > > The failure shows up about 20s after starting up; I think it is a rare timing.
> > > > >
> > > > > Could you please confirm that you can trigger the issue on the latest kernel?
> > > > Sorry, I don't have an available latest kernel for now.
> > > >
> > > Can you do: "gdb ./vmlinux", execute "l *_vm_unmap_aliases+0x164" and provide
> > > the output?
> Sorry, I have lost the vmlinux with KASAN enabled and just got some
> instructions from the logs:
>
> 0xffffffd010678da8 <_vm_unmap_aliases+0x134>: sub   x22, x26, #0x28
>                                               (x26: vbq->free)
> 0xffffffd010678dac <_vm_unmap_aliases+0x138>: lsr   x8, x22, #3
> 0xffffffd010678db0 <_vm_unmap_aliases+0x13c>: ldrb  w8, [x8,x24]
> 0xffffffd010678db4 <_vm_unmap_aliases+0x140>: cbz   w8, 0xffffffd010678dc0 <_vm_unmap_aliases+0x14c>
> 0xffffffd010678db8 <_vm_unmap_aliases+0x144>: mov   x0, x22
> 0xffffffd010678dbc <_vm_unmap_aliases+0x148>: bl    0xffffffd0106c9a34 <__asan_report_load8_noabort>
> 0xffffffd010678dc0 <_vm_unmap_aliases+0x14c>: ldr   x22, [x22]
> 0xffffffd010678dc4 <_vm_unmap_aliases+0x150>: lsr   x8, x22, #3
> 0xffffffd010678dc8 <_vm_unmap_aliases+0x154>: ldrb  w8, [x8,x24]
> 0xffffffd010678dcc <_vm_unmap_aliases+0x158>: cbz   w8, 0xffffffd010678dd8 <_vm_unmap_aliases+0x164>
> 0xffffffd010678dd0 <_vm_unmap_aliases+0x15c>: mov   x0, x22
> 0xffffffd010678dd4 <_vm_unmap_aliases+0x160>: bl    0xffffffd0106c9a34 <__asan_report_load8_noabort>
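The change below moves the vmap_area to an RCU-deferred lifetime: the area is unlinked with
list_del_rcu() and released with kfree_rcu(), so a walker that picked the pointer up under
rcu_read_lock() keeps a valid object until it leaves its read-side section. As a rough,
generic sketch of that pattern (struct item, items_lock, add_item()/remove_item() are made-up
names for illustration, not code from the patch):

    #include <linux/errno.h>
    #include <linux/list.h>
    #include <linux/rcupdate.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>

    struct item {
            unsigned long           start;
            struct list_head        list;
            struct rcu_head         rcu;    /* storage used by kfree_rcu() */
    };

    static LIST_HEAD(items);
    static DEFINE_SPINLOCK(items_lock);

    /* Writer: publish a new element on the RCU-protected list. */
    static int add_item(unsigned long start)
    {
            struct item *it = kzalloc(sizeof(*it), GFP_KERNEL);

            if (!it)
                    return -ENOMEM;

            it->start = start;
            spin_lock(&items_lock);
            list_add_rcu(&it->list, &items);
            spin_unlock(&items_lock);
            return 0;
    }

    /* Writer: unlink under the lock, free only after a grace period. */
    static void remove_item(struct item *it)
    {
            spin_lock(&items_lock);
            list_del_rcu(&it->list);
            spin_unlock(&items_lock);

            kfree_rcu(it, rcu);     /* freed only after a grace period elapses */
    }

    /* Reader: may run concurrently with remove_item(). */
    static unsigned long first_start(void)
    {
            struct item *it;
            unsigned long ret = 0;

            rcu_read_lock();
            list_for_each_entry_rcu(it, &items, list) {
                    ret = it->start;        /* object stays valid inside the RCU section */
                    break;
            }
            rcu_read_unlock();

            return ret;
    }

Because kfree_rcu() can only free memory that came from kmalloc()/kzalloc(), the patch also
drops the vmap_area kmem_cache and converts those allocations accordingly.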
Could you please test the patch below and check whether it fixes the issue on the 5.4 kernel:

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 4e7809408073..d5b07d7239bd 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -55,6 +55,7 @@ struct vmap_area {
 
         struct rb_node rb_node;         /* address sorted rbtree */
         struct list_head list;          /* address sorted list */
+        struct rcu_head rcu;
 
         /*
          * The following three variables can be packed, because
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index a3c70e275f4e..bb8cfdb06ce6 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -337,14 +337,6 @@ static LLIST_HEAD(vmap_purge_list);
 static struct rb_root vmap_area_root = RB_ROOT;
 static bool vmap_initialized __read_mostly;
 
-/*
- * This kmem_cache is used for vmap_area objects. Instead of
- * allocating from slab we reuse an object from this cache to
- * make things faster. Especially in "no edge" splitting of
- * free block.
- */
-static struct kmem_cache *vmap_area_cachep;
-
 /*
  * This linked list is used in pair with free_vmap_area_root.
  * It gives O(1) access to prev/next to perform fast coalescing.
@@ -532,7 +524,7 @@ link_va(struct vmap_area *va, struct rb_root *root,
         }
 
         /* Address-sort this list */
-        list_add(&va->list, head);
+        list_add_rcu(&va->list, head);
 }
 
 static __always_inline void
@@ -547,7 +539,7 @@ unlink_va(struct vmap_area *va, struct rb_root *root)
         else
                 rb_erase(&va->rb_node, root);
 
-        list_del(&va->list);
+        list_del_rcu(&va->list);
         RB_CLEAR_NODE(&va->rb_node);
 }
 
@@ -721,7 +713,7 @@ merge_or_add_vmap_area(struct vmap_area *va,
                         augment_tree_propagate_from(sibling);
 
                         /* Free vmap_area object. */
-                        kmem_cache_free(vmap_area_cachep, va);
+                        kfree_rcu(va, rcu);
 
                         /* Point to the new merged area. */
                         va = sibling;
@@ -748,7 +740,7 @@ merge_or_add_vmap_area(struct vmap_area *va,
                         unlink_va(va, root);
 
                         /* Free vmap_area object. */
-                        kmem_cache_free(vmap_area_cachep, va);
+                        kfree_rcu(va, rcu);
                         return;
                 }
         }
@@ -928,7 +920,7 @@ adjust_va_to_fit_type(struct vmap_area *va,
                  * |---------------|
                  */
                 unlink_va(va, &free_vmap_area_root);
-                kmem_cache_free(vmap_area_cachep, va);
+                kfree_rcu(va, rcu);
         } else if (type == LE_FIT_TYPE) {
                 /*
                  * Split left edge of fit VA.
@@ -969,7 +961,7 @@ adjust_va_to_fit_type(struct vmap_area *va,
                  * a first allocation (early boot up) when we have "one"
                  * big free space that has to be split.
                  */
-                lva = kmem_cache_alloc(vmap_area_cachep, GFP_NOWAIT);
+                lva = kmalloc(sizeof(struct vmap_area), GFP_NOWAIT);
                 if (!lva)
                         return -1;
         }
@@ -1064,8 +1056,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 
         might_sleep();
 
-        va = kmem_cache_alloc_node(vmap_area_cachep,
-                        gfp_mask & GFP_RECLAIM_MASK, node);
+        va = kmalloc_node(sizeof(struct vmap_area), gfp_mask & GFP_RECLAIM_MASK, node);
         if (unlikely(!va))
                 return ERR_PTR(-ENOMEM);
 
@@ -1091,12 +1082,12 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
         preempt_disable();
         if (!__this_cpu_read(ne_fit_preload_node)) {
                 preempt_enable();
-                pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
+                pva = kmalloc_node(sizeof(struct vmap_area), GFP_KERNEL, node);
                 preempt_disable();
 
                 if (__this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva)) {
                         if (pva)
-                                kmem_cache_free(vmap_area_cachep, pva);
+                                kfree_rcu(pva, rcu);
                 }
         }
 
@@ -1145,7 +1136,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
                 pr_warn("vmap allocation for size %lu failed: use vmalloc= to increase size\n",
                         size);
 
-        kmem_cache_free(vmap_area_cachep, va);
+        kfree_rcu(va, rcu);
         return ERR_PTR(-EBUSY);
 }
 
@@ -1870,7 +1861,7 @@ static void vmap_init_free_space(void)
          */
         list_for_each_entry(busy, &vmap_area_list, list) {
                 if (busy->va_start - vmap_start > 0) {
-                        free = kmem_cache_zalloc(vmap_area_cachep, GFP_NOWAIT);
+                        free = kzalloc(sizeof(struct vmap_area), GFP_NOWAIT);
                         if (!WARN_ON_ONCE(!free)) {
                                 free->va_start = vmap_start;
                                 free->va_end = busy->va_start;
@@ -1885,7 +1876,7 @@ static void vmap_init_free_space(void)
         }
 
         if (vmap_end - vmap_start > 0) {
-                free = kmem_cache_zalloc(vmap_area_cachep, GFP_NOWAIT);
+                free = kzalloc(sizeof(struct vmap_area), GFP_NOWAIT);
                 if (!WARN_ON_ONCE(!free)) {
                         free->va_start = vmap_start;
                         free->va_end = vmap_end;
@@ -1903,11 +1894,6 @@ void __init vmalloc_init(void)
         struct vm_struct *tmp;
         int i;
 
-        /*
-         * Create the cache for vmap_area objects.
-         */
-        vmap_area_cachep = KMEM_CACHE(vmap_area, SLAB_PANIC);
-
         for_each_possible_cpu(i) {
                 struct vmap_block_queue *vbq;
                 struct vfree_deferred *p;
@@ -1922,7 +1908,7 @@ void __init vmalloc_init(void)
 
         /* Import existing vmlist entries. */
         for (tmp = vmlist; tmp; tmp = tmp->next) {
-                va = kmem_cache_zalloc(vmap_area_cachep, GFP_NOWAIT);
+                va = kzalloc(sizeof(struct vmap_area), GFP_NOWAIT);
                 if (WARN_ON_ONCE(!va))
                         continue;
 
@@ -3256,7 +3242,7 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
                 goto err_free2;
 
         for (area = 0; area < nr_vms; area++) {
-                vas[area] = kmem_cache_zalloc(vmap_area_cachep, GFP_KERNEL);
+                vas[area] = kzalloc(sizeof(struct vmap_area), GFP_KERNEL);
                 vms[area] = kzalloc(sizeof(struct vm_struct), GFP_KERNEL);
                 if (!vas[area] || !vms[area])
                         goto err_free;
@@ -3376,8 +3362,7 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
                         if (vas[area])
                                 continue;
 
-                        vas[area] = kmem_cache_zalloc(
-                                        vmap_area_cachep, GFP_KERNEL);
+                        vas[area] = kzalloc(sizeof(struct vmap_area), GFP_KERNEL);
                         if (!vas[area])
                                 goto err_free;
                 }
@@ -3388,7 +3373,7 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 err_free:
         for (area = 0; area < nr_vms; area++) {
                 if (vas[area])
-                        kmem_cache_free(vmap_area_cachep, vas[area]);
+                        kfree_rcu(vas[area], rcu);
                 kfree(vms[area]);
         }

--
Uladzislau Rezki