Subject: Re: [PATCH 1/3] mm/slub: fix the race between validate_slab and slab_free
From: Rongwei Wang <rongwei.wang@linux.alibaba.com>
To: Vlastimil Babka, akpm@linux-foundation.org, roman.gushchin@linux.dev,
    iamjoonsoo.kim@lge.com, rientjes@google.com, penberg@kernel.org, cl@linux.com
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>
Date: Tue, 19 Jul 2022 22:43:02 +0800
References: <20220529081535.69275-1-rongwei.wang@linux.alibaba.com>
    <69462916-2d1c-dd50-2e64-b31c2b61690e@suse.cz>
    <5344e023-29f0-9285-a402-19e2a556dbb0@linux.alibaba.com>

On 7/19/22 10:21 PM, Vlastimil Babka wrote:
> On 7/19/22 16:15, Rongwei Wang wrote:
>>
> ...
>>> +
>>> +    slab_unlock(slab, &flags2);
>>> +    spin_unlock_irqrestore(&n->list_lock, flags);
>>> +    if (!ret)
>>> +        slab_fix(s, "Object at 0x%p not freed", object);
>>> +    if (slab_to_discard) {
>>> +        stat(s, FREE_SLAB);
>>> +        discard_slab(s, slab);
>>> +    }
>>> +
>>> +    return ret;
>>> +}
>> I have tested this patch, and it indeed deals with the bug that I described.
>
> Thanks.
>
>> Though I have also prepared this part of the code, your code is OK with me.
>
> Aha, feel free to post your version, maybe it's simpler? We can compare.
My code only covers the part corresponding to your free_debug_processing();
its structure looks like:

slab_free()
{
        if (kmem_cache_debug(s))
                __slab_free_debug();
        else
                __do_slab_free();
}

The __slab_free_debug() here is similar to your free_debug_processing().

+/*
+ * Slow path handling for debugging.
+ */
+static void __slab_free_debug(struct kmem_cache *s, struct slab *slab,
+                              void *head, void *tail, int cnt,
+                              unsigned long addr)
+{
+       void *prior;
+       int was_frozen;
+       struct slab new;
+       unsigned long counters;
+       struct kmem_cache_node *n = NULL;
+       unsigned long flags;
+       int ret;
+
+       stat(s, FREE_SLOWPATH);
+
+       if (kfence_free(head))
+               return;
+
+       n = get_node(s, slab_nid(slab));
+
+       spin_lock_irqsave(&n->list_lock, flags);
+       ret = free_debug_processing(s, slab, head, tail, cnt, addr);
+       if (!ret) {
+               spin_unlock_irqrestore(&n->list_lock, flags);
+               return;
+       }
+
+       do {
+               prior = slab->freelist;
+               counters = slab->counters;
+               set_freepointer(s, tail, prior);
+               new.counters = counters;
+               was_frozen = new.frozen;
+               new.inuse -= cnt;
+       } while (!cmpxchg_double_slab(s, slab,
+                                     prior, counters,
+                                     head, new.counters,
+                                     "__slab_free"));
+
+       if ((new.inuse && prior) || was_frozen) {
+               spin_unlock_irqrestore(&n->list_lock, flags);
+               if (likely(was_frozen))
+                       stat(s, FREE_FROZEN);
+
+               return;
+       }
+
+       if (!new.inuse && n->nr_partial >= s->min_partial) {
+               /* No objects left in use; the slab can be discarded. */
+               if (prior) {
+                       /* Slab was on the partial list. */
+                       remove_partial(n, slab);
+                       stat(s, FREE_REMOVE_PARTIAL);
+               } else {
+                       /* Slab must be on the full list. */
+                       remove_full(s, n, slab);
+               }
+
+               spin_unlock_irqrestore(&n->list_lock, flags);
+               stat(s, FREE_SLAB);
+               discard_slab(s, slab);
+               return;
+       }
+
+       /*
+        * Objects are left in the slab. If it was not on the partial list
+        * before, add it now.
+        */
+       if (!prior) {
+               remove_full(s, n, slab);
+               add_partial(n, slab, DEACTIVATE_TO_TAIL);
+               stat(s, FREE_ADD_PARTIAL);
+       }
+       spin_unlock_irqrestore(&n->list_lock, flags);
+}
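
For completeness, the caller side would just dispatch on kmem_cache_debug().
A rough sketch of that wrapper is below; the exact wrapper signature is only
illustrative here (it follows the argument list of the helpers above), it is
not the final patch:

static void slab_free(struct kmem_cache *s, struct slab *slab,
                      void *head, void *tail, int cnt,
                      unsigned long addr)
{
        if (kmem_cache_debug(s))
                /* Debug caches: do the whole free under n->list_lock. */
                __slab_free_debug(s, slab, head, tail, cnt, addr);
        else
                /* Other caches: keep the existing lockless fast path. */
                __do_slab_free(s, slab, head, tail, cnt, addr);
}

This way only caches with debugging enabled pay for the extra locking, and
validate_slab() can no longer race with a concurrent free on those caches.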