Date: Wed, 13 Jul 2022 10:22:02 +0000
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Rongwei Wang
Cc: akpm@linux-foundation.org, vbabka@suse.cz, roman.gushchin@linux.dev,
	iamjoonsoo.kim@lge.com, rientjes@google.com, penberg@kernel.org,
	cl@gentwo.de, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Feng Tang
Subject: Re: [PATCH v2 1/3] mm/slub: fix the race between validate_slab and slab_free
References: <20220712022807.44113-1-rongwei.wang@linux.alibaba.com>
In-Reply-To: <20220712022807.44113-1-rongwei.wang@linux.alibaba.com>
On Tue, Jul 12, 2022 at 10:28:05AM +0800, Rongwei Wang wrote:
> In use cases where slabs are allocated and freed frequently, error
> messages such as "Left Redzone overwritten" or "First byte 0xbb
> instead of 0xcc" may be printed when validating slabs. That is
> because an object can have been filled with SLAB_RED_INACTIVE but
> not yet added to the slab's freelist, and validation is liable to
> run between those two states.
>
> This does not mean the slab cannot work stably, but these confusing
> messages disturb slab debugging.
>
> Signed-off-by: Rongwei Wang
> ---
>  mm/slub.c | 43 +++++++++++++++++++++++++------------------
>  1 file changed, 25 insertions(+), 18 deletions(-)
>

This makes the code more complex. Part of me thinks it may be more
pleasant to split out a separate implementation for allocating from
(and freeing to) caches with debugging enabled. That would keep both
paths simpler.
something like:

	__slab_alloc() {
		if (kmem_cache_debug(s))
			slab_alloc_debug()
		else
			___slab_alloc()
	}

	slab_free() {
		if (kmem_cache_debug(s))
			slab_free_debug()
		else
			__do_slab_free()
	}

See also: https://lore.kernel.org/lkml/faf416b9-f46c-8534-7fb7-557c046a564d@suse.cz/

> diff --git a/mm/slub.c b/mm/slub.c
> index b1281b8654bd..e950d8df8380 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1391,18 +1391,16 @@ static noinline int free_debug_processing(
> 			void *head, void *tail, int bulk_cnt,
> 			unsigned long addr)
>  {
> -	struct kmem_cache_node *n = get_node(s, slab_nid(slab));
>  	void *object = head;
>  	int cnt = 0;
> -	unsigned long flags, flags2;
> +	unsigned long flags;
>  	int ret = 0;
>  	depot_stack_handle_t handle = 0;
>
>  	if (s->flags & SLAB_STORE_USER)
>  		handle = set_track_prepare();
>
> -	spin_lock_irqsave(&n->list_lock, flags);
> -	slab_lock(slab, &flags2);
> +	slab_lock(slab, &flags);
>
>  	if (s->flags & SLAB_CONSISTENCY_CHECKS) {
>  		if (!check_slab(s, slab))
> @@ -1435,8 +1433,7 @@ static noinline int free_debug_processing(
>  		slab_err(s, slab, "Bulk freelist count(%d) invalid(%d)\n",
>  			 bulk_cnt, cnt);
>
> -	slab_unlock(slab, &flags2);
> -	spin_unlock_irqrestore(&n->list_lock, flags);
> +	slab_unlock(slab, &flags);
>  	if (!ret)
>  		slab_fix(s, "Object at 0x%p not freed", object);
>  	return ret;
> @@ -3330,7 +3327,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>
>  {
>  	void *prior;
> -	int was_frozen;
> +	int was_frozen, to_take_off = 0;
>  	struct slab new;
>  	unsigned long counters;
>  	struct kmem_cache_node *n = NULL;
> @@ -3341,14 +3338,23 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>  	if (kfence_free(head))
>  		return;
>
> -	if (kmem_cache_debug(s) &&
> -	    !free_debug_processing(s, slab, head, tail, cnt, addr))
> -		return;
> +	n = get_node(s, slab_nid(slab));
> +	if (kmem_cache_debug(s)) {
> +		int ret;
>
> -	do {
> -		if (unlikely(n)) {
> +		spin_lock_irqsave(&n->list_lock, flags);
> +		ret = free_debug_processing(s, slab, head,
> +					    tail, cnt, addr);
> +		if (!ret) {
>  			spin_unlock_irqrestore(&n->list_lock, flags);
> -			n = NULL;
> +			return;
> +		}
> +	}
> +
> +	do {
> +		if (unlikely(to_take_off)) {
> +			if (!kmem_cache_debug(s))
> +				spin_unlock_irqrestore(&n->list_lock, flags);
> +			to_take_off = 0;
>  		}
>  		prior = slab->freelist;
>  		counters = slab->counters;
> @@ -3369,8 +3375,6 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>  			new.frozen = 1;
>
>  		} else { /* Needs to be taken off a list */
> -
> -			n = get_node(s, slab_nid(slab));
>  			/*
>  			 * Speculatively acquire the list_lock.
>  			 * If the cmpxchg does not succeed then we may
>  			 * drop the list_lock without any processing.
>  			 *
>  			 * Otherwise the list_lock will synchronize with
>  			 * other processors updating the list of slabs.
>  			 */
> -			spin_lock_irqsave(&n->list_lock, flags);
> +			if (!kmem_cache_debug(s))
> +				spin_lock_irqsave(&n->list_lock, flags);
>
> +			to_take_off = 1;
>  		}
>  	}
>
> @@ -3389,8 +3395,9 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>  			       head, new.counters,
>  			       "__slab_free"));
>
> -	if (likely(!n)) {
> -
> +	if (likely(!to_take_off)) {
> +		if (kmem_cache_debug(s))
> +			spin_unlock_irqrestore(&n->list_lock, flags);
>  		if (likely(was_frozen)) {
>  			/*
>  			 * The list lock was not taken therefore no list
> --
> 2.27.0
>