Date: Sun, 14 Aug 2022 23:54:09 +0900
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Rongwei Wang, Christoph Lameter, Joonsoo Kim, David Rientjes,
	Pekka Enberg, Roman Gushchin, linux-mm@kvack.org,
	Sebastian Andrzej Siewior, Thomas Gleixner, Mike Galbraith
Subject: Re: [PATCH 3/5] mm/slub: remove slab_lock() usage for debug operations
References: <20220812091426.18418-1-vbabka@suse.cz> <20220812091426.18418-4-vbabka@suse.cz>
In-Reply-To: <20220812091426.18418-4-vbabka@suse.cz>

On Fri, Aug 12, 2022 at 11:14:24AM +0200, Vlastimil Babka wrote:
> All alloc and free operations on debug caches are now serialized by
> n->list_lock, so we can remove slab_lock() usage in validate_slab()
> and list_slab_objects() as those also happen under n->list_lock.
> 
> Note the usage in list_slab_objects() could happen even on non-debug
> caches, but only during cache shutdown time so there should not be any
> parallel freeing activity anymore. Except for buggy slab users, but in
> that case the slab_lock() would not help against the common cmpxchg
> based fast paths (in non-debug caches) anyway.
> 
> Also adjust documentation comments accordingly.
> 
> Suggested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Signed-off-by: Vlastimil Babka
> ---
>  mm/slub.c | 19 ++++++++-----------
>  1 file changed, 8 insertions(+), 11 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index fa7efd2d98be..32b79bc3ae6d 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -50,7 +50,7 @@
>   * 1. slab_mutex (Global Mutex)
>   * 2. node->list_lock (Spinlock)
>   * 3. kmem_cache->cpu_slab->lock (Local lock)
> - * 4. slab_lock(slab) (Only on some arches or for debugging)
> + * 4. slab_lock(slab) (Only on some arches)
>   * 5. object_map_lock (Only for debugging)
>   *
>   * slab_mutex
> @@ -64,8 +64,9 @@
>   * The slab_lock is a wrapper around the page lock, thus it is a bit
>   * spinlock.
>   *
> - * The slab_lock is only used for debugging and on arches that do not
> - * have the ability to do a cmpxchg_double. It only protects:
> + * The slab_lock is only used on arches that do not have the ability
> + * to do a cmpxchg_double. It only protects:
> + *
>   * A. slab->freelist -> List of free objects in a slab
>   * B. slab->inuse -> Number of objects in use
>   * C. slab->objects -> Number of objects in slab
> @@ -94,6 +95,9 @@
>   * allocating a long series of objects that fill up slabs does not require
>   * the list lock.
>   *
> + * For debug caches, all allocations are forced to go through a list_lock
> + * protected region to serialize against concurrent validation.
> + *
>   * cpu_slab->lock local lock
>   *
>   * This locks protect slowpath manipulation of all kmem_cache_cpu fields
> @@ -4369,7 +4373,6 @@ static void list_slab_objects(struct kmem_cache *s, struct slab *slab,
>  	void *p;
>  
>  	slab_err(s, slab, text, s->name);
> -	slab_lock(slab, &flags);
>  
>  	map = get_map(s, slab);
>  	for_each_object(p, s, addr, slab->objects) {
> @@ -4380,7 +4383,6 @@ static void list_slab_objects(struct kmem_cache *s, struct slab *slab,
>  		}
>  	}
>  	put_map(map);
> -	slab_unlock(slab, &flags);
>  #endif
>  }
>  
> @@ -5107,12 +5109,9 @@ static void validate_slab(struct kmem_cache *s, struct slab *slab,
>  {
>  	void *p;
>  	void *addr = slab_address(slab);
> -	unsigned long flags;
> -
> -	slab_lock(slab, &flags);
>  
>  	if (!check_slab(s, slab) || !on_freelist(s, slab, NULL))
> -		goto unlock;
> +		return;
>  
>  	/* Now we know that a valid freelist exists */
>  	__fill_map(obj_map, s, slab);
> @@ -5123,8 +5122,6 @@ static void validate_slab(struct kmem_cache *s, struct slab *slab,
>  		if (!check_object(s, slab, p, val))
>  			break;
>  	}
> -unlock:
> -	slab_unlock(slab, &flags);
>  }
>  
>  static int validate_slab_node(struct kmem_cache *s,
> -- 
> 2.37.1
> 

Looks good to me.

Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

-- 
Thanks,
Hyeonggon