Date: Thu, 21 Dec 2023 13:50:44 -0800 (PST)
From: David Rientjes <rientjes@google.com>
To: Ruipeng Qi
Cc: cl@linux.com, penberg@kernel.org, iamjoonsoo.kim@lge.com,
    akpm@linux-foundation.org, vbabka@suse.cz, roman.gushchin@linux.dev,
    42.hyeyoo@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    qiruipeng
Subject: Re: [RFC PATCH 6/7] mm/slub: make slab data more observable
In-Reply-To: <20231221133717.882-1-ruipengqi7@gmail.com>
Message-ID: <9cb6a9a3-25fb-22ca-8b62-52c60519bee2@google.com>
References: <20231221133717.882-1-ruipengqi7@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
On Thu, 21 Dec 2023, Ruipeng Qi wrote:

> From: qiruipeng
>
> Osdump is interested in data stored within the slab subsystem. Add the
> full list back into the corresponding struct, and record full slabs in
> the respective functions instead of enabling SLUB_DEBUG directly, which
> would introduce noticeable overhead.
>
> Signed-off-by: qiruipeng

Hi Ruipeng, please make sure to send your patch sets as a single email
thread (all patches 1-7 should be a reply to your 0/7 cover letter).

There is some other feedback on previous patches in this series which
refers to alternatives, so I think the cover letter to the patch series
will need to spell out why we need a brand new solution to this.

That said, from the slab perspective, this is basically splitting out a
part of SLUB_DEBUG into a single feature.  Likely best to propose it as a
feature that SLUB_DEBUG would then directly enable itself rather than
duplicating code in definitions.  (A rough sketch of what that could look
like follows the quoted patch below.)

That is, assuming the question about why we need a new solution for this
can be resolved in the cover letter of a v2.

> ---
>  mm/slab.h |  2 ++
>  mm/slub.c | 38 +++++++++++++++++++++++++++++++++++++-
>  2 files changed, 39 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slab.h b/mm/slab.h
> index 3d07fb428393..a42a54c9c5de 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -799,6 +799,8 @@ struct kmem_cache_node {
>  	atomic_long_t nr_slabs;
>  	atomic_long_t total_objects;
>  	struct list_head full;
> +#elif defined(CONFIG_OS_MINIDUMP)
> +	struct list_head full;
>  #endif
>  #endif
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 63d281dfacdb..1a496ec945b6 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1730,10 +1730,26 @@ static inline int check_object(struct kmem_cache *s, struct slab *slab,
>  static inline depot_stack_handle_t set_track_prepare(void) { return 0; }
>  static inline void set_track(struct kmem_cache *s, void *object,
>  			enum track_item alloc, unsigned long addr) {}
> +#ifndef CONFIG_OS_MINIDUMP
>  static inline void add_full(struct kmem_cache *s, struct kmem_cache_node *n,
>  			struct slab *slab) {}
>  static inline void remove_full(struct kmem_cache *s, struct kmem_cache_node *n,
>  			struct slab *slab) {}
> +#else
> +static inline void add_full(struct kmem_cache *s,
> +			struct kmem_cache_node *n, struct slab *slab)
> +{
> +	lockdep_assert_held(&n->list_lock);
> +	list_add(&slab->slab_list, &n->full);
> +}
> +
> +static inline void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, struct slab *slab)
> +{
> +	lockdep_assert_held(&n->list_lock);
> +	list_del(&slab->slab_list);
> +}
> +#endif
> +
>  slab_flags_t kmem_cache_flags(unsigned int object_size,
>  	slab_flags_t flags, const char *name)
>  {
> @@ -2570,6 +2586,14 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
>  		spin_lock_irqsave(&n->list_lock, flags);
>  	} else {
>  		mode = M_FULL_NOLIST;
> +#ifdef CONFIG_OS_MINIDUMP
> +		/*
> +		 * Taking the spinlock removes the possibility that
> +		 * acquire_slab() will see a slab that is frozen
> +		 */
> +		spin_lock_irqsave(&n->list_lock, flags);
> +
> +#endif
>  	}
>
>
> @@ -2577,7 +2601,11 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
>  				old.freelist, old.counters,
>  				new.freelist, new.counters,
>  				"unfreezing slab")) {
> +#ifndef CONFIG_OS_MINIDUMP
>  		if (mode == M_PARTIAL)
> +#else
> +		if (mode != M_FREE)
> +#endif
>  			spin_unlock_irqrestore(&n->list_lock, flags);
>  		goto redo;
>  	}
> @@ -2592,6 +2620,10 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
>  		discard_slab(s, slab);
>  		stat(s, FREE_SLAB);
>  	} else if (mode == M_FULL_NOLIST) {
> +#ifdef CONFIG_OS_MINIDUMP
> +		add_full(s, n, slab);
> +		spin_unlock_irqrestore(&n->list_lock, flags);
> +#endif
>  		stat(s, DEACTIVATE_FULL);
>  	}
>  }
> @@ -4202,6 +4234,9 @@ init_kmem_cache_node(struct kmem_cache_node *n)
>  	atomic_long_set(&n->nr_slabs, 0);
>  	atomic_long_set(&n->total_objects, 0);
>  	INIT_LIST_HEAD(&n->full);
> +#elif defined(CONFIG_OS_MINIDUMP)
> +	INIT_LIST_HEAD(&n->full);
> +
>  #endif
>  }
>
> @@ -5009,7 +5044,8 @@ static struct kmem_cache * __init bootstrap(struct kmem_cache *static_cache)
>  	list_for_each_entry(p, &n->partial, slab_list)
>  		p->slab_cache = s;
>
> -#ifdef CONFIG_SLUB_DEBUG
> +#if defined(CONFIG_SLUB_DEBUG) || \
> +	defined(CONFIG_OS_MINIDUMP)
>  	list_for_each_entry(p, &n->full, slab_list)
>  		p->slab_cache = s;
>  #endif
> --
> 2.17.1
>
>
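To make the restructuring suggestion above concrete, here is a minimal,
untested sketch.  The symbol name CONFIG_SLAB_FULL_LIST and the exact
Kconfig wiring are illustrative assumptions only, not part of the posted
patch, and the sketch ignores the SLAB_STORE_USER gating that the existing
SLUB_DEBUG add_full() applies.  The point is that SLUB_DEBUG and
OS_MINIDUMP would both select one shared symbol, so slab.h and slub.c test
a single option instead of open-coding #elif branches:

    # mm/Kconfig: hidden helper symbol, selected rather than user-visible
    config SLAB_FULL_LIST
    	bool

    config SLUB_DEBUG
    	...
    	select SLAB_FULL_LIST

    config OS_MINIDUMP
    	...
    	select SLAB_FULL_LIST

    /* mm/slab.h: one guard, no duplicated #elif branch */
    struct kmem_cache_node {
    	...
    #ifdef CONFIG_SLAB_FULL_LIST
    	struct list_head full;
    #endif
    };

    /* mm/slub.c: a single pair of definitions instead of per-config copies */
    #ifdef CONFIG_SLAB_FULL_LIST
    static inline void add_full(struct kmem_cache *s,
    			    struct kmem_cache_node *n, struct slab *slab)
    {
    	lockdep_assert_held(&n->list_lock);
    	list_add(&slab->slab_list, &n->full);
    }

    static inline void remove_full(struct kmem_cache *s,
    			       struct kmem_cache_node *n, struct slab *slab)
    {
    	lockdep_assert_held(&n->list_lock);
    	list_del(&slab->slab_list);
    }
    #else
    static inline void add_full(struct kmem_cache *s,
    			    struct kmem_cache_node *n, struct slab *slab) {}
    static inline void remove_full(struct kmem_cache *s,
    			       struct kmem_cache_node *n, struct slab *slab) {}
    #endif

With something along those lines, the deactivate_slab() and
init_kmem_cache_node() changes would also only need to test the one
symbol, which keeps the ifdeffery in slub.c to a minimum.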