From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Fri, 1 Jul 2022 18:29:20 +0900
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Feng Tang
Cc: Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Vlastimil Babka, Roman Gushchin, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, dave.hansen@intel.com
Subject: Re: [RFC PATCH] mm/slub: enable debugging memory wasting of kmalloc
References: <20220630014715.73330-1-feng.tang@intel.com>
	<20220701022330.GA14806@shbuild999.sh.intel.com>
In-Reply-To: <20220701022330.GA14806@shbuild999.sh.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
On Fri, Jul 01, 2022 at 10:23:30AM +0800, Feng Tang wrote:
> Hi Hyeonggon,
>
> Thanks for the review!
>
> On Thu, Jun 30, 2022 at 11:38:26PM +0900, Hyeonggon Yoo wrote:
> > On Thu, Jun 30, 2022 at 09:47:15AM +0800, Feng Tang wrote:
> > > kmalloc's API family is critical for mm, with one shortcoming that
> > > its object size is fixed to be a power of 2.
> > > When a user requests memory
> > > for '2^n + 1' bytes, actually 2^(n+1) bytes will be allocated, so
> > > in the worst case around 50% of the memory space is wasted.
> > >
> > > We've met a kernel boot OOM panic, and from the dumped slab info:
> > >
> > > [   26.062145] kmalloc-2k            814056KB     814056KB
> > >
> > > From debug we found there are a huge number of 'struct iova_magazine',
> > > whose size is 1032 bytes (1024 + 8), so each allocation will waste
> > > 1016 bytes. Though the issue is solved by giving the right (bigger)
> > > size of RAM, it is still better to optimize the size (either use
> > > a kmalloc-friendly size or create a dedicated slab for it).
> > >
> > > And from the lkml archive, there was another crash-kernel OOM case [1]
> > > back in 2019, which seems to be related to a similar slab waste
> > > situation, as the log is similar:
> > >
> > > [    4.332648] iommu: Adding device 0000:20:02.0 to group 16
> > > [    4.338946] swapper/0 invoked oom-killer: gfp_mask=0x6040c0(GFP_KERNEL|__GFP_COMP), nodemask=(null), order=0, oom_score_adj=0
> > > ...
> > > [    4.857565] kmalloc-2048          59164KB      59164KB
> > >
> > > The crash kernel only has 256M memory, and 59M is pretty big here.
> > >
> > > So add a way to track each kmalloc's memory waste info, and leverage
> > > the existing SLUB debug framework to show its call stack info, so
> > > that users can evaluate the waste situation, identify some hot spots
> > > and optimize accordingly, for a better utilization of memory.
> > >
> > > The waste info is integrated into the existing interface
> > > /sys/kernel/debug/slab/kmalloc-xx/alloc_traces; one example for
> > > 'kmalloc-4k' after boot is:
> > >
> > >  126 ixgbe_alloc_q_vector+0xa5/0x4a0 [ixgbe] waste: 233856/1856 age=1493302/1493830/1494358 pid=1284 cpus=32 nodes=1
> > >         __slab_alloc.isra.86+0x52/0x80
> > >         __kmalloc_node+0x143/0x350
> > >         ixgbe_alloc_q_vector+0xa5/0x4a0 [ixgbe]
> > >         ixgbe_init_interrupt_scheme+0x1a6/0x730 [ixgbe]
> > >         ixgbe_probe+0xc8e/0x10d0 [ixgbe]
> > >         local_pci_probe+0x42/0x80
> > >         work_for_cpu_fn+0x13/0x20
> > >         process_one_work+0x1c5/0x390
> > >         worker_thread+0x1b9/0x360
> > >         kthread+0xe6/0x110
> > >         ret_from_fork+0x1f/0x30
> > >
> > > which means that in the 'kmalloc-4k' slab there are 126 requests of
> > > 2240 bytes which got a 4KB space (wasting 1856 bytes each and
> > > 233856 bytes in total). And when the system starts some real
> > > workload like multiple docker instances, the waste is more severe.
> > >
> > > [1]. https://lkml.org/lkml/2019/8/12/266
> > >
> > > Signed-off-by: Feng Tang
> > > ---
> > > Note:
> > >   * this is based on linux-next tree with tag next-20220628
> >
> > So this makes use of the fact that orig_size differs from
> > s->object_size when allocated from kmalloc, and for non-kmalloc
> > caches it doesn't track waste because s->object_size == orig_size.
> > Am I following?
>
> Yes, you are right.
>
> > And then it has the overhead of a 'waste' field for every non-kmalloc
> > object, because the track is saved per object. Also the field is not
> > used at free.
> > (Maybe that would be okay as it's only for debugging, just noting.)
>
> Yes, the field itself is a 'waste' for non-kmalloc objects :) I do
> have another patch to add an option for this:
>
> +config SLUB_DEBUG_KMALLOC_WASTE
> +	bool "Enable kmalloc memory waste debug"
> +	depends on SLUB_DEBUG && DEBUG_FS
> ...
>
> I didn't post it, due to the same debugging consideration you raised,
> and I can add it back if it's really necessary.
> Let's see how others think :)

I'm okay with the current patch.

> > >  mm/slub.c | 45 ++++++++++++++++++++++++++++++---------------
> > >  1 file changed, 30 insertions(+), 15 deletions(-)
> > >
> > > diff --git a/mm/slub.c b/mm/slub.c
> > > index 26b00951aad1..bc4f9d4fb1e2 100644
> > > --- a/mm/slub.c
> > > +++ b/mm/slub.c
> > > @@ -271,6 +271,7 @@ struct track {
> > >  #endif
> > >  	int cpu;		/* Was running on cpu */
> > >  	int pid;		/* Pid context */
> > > +	unsigned long waste;	/* memory waste for a kmalloc-ed object */
> > >  	unsigned long when;	/* When did the operation occur */
> > >  };
> > >
> > > @@ -747,6 +748,7 @@ static inline depot_stack_handle_t set_track_prepare(void)
> > >
> > >  static void set_track_update(struct kmem_cache *s, void *object,
> > >  			     enum track_item alloc, unsigned long addr,
> > > +			     unsigned long waste,
> > >  			     depot_stack_handle_t handle)
> > >  {
> > >  	struct track *p = get_track(s, object, alloc);
> > > @@ -758,14 +760,16 @@ static void set_track_update(struct kmem_cache *s, void *object,
> > >  	p->cpu = smp_processor_id();
> > >  	p->pid = current->pid;
> > >  	p->when = jiffies;
> > > +	p->waste = waste;
> > >  }
> > >
> > >  static __always_inline void set_track(struct kmem_cache *s, void *object,
> > > -				      enum track_item alloc, unsigned long addr)
> > > +				      enum track_item alloc, unsigned long addr,
> > > +				      unsigned long waste)
> > >  {
> > >  	depot_stack_handle_t handle = set_track_prepare();
> > >
> > > -	set_track_update(s, object, alloc, addr, handle);
> > > +	set_track_update(s, object, alloc, addr, waste, handle);
> > >  }
> > >
> > >  static void init_tracking(struct kmem_cache *s, void *object)
> > > @@ -1325,7 +1329,9 @@ static inline int alloc_consistency_checks(struct kmem_cache *s,
> > >
> > >  static noinline int alloc_debug_processing(struct kmem_cache *s,
> > >  					struct slab *slab,
> > > -					void *object, unsigned long addr)
> > > +					void *object, unsigned long addr,
> > > +					unsigned long waste
> > > +					)
> > >  {
> > >  	if (s->flags & SLAB_CONSISTENCY_CHECKS) {
> > >  		if (!alloc_consistency_checks(s, slab, object))
> > > @@ -1334,7 +1340,7 @@ static noinline int alloc_debug_processing(struct kmem_cache *s,
> > >
> > >  	/* Success perform special debug activities for allocs */
> > >  	if (s->flags & SLAB_STORE_USER)
> > > -		set_track(s, object, TRACK_ALLOC, addr);
> > > +		set_track(s, object, TRACK_ALLOC, addr, waste);
> > >  	trace(s, slab, object, 1);
> > >  	init_object(s, object, SLUB_RED_ACTIVE);
> > >  	return 1;
> > > @@ -1398,6 +1404,7 @@ static noinline int free_debug_processing(
> > >  	int ret = 0;
> > >  	depot_stack_handle_t handle = 0;
> > >
> > > +	/* TODO: feng: we can slab->waste -= track?) or in set_track */
> > >  	if (s->flags & SLAB_STORE_USER)
> > >  		handle = set_track_prepare();
> > >
> > > @@ -1418,7 +1425,7 @@
> > >  	}
> > >
> > >  	if (s->flags & SLAB_STORE_USER)
> > > -		set_track_update(s, object, TRACK_FREE, addr, handle);
> > > +		set_track_update(s, object, TRACK_FREE, addr, 0, handle);
> > >  	trace(s, slab, object, 0);
> > >  	/* Freepointer not overwritten by init_object(), SLAB_POISON moved it */
> > >  	init_object(s, object, SLUB_RED_INACTIVE);
> > > @@ -2905,7 +2912,7 @@ static inline void *get_freelist(struct kmem_cache *s, struct slab *slab)
> > >   * already disabled (which is the case for bulk allocation).
> > >   */
> > >  static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> > > -			  unsigned long addr, struct kmem_cache_cpu *c)
> > > +			  unsigned long addr, struct kmem_cache_cpu *c, unsigned int orig_size)
> > >  {
> > >  	void *freelist;
> > >  	struct slab *slab;
> > > @@ -3048,7 +3055,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> > >  check_new_slab:
> > >
> > >  	if (kmem_cache_debug(s)) {
> > > -		if (!alloc_debug_processing(s, slab, freelist, addr)) {
> > > +		if (!alloc_debug_processing(s, slab, freelist, addr, s->object_size - orig_size)) {
> > >  			/* Slab failed checks. Next slab needed */
> > >  			goto new_slab;
> > >  		} else {
> > > @@ -3102,7 +3109,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> > >   * pointer.
> > >   */
> > >  static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> > > -			  unsigned long addr, struct kmem_cache_cpu *c)
> > > +			  unsigned long addr, struct kmem_cache_cpu *c, unsigned int orig_size)
> > >  {
> > >  	void *p;
> > >
> > > @@ -3115,7 +3122,7 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> > >  	c = slub_get_cpu_ptr(s->cpu_slab);
> > >  #endif
> > >
> > > -	p = ___slab_alloc(s, gfpflags, node, addr, c);
> > > +	p = ___slab_alloc(s, gfpflags, node, addr, c, orig_size);
> > >  #ifdef CONFIG_PREEMPT_COUNT
> > >  	slub_put_cpu_ptr(s->cpu_slab);
> > >  #endif
> > > @@ -3206,7 +3213,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_l
> > >  	 */
> > >  	if (IS_ENABLED(CONFIG_PREEMPT_RT) ||
> > >  	    unlikely(!object || !slab || !node_match(slab, node))) {
> > > -		object = __slab_alloc(s, gfpflags, node, addr, c);
> > > +		object = __slab_alloc(s, gfpflags, node, addr, c, orig_size);
> > >  	} else {
> > >  		void *next_object = get_freepointer_safe(s, object);
> > >
> > > @@ -3709,7 +3716,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
> > >  			 * of re-populating per CPU c->freelist
> > >  			 */
> > >  			p[i] = ___slab_alloc(s, flags, NUMA_NO_NODE,
> > > -					    _RET_IP_, c);
> > > +					    _RET_IP_, c, size);
> >
> > This looks wrong. size here is size of array.
> > Maybe just s->object_size instead of size?
>
> Good catch! should be s->object_size. thanks!
> > >  			if (unlikely(!p[i]))
> > >  				goto error;
> > >
> > > @@ -5068,6 +5075,7 @@ struct location {
> > >  	depot_stack_handle_t handle;
> > >  	unsigned long count;
> > >  	unsigned long addr;
> > > +	unsigned long waste;
> > >  	long long sum_time;
> > >  	long min_time;
> > >  	long max_time;
> > > @@ -5138,11 +5146,12 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
> > >  		if (pos == end)
> > >  			break;
> > >
> > > -		caddr = t->loc[pos].addr;
> > > -		chandle = t->loc[pos].handle;
> > > -		if ((track->addr == caddr) && (handle == chandle)) {
> > > +		l = &t->loc[pos];
> > > +		caddr = l->addr;
> > > +		chandle = l->handle;
> > > +		if ((track->addr == caddr) && (handle == chandle) &&
> > > +			(track->waste == l->waste)) {
> > >
> > > -			l = &t->loc[pos];
> > >  			l->count++;
> > >  			if (track->when) {
> > >  				l->sum_time += age;
> > > @@ -5190,6 +5199,7 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
> > >  			l->min_pid = track->pid;
> > >  			l->max_pid = track->pid;
> > >  			l->handle = handle;
> > > +			l->waste = track->waste;
> >
> > I think this may be fooled when there are different waste values
> > from the same caller (i.e. when kmalloc_track_caller() is used),
>
> Yes, with the patch, we found quite some cases where the same caller
> requests different sizes.
>
> > because the array is sorted by caller address, but not sorted by waste.
>
> In the patch we have in add_location():
>
> +		if ((track->addr == caddr) && (handle == chandle) &&
> +			(track->waste == l->waste)) {
>
> Do you mean the following is missed?
>
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -5176,6 +5176,8 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
>  			end = pos;
>  		else if (track->addr == caddr && handle < chandle)
>  			end = pos;
> +		else if (track->addr == caddr && handle == chandle && track->waste < l->waste)
> +			end = pos;
>  		else
>  			start = pos;
>

Yes. Exactly. :)

Thanks,
Hyeonggon

> > And writing this I noticed that it already can be fooled now :)
> > It's also not sorted by handle.
>
> > >  		cpumask_clear(to_cpumask(l->cpus));
> > >  		cpumask_set_cpu(track->cpu, to_cpumask(l->cpus));
> > >  		nodes_clear(l->nodes);
> > > @@ -6078,6 +6088,11 @@ static int slab_debugfs_show(struct seq_file *seq, void *v)
> > >  		else
> > >  			seq_puts(seq, "<not-available>");
> > >
> > > +
> > > +		if (l->waste)
> > > +			seq_printf(seq, " waste: %lu/%lu",
> >
> > Maybe waste=%lu/%lu like others?
>
> Sure, will follow current style.
>
> Thanks,
> Feng