From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 4 Jul 2022 10:05:29 +0000
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Feng Tang
Cc: Christoph Lameter, Andrew Morton, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Vlastimil Babka, Roman Gushchin, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, dave.hansen@intel.com, Robin Murphy,
	John Garry
Subject: Re: [PATCH v1] mm/slub: enable debugging memory wasting of kmalloc
References: <20220701135954.45045-1-feng.tang@intel.com>
	<20220701150451.GA62281@shbuild999.sh.intel.com>
	<20220704055600.GD62281@shbuild999.sh.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220704055600.GD62281@shbuild999.sh.intel.com>

On Mon, Jul 04, 2022 at 01:56:00PM +0800, Feng Tang wrote:
> On Sun, Jul 03, 2022 at 02:17:37PM +0000, Hyeonggon Yoo wrote:
> > On Fri, Jul 01, 2022 at 11:04:51PM +0800, Feng Tang wrote:
> > > Hi Christoph,
> > > 
> > > On Fri, Jul 01, 2022 at 04:37:00PM +0200, Christoph Lameter wrote:
> > > > On Fri, 1 Jul 2022, Feng Tang wrote:
> > > > 
> > > > > static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> > > > > -			unsigned long addr, struct kmem_cache_cpu *c)
> > > > > +			unsigned long addr, struct kmem_cache_cpu *c, unsigned int orig_size)
> > > > > {
> > > > 
> > > > It would be good to avoid expanding the basic slab handling functions for
> > > > kmalloc. Can we restrict the mods to the kmalloc related functions?
> > > 
> > > Yes, this is the part that concerned me. I tried but haven't figured
> > > out a way.
> > > 
> > > I started implementing it several months ago, and got stuck hacking
> > > several kmalloc APIs, e.g. calling dump_stack() when there is a waste
> > > of over 1/4 of the object_size of the kmalloc_caches[][].
> > > 
> > > Then I found one central API which has all the needed info (object_size &
> > > orig_size) where we can yell about the waste:
> > > 
> > > static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_lru *lru,
> > > 		gfp_t gfpflags, int node, unsigned long addr, size_t orig_size)
> > > 
> > > which I thought could still be hacky, as the existing 'alloc_traces',
> > > which already has the count/call-stack info, can't be reused. The
> > > current solution leverages it at the cost of adding 'orig_size'
> > > parameters, but I don't know how to pass the 'waste' info through, as
> > > track/location is at the lowest level.
> > 
> > If the added cost of the orig_size parameter in the non-debugging case
> > is a concern, what about doing this in a userspace script that makes
> > use of the kmalloc tracepoints?
> > 
> > kmalloc: call_site=tty_buffer_alloc+0x43/0x90 ptr=00000000b78761e1
> > bytes_req=1056 bytes_alloc=2048 gfp_flags=GFP_ATOMIC|__GFP_NOWARN
> > accounted=false
> > 
> > Calculating the sum of (bytes_alloc - bytes_req) for each call_site
> > may be an alternative solution.
> 
> Yes, this is doable, but it will meet some of the problems I met before:
> one is that there are currently 2 alloc paths, kmalloc and kmalloc_node;
> also we need to consider frees to calculate the real waste, and the free
> tracepoint doesn't have size info (yes, we could match the pointers with
> the alloc path, but the user script would need to be more complex).
> That's why I love the current 'alloc_traces' interface, which has the
> count (solving the free counting problem) and the full call stack info.

Understood.

> As for the extra parameter cost issue, I rethought it: we can leverage
> 'slab_alloc_node()' to solve it, and the patch is much simpler now,
> without adding a new parameter:
> 
> ---
> diff --git a/mm/slub.c b/mm/slub.c
> index b1281b8654bd3..ce4568dbb0f2d 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -271,6 +271,7 @@ struct track {
>  #endif
>  	int cpu;		/* Was running on cpu */
>  	int pid;		/* Pid context */
> +	unsigned long waste;	/* memory waste for a kmalloc-ed object */
>  	unsigned long when;	/* When did the operation occur */
>  };
>  
> @@ -3240,6 +3241,16 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_l
>  	init = slab_want_init_on_alloc(gfpflags, s);
>  
>  out:
> +
> +#ifdef CONFIG_SLUB_DEBUG
> +	if (object && s->object_size != orig_size) {
> +		struct track *track;
> +
> +		track = get_track(s, object, TRACK_ALLOC);
> +		track->waste = s->object_size - orig_size;
> +	}
> +#endif
> +

This scares me. It does not check whether the cache has the SLAB_STORE_USER
flag. Also, CONFIG_SLUB_DEBUG is enabled by default, which means this still
affects the non-debugging case.

I like v1 more than the modified version.
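
Just to illustrate the first point, here is an untested sketch on top of
your modified hunk (it reuses the get_track()/TRACK_ALLOC usage from your
diff plus the existing SLAB_STORE_USER cache flag; not something I am
suggesting to keep), which at least avoids touching track space that may
not exist:

#ifdef CONFIG_SLUB_DEBUG
	/*
	 * Untested sketch: only record the waste when the cache was
	 * created with SLAB_STORE_USER, i.e. when a struct track
	 * actually exists behind the object.
	 */
	if (object && (s->flags & SLAB_STORE_USER) &&
	    s->object_size != orig_size) {
		struct track *track;

		track = get_track(s, object, TRACK_ALLOC);
		track->waste = s->object_size - orig_size;
	}
#endif

Even with that check, the code is compiled in whenever CONFIG_SLUB_DEBUG=y,
which is the default, so the non-debugging fast path still pays for the
extra test on every allocation.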

Thanks,
Hyeonggon

>  	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init);
>  
>  	return object;
> @@ -5092,6 +5103,7 @@ struct location {
>  	depot_stack_handle_t handle;
>  	unsigned long count;
>  	unsigned long addr;
> +	unsigned long waste;
>  	long long sum_time;
>  	long min_time;
>  	long max_time;
> @@ -5142,7 +5154,7 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
>  {
>  	long start, end, pos;
>  	struct location *l;
> -	unsigned long caddr, chandle;
> +	unsigned long caddr, chandle, cwaste;
>  	unsigned long age = jiffies - track->when;
>  	depot_stack_handle_t handle = 0;
>  
> @@ -5162,11 +5174,13 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
>  		if (pos == end)
>  			break;
>  
> -		caddr = t->loc[pos].addr;
> -		chandle = t->loc[pos].handle;
> -		if ((track->addr == caddr) && (handle == chandle)) {
> +		l = &t->loc[pos];
> +		caddr = l->addr;
> +		chandle = l->handle;
> +		cwaste = l->waste;
> +		if ((track->addr == caddr) && (handle == chandle) &&
> +			(track->waste == cwaste)) {
>  
> -			l = &t->loc[pos];
>  			l->count++;
>  			if (track->when) {
>  				l->sum_time += age;
> @@ -5191,6 +5205,9 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
>  			end = pos;
>  		else if (track->addr == caddr && handle < chandle)
>  			end = pos;
> +		else if (track->addr == caddr && handle == chandle &&
> +			track->waste < cwaste)
> +			end = pos;
>  		else
>  			start = pos;
>  	}
> @@ -5214,6 +5231,7 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
>  	l->min_pid = track->pid;
>  	l->max_pid = track->pid;
>  	l->handle = handle;
> +	l->waste = track->waste;
>  	cpumask_clear(to_cpumask(l->cpus));
>  	cpumask_set_cpu(track->cpu, to_cpumask(l->cpus));
>  	nodes_clear(l->nodes);
> @@ -6102,6 +6120,10 @@ static int slab_debugfs_show(struct seq_file *seq, void *v)
>  	else
>  		seq_puts(seq, "");
>  
> +	if (l->waste)
> +		seq_printf(seq, " waste=%lu/%lu",
> +			l->count * l->waste, l->waste);
> +
>  	if (l->sum_time != l->min_time) {
>  		seq_printf(seq, " age=%ld/%llu/%ld",
>  			l->min_time, div_u64(l->sum_time, l->count),
> 
> Thanks,
> Feng
> 
> > Thanks,
> > Hyeonggon
> > 
> > > Thanks,
> > > Feng
> > > 
> > > 
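
For anyone who wants to try the tracepoint route discussed above, here is a
rough, untested userspace sketch of that accounting. The field names
(call_site, bytes_req, bytes_alloc) come from the kmalloc event output
quoted earlier; the program name, the table size and the idea of feeding it
the raw ftrace buffer on stdin are only assumptions for illustration. It
ignores frees, so it only approximates what 'alloc_traces' reports:

/*
 * kmalloc_waste.c - sum (bytes_alloc - bytes_req) per call_site from
 * kmalloc/kmalloc_node trace events.
 *
 * Assumed usage:
 *   cat /sys/kernel/debug/tracing/trace | ./kmalloc_waste
 */
#include <stdio.h>
#include <string.h>

struct site {
	char name[128];		/* call_site symbol+offset */
	unsigned long count;	/* number of allocations seen */
	unsigned long waste;	/* sum of bytes_alloc - bytes_req */
};

static struct site sites[4096];
static int nr_sites;

static struct site *find_site(const char *name)
{
	int i;

	for (i = 0; i < nr_sites; i++)
		if (!strcmp(sites[i].name, name))
			return &sites[i];

	if (nr_sites >= 4096)
		return NULL;

	strncpy(sites[nr_sites].name, name, sizeof(sites[0].name) - 1);
	return &sites[nr_sites++];
}

int main(void)
{
	char line[1024], site_name[128];
	unsigned long req, alloc;
	struct site *s;
	int i;

	while (fgets(line, sizeof(line), stdin)) {
		char *p = strstr(line, "call_site=");

		/* Only the kmalloc/kmalloc_node events carry these fields. */
		if (!p || sscanf(p, "call_site=%127s", site_name) != 1)
			continue;
		p = strstr(line, "bytes_req=");
		if (!p || sscanf(p, "bytes_req=%lu", &req) != 1)
			continue;
		p = strstr(line, "bytes_alloc=");
		if (!p || sscanf(p, "bytes_alloc=%lu", &alloc) != 1)
			continue;

		s = find_site(site_name);
		if (s && alloc >= req) {
			s->count++;
			s->waste += alloc - req;
		}
	}

	for (i = 0; i < nr_sites; i++)
		printf("%-40s count=%lu waste=%lu\n",
		       sites[i].name, sites[i].count, sites[i].waste);

	return 0;
}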