Date: Mon, 4 Jul 2022 13:56:00 +0800
From: Feng Tang <feng.tang@intel.com>
To: Hyeonggon Yoo <42.hyeyoo@gmail.com>, Christoph Lameter
Cc: Christoph Lameter, Andrew Morton, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Vlastimil Babka, Roman Gushchin, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, dave.hansen@intel.com, Robin Murphy,
	John Garry
Subject: Re: [PATCH v1] mm/slub: enable debugging memory wasting of kmalloc
Message-ID: <20220704055600.GD62281@shbuild999.sh.intel.com>
References: <20220701135954.45045-1-feng.tang@intel.com>
	<20220701150451.GA62281@shbuild999.sh.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Sun, Jul 03, 2022 at 02:17:37PM +0000, Hyeonggon Yoo wrote:
> On Fri, Jul 01, 2022 at 11:04:51PM +0800, Feng Tang wrote:
> > Hi Christoph,
> >
> > On Fri, Jul 01, 2022 at 04:37:00PM +0200, Christoph Lameter wrote:
> > > On Fri, 1 Jul 2022, Feng Tang wrote:
> > >
> > > >  static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> > > > -			  unsigned long addr, struct kmem_cache_cpu *c)
> > > > +			  unsigned long addr, struct kmem_cache_cpu *c, unsigned int orig_size)
> > > >  {
> > >
> > > It would be good to avoid expanding the basic slab handling functions for
> > > kmalloc. Can we restrict the mods to the kmalloc related functions?
> >
> > Yes, this is the part that concerned me. I tried but haven't figured
> > out a way.
> >
> > I started implementing it several months ago, and got stuck hacking
> > several kmalloc APIs, e.g. calling dump_stack() when there is a waste
> > over 1/4 of the object_size of the kmalloc_caches[][].
> >
> > Then I found one central API which has all the needed info (object_size &
> > orig_size), from which we can yell about the waste:
> >
> >   static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_lru *lru,
> >		gfp_t gfpflags, int node, unsigned long addr, size_t orig_size)
> >
> > which I thought could still be hacky, as the existing 'alloc_traces'
> > interface, which already has the count/call-stack info, can't be reused.
> > The current solution leverages it at the cost of adding 'orig_size'
> > parameters, but I don't know how to pass the 'waste' info through, as
> > track/location is at the lowest level.
>
> If the cost of adding the orig_size parameter for the non-debugging case
> is a concern, what about doing this in a userspace script that makes use
> of the kmalloc tracepoints?
>
>   kmalloc: call_site=tty_buffer_alloc+0x43/0x90 ptr=00000000b78761e1
>   bytes_req=1056 bytes_alloc=2048 gfp_flags=GFP_ATOMIC|__GFP_NOWARN
>   accounted=false
>
> Calculating the sum of (bytes_alloc - bytes_req) for each call_site
> may be an alternative solution.
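
For illustration, a minimal userspace sketch of the tracepoint-aggregation
idea quoted above: it sums (bytes_alloc - bytes_req) per call_site from the
kmem:kmalloc event. The trace file path, the event-enable step and the
fixed-size site table are assumptions, and it only covers the kmalloc event;
it ignores kmalloc_node and kfree, which the reply below gets into.

/*
 * Hypothetical post-processor for kmem:kmalloc tracepoint output.
 * Assumes the event was enabled first, e.g.:
 *   echo 1 > /sys/kernel/tracing/events/kmem/kmalloc/enable
 * (older kernels expose the same files under /sys/kernel/debug/tracing/)
 */
#include <stdio.h>
#include <string.h>

#define MAX_SITES	1024

static struct site {
	char name[128];
	unsigned long long waste;
} sites[MAX_SITES];
static int nr_sites;

static struct site *get_site(const char *name)
{
	int i;

	for (i = 0; i < nr_sites; i++)
		if (!strcmp(sites[i].name, name))
			return &sites[i];
	if (nr_sites >= MAX_SITES)
		return NULL;
	strncpy(sites[nr_sites].name, name, sizeof(sites[0].name) - 1);
	return &sites[nr_sites++];
}

int main(void)
{
	char line[1024], name[128];
	unsigned long req, alloc;
	FILE *fp = fopen("/sys/kernel/tracing/trace", "r");
	int i;

	if (!fp) {
		perror("open trace file");
		return 1;
	}

	while (fgets(line, sizeof(line), fp)) {
		char *p = strstr(line, "call_site=");
		struct site *s;

		/* pick out call_site, bytes_req and bytes_alloc fields */
		if (!p || sscanf(p, "call_site=%127s", name) != 1)
			continue;
		p = strstr(line, "bytes_req=");
		if (!p || sscanf(p, "bytes_req=%lu", &req) != 1)
			continue;
		p = strstr(line, "bytes_alloc=");
		if (!p || sscanf(p, "bytes_alloc=%lu", &alloc) != 1)
			continue;

		s = get_site(name);
		if (s && alloc > req)
			s->waste += alloc - req;
	}
	fclose(fp);

	for (i = 0; i < nr_sites; i++)
		printf("%-48s waste=%llu\n", sites[i].name, sites[i].waste);

	return 0;
}

As the reply below notes, getting the real waste this way would also require
subtracting freed objects, which is awkward because the kfree tracepoint only
reports the pointer, not the size.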

Yes, this is doable, but it will hit some of the problems I met before.
One is that there are currently 2 alloc paths: kmalloc and kmalloc_node.
Also, we need to consider frees to calculate the real waste, and the free
tracepoint doesn't have size info (yes, we could match the pointer against
the alloc path, but the user script may need to be more complex). That's
why I love the current 'alloc_traces' interface, which has the count
(solving the free-counting problem) and the full call stack info.

As for the extra parameter cost issue, I have rethought it, and we can
leverage 'slab_alloc_node()' to solve it; the patch is much simpler now,
without adding a new parameter:

---
diff --git a/mm/slub.c b/mm/slub.c
index b1281b8654bd3..ce4568dbb0f2d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -271,6 +271,7 @@ struct track {
 #endif
 	int cpu;		/* Was running on cpu */
 	int pid;		/* Pid context */
+	unsigned long waste;	/* memory waste for a kmalloc-ed object */
 	unsigned long when;	/* When did the operation occur */
 };
 
@@ -3240,6 +3241,16 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_l
 	init = slab_want_init_on_alloc(gfpflags, s);
 
 out:
+
+#ifdef CONFIG_SLUB_DEBUG
+	if (object && s->object_size != orig_size) {
+		struct track *track;
+
+		track = get_track(s, object, TRACK_ALLOC);
+		track->waste = s->object_size - orig_size;
+	}
+#endif
+
 	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init);
 
 	return object;
@@ -5092,6 +5103,7 @@ struct location {
 	depot_stack_handle_t handle;
 	unsigned long count;
 	unsigned long addr;
+	unsigned long waste;
 	long long sum_time;
 	long min_time;
 	long max_time;
@@ -5142,7 +5154,7 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
 {
 	long start, end, pos;
 	struct location *l;
-	unsigned long caddr, chandle;
+	unsigned long caddr, chandle, cwaste;
 	unsigned long age = jiffies - track->when;
 	depot_stack_handle_t handle = 0;
 
@@ -5162,11 +5174,13 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
 		if (pos == end)
 			break;
 
-		caddr = t->loc[pos].addr;
-		chandle = t->loc[pos].handle;
-		if ((track->addr == caddr) && (handle == chandle)) {
+		l = &t->loc[pos];
+		caddr = l->addr;
+		chandle = l->handle;
+		cwaste = l->waste;
+		if ((track->addr == caddr) && (handle == chandle) &&
+		    (track->waste == cwaste)) {
 
-			l = &t->loc[pos];
 			l->count++;
 			if (track->when) {
 				l->sum_time += age;
@@ -5191,6 +5205,9 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
 			end = pos;
 		else if (track->addr == caddr && handle < chandle)
 			end = pos;
+		else if (track->addr == caddr && handle == chandle &&
+			 track->waste < cwaste)
+			end = pos;
 		else
 			start = pos;
 	}
@@ -5214,6 +5231,7 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
 		l->min_pid = track->pid;
 		l->max_pid = track->pid;
 		l->handle = handle;
+		l->waste = track->waste;
 		cpumask_clear(to_cpumask(l->cpus));
 		cpumask_set_cpu(track->cpu, to_cpumask(l->cpus));
 		nodes_clear(l->nodes);
@@ -6102,6 +6120,10 @@ static int slab_debugfs_show(struct seq_file *seq, void *v)
 		else
 			seq_puts(seq, "<not-available>");
 
+		if (l->waste)
+			seq_printf(seq, " waste=%lu/%lu",
+				   l->count * l->waste, l->waste);
+
 		if (l->sum_time != l->min_time) {
 			seq_printf(seq, " age=%ld/%llu/%ld",
 				   l->min_time, div_u64(l->sum_time, l->count),

Thanks,
Feng

> Thanks,
> Hyeonggon
>
> > Thanks,
> > Feng
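
(Usage note on the proposed diff, assuming it behaves as written: with
CONFIG_SLUB_DEBUG and allocation tracking enabled, e.g. booting with
slub_debug=U, each entry in /sys/kernel/debug/slab/<cache>/alloc_traces
would gain an extra " waste=<total>/<per-object>" field from the
seq_printf() added to slab_debugfs_show(), so wasted bytes can be read
alongside the existing per-call-stack counts.)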