Date: Fri, 1 Jul 2022 11:56:22 +0800
From: Feng Tang
To: John Garry
Cc: Robin Murphy, Joerg Roedel, Will Deacon,
	iommu@lists.linux-foundation.org, iommu@lists.linux.dev,
	Andrew Morton, Christoph Lameter, Vlastimil Babka,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, Paul Menzel
Subject: Re: [PATCH] iommu/iova: change IOVA_MAG_SIZE to 127 to save memory
Message-ID: <20220701035622.GB14806@shbuild999.sh.intel.com>
References: <20220630073304.26945-1-feng.tang@intel.com>
 <13db50bb-57c7-0d54-3857-84b8a4591d9e@arm.com>
 <7c29d01d-d90c-58d3-a6e0-0b6c404173ac@huawei.com>
 <117b31b5-8d06-0af4-7f1c-231d86becf1d@arm.com>
 <2920df89-9975-5785-f79b-257d3052dfaf@huawei.com>
In-Reply-To: <2920df89-9975-5785-f79b-257d3052dfaf@huawei.com>
Hi John,

On Thu, Jun 30, 2022 at 11:52:18AM +0100, John Garry wrote:
> > > > >    [    4.319253] iommu: Adding device 0000:06:00.2 to group 5
> > > > >    [    4.325869] iommu: Adding device 0000:20:01.0 to group 15
> > > > >    [    4.332648] iommu: Adding device 0000:20:02.0 to group 16
> > > > >    [    4.338946] swapper/0 invoked oom-killer:
> > > > > gfp_mask=0x6040c0(GFP_KERNEL|__GFP_COMP), nodemask=(null),
> > > > > order=0, oom_score_adj=0
> > > > >    [    4.350251] swapper/0 cpuset=/ mems_allowed=0
> > > > >    [    4.354618] CPU: 0 PID: 1 Comm: swapper/0 Not
> > > > > tainted 4.19.57.mx64.282 #1
> > > > >    [    4.355612] Hardware name: Dell Inc. PowerEdge
> > > > > R7425/08V001, BIOS 1.9.3 06/25/2019
> > > > >    [    4.355612] Call Trace:
> > > > >    [    4.355612]  dump_stack+0x46/0x5b
> > > > >    [    4.355612]  dump_header+0x6b/0x289
> > > > >    [    4.355612]  out_of_memory+0x470/0x4c0
> > > > >    [    4.355612]  __alloc_pages_nodemask+0x970/0x1030
> > > > >    [    4.355612]  cache_grow_begin+0x7d/0x520
> > > > >    [    4.355612]  fallback_alloc+0x148/0x200
> > > > >    [    4.355612]  kmem_cache_alloc_trace+0xac/0x1f0
> > > > >    [    4.355612]  init_iova_domain+0x112/0x170
>
> Note for Feng Tang: This callchain does not exist anymore since we
> separated out the rcache init from the IOVA domain init. Indeed, not
> so much memory is wasted for unused rcaches now.

Thanks for the info, I also planned to remove the callstack as Robin
suggested.
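To picture what that separation means in code: the rcache allocation
moved out of init_iova_domain() into a second, opt-in step. Below is a
rough userspace model of the two-phase pattern - the phase-2 name
follows the upstream helper, but the types, sizes, and bodies here are
purely illustrative, not the real kernel code:

#include <stdio.h>
#include <stdlib.h>

struct iova_rcache {
	/* per-CPU magazines, depot, lock ... elided */
	char data[1024];
};

struct iova_domain {
	/* rbtree, granule ... elided */
	struct iova_rcache *rcaches;	/* stays NULL unless opted in */
};

/* Phase 1: basic domain init, no rcache memory allocated here. */
static void init_iova_domain(struct iova_domain *iovad)
{
	iovad->rcaches = NULL;
}

/* Phase 2: only callers that actually want the per-CPU caching pay
 * for it, so domains that never allocate IOVAs no longer pin the
 * magazine memory up front. */
static int iova_domain_init_rcaches(struct iova_domain *iovad)
{
	iovad->rcaches = calloc(64 /* stand-in for nr_cpu_ids */,
				sizeof(*iovad->rcaches));
	return iovad->rcaches ? 0 : -1;
}

int main(void)
{
	struct iova_domain d;

	init_iova_domain(&d);			/* cheap */
	if (iova_domain_init_rcaches(&d))	/* optional, can fail */
		return 1;
	printf("rcaches at %p\n", (void *)d.rcaches);
	free(d.rcaches);
	return 0;
}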
> My point really is that it would be nicer to see a modern callchain -
> but don't read that as me saying that the change in this patch is bad.
>
> > > > >    [    4.355612]  amd_iommu_domain_alloc+0x138/0x1a0
> > > > >    [    4.355612]  iommu_group_get_for_dev+0xc4/0x1a0
> > > > >    [    4.355612]  amd_iommu_add_device+0x13a/0x610
> > > > >    [    4.355612]  add_iommu_group+0x20/0x30
> > > > >    [    4.355612]  bus_for_each_dev+0x76/0xc0
> > > > >    [    4.355612]  bus_set_iommu+0xb6/0xf0
> > > > >    [    4.355612]  amd_iommu_init_api+0x112/0x132
> > > > >    [    4.355612]  state_next+0xfb1/0x1165
> > > > >    [    4.355612]  amd_iommu_init+0x1f/0x67
> > > > >    [    4.355612]  pci_iommu_init+0x16/0x3f
> > > > >    ...
> > > > >    [    4.670295] Unreclaimable slab info:
> > > > >    ...
> > > > >    [    4.857565] kmalloc-2048           59164KB      59164KB
> > > > >
> > > > > Change IOVA_MAG_SIZE from 128 to 127 to make the size of
> > > > > 'iova_magazine' 1024 bytes so that no memory will be wasted.
> > > > >
> > > > > [1]. https://lkml.org/lkml/2019/8/12/266
> > > > >
> > > > > Signed-off-by: Feng Tang
> > > > > ---
> > > > >   drivers/iommu/iova.c | 7 ++++++-
> > > > >   1 file changed, 6 insertions(+), 1 deletion(-)
> > > > >
> > > > > diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> > > > > index db77aa675145b..27634ddd9b904 100644
> > > > > --- a/drivers/iommu/iova.c
> > > > > +++ b/drivers/iommu/iova.c
> > > > > @@ -614,7 +614,12 @@ EXPORT_SYMBOL_GPL(reserve_iova);
> > > > >    * dynamic size tuning described in the paper.
> > > > >    */
> > > > > -#define IOVA_MAG_SIZE 128
> > > > > +/*
> > > > > + * As kmalloc's buffer size is fixed to power of 2, 127 is chosen to
> > > > > + * assure size of 'iova_magzine' to be 1024 bytes, so that no memory
> > > >
> > > > Typo: iova_magazine
> > > >
> > > > > + * will be wasted.
> > > > > + */
> > > > > +#define IOVA_MAG_SIZE 127
> > >
> > > I do wonder if we will see some strange new behaviour since
> > > IOVA_FQ_SIZE % IOVA_MAG_SIZE != 0 now...
> >
> > I doubt it - even if a flush queue does happen to be entirely full of
> > equal-sized IOVAs, a CPU's loaded magazines also both being perfectly
> > empty when it comes to dump a full fq seems further unlikely, so in
> > practice I don't see this making any appreciable change to the
> > likelihood of spilling back to the depot or not. In fact, the smaller
> > the magazines get, the less time would be spent flushing the depot
> > back to the rbtree, where your interesting workload falls off the
> > cliff and never catches back up with the fq timer, so at some point
> > it might even improve (unless it's also already close to the point
> > where smaller caches would bottleneck allocation)... might be
> > interesting to experiment with a wider range of magazine sizes if
> > you had the time and inclination.
>
> OK, what you are saying sounds reasonable. I just remember that when
> we analyzed the long-term aging issue we concluded that the FQ size
> and its relation to the magazine size was a factor, and this change
> makes me a little worried about new issues. Better the devil you know
> and all that...
>
> Anyway, if I get some time I might do some testing to see if this
> change has any influence.
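For reference, the size arithmetic behind the 127 is easy to check
from userspace. The layouts below mirror the iova_magazine definition
discussed in this thread (one bookkeeping word plus the pfn array);
the struct names are abbreviations of mine, and the kmalloc bucket
sizes mentioned in the comments are the standard power-of-2 ones:

#include <stdio.h>

struct mag_128 { unsigned long size; unsigned long pfns[128]; };
struct mag_127 { unsigned long size; unsigned long pfns[127]; };

int main(void)
{
	/* On 64-bit: 8 + 128 * 8 = 1032 bytes, which kmalloc serves
	 * from its 2048-byte bucket, wasting 1016 bytes per magazine. */
	printf("IOVA_MAG_SIZE 128: %zu bytes\n", sizeof(struct mag_128));

	/* 8 + 127 * 8 = 1024 bytes: fits a kmalloc-1024 object exactly. */
	printf("IOVA_MAG_SIZE 127: %zu bytes\n", sizeof(struct mag_127));
	return 0;
}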
> Another thought is whether we even need to store the size in the
> iova_magazine - mags in the depot are always full. As such, we only
> need to worry about mags loaded in the cpu rcache and their sizes, so
> maybe we could have something like this:
>
> struct iova_magazine {
> -	unsigned long size;
> 	unsigned long pfns[IOVA_MAG_SIZE];
> };
>
> @@ -631,6 +630,8 @@ struct iova_cpu_rcache {
> 	spinlock_t lock;
> 	struct iova_magazine *loaded;
> 	struct iova_magazine *prev;
> +	int loaded_size;
> +	int prev_size;
> };
>
> I haven't tried to implement it though..
>
> Thanks,
> John

I have very little knowledge of iova, so you can choose whichever
solution is better. I just wanted to raise the problem and will be
happy to see it solved :)

Thanks,
Feng
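P.S. For concreteness, here is an untested userspace sketch of John's
idea above. The loaded_size/prev_size fields are only his suggestion,
not existing kernel code, and the put helper is my own illustration of
how the fill level would be tracked outside the magazine:

#include <stdbool.h>

#define IOVA_MAG_SIZE 127

/* Magazines carry no fill count of their own: depot magazines are
 * always full by construction ... */
struct iova_magazine {
	unsigned long pfns[IOVA_MAG_SIZE];
};

/* ... so only the two per-CPU magazines need an explicit fill level,
 * tracked beside the pointers instead of inside the magazine. */
struct iova_cpu_rcache {
	struct iova_magazine *loaded;
	struct iova_magazine *prev;
	int loaded_size;
	int prev_size;
};

static bool cpu_rcache_put(struct iova_cpu_rcache *rc, unsigned long pfn)
{
	if (rc->loaded_size < IOVA_MAG_SIZE) {
		rc->loaded->pfns[rc->loaded_size++] = pfn;
		return true;
	}
	/* loaded is full: the caller would swap it with prev or push it
	 * (implicitly of size IOVA_MAG_SIZE) to the depot. */
	return false;
}

int main(void)
{
	struct iova_magazine mag = { { 0 } };
	struct iova_cpu_rcache rc = { &mag, NULL, 0, 0 };

	while (cpu_rcache_put(&rc, 0x1000))
		;
	return rc.loaded_size == IOVA_MAG_SIZE ? 0 : 1;
}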