From: Robin Murphy
Date: Tue, 31 May 2022 20:48:24 +0100
Subject: Re: [PATCH 04/10] dmapool: improve accuracy of debug statistics
To: Tony Battersby, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: iommu@lists.linux-foundation.org, kernel-team@fb.com, Matthew Wilcox, Keith Busch, Andy Shevchenko, Tony Lindgren
References: <9b08ab7c-b80b-527d-9adf-7716b0868fbc@cybernetics.com>

On 2022-05-31 19:17, Tony Battersby wrote:
> The "total number of blocks in pool" debug statistic currently does not
> take the boundary value into account, so it diverges from the "total
> number of blocks in use" statistic when a boundary is in effect. Add a
> calculation for the number of blocks per allocation that takes the
> boundary into account, and use it to replace the inaccurate calculation.
>
> This depends on the patch "dmapool: fix boundary comparison" for the
> calculated blks_per_alloc value to be correct.
>
> Signed-off-by: Tony Battersby
> ---
>  mm/dmapool.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/mm/dmapool.c b/mm/dmapool.c
> index 782143144a32..9e30f4425dea 100644
> --- a/mm/dmapool.c
> +++ b/mm/dmapool.c
> @@ -47,6 +47,7 @@ struct dma_pool {	/* the pool */
>  	struct device *dev;
>  	unsigned int allocation;
>  	unsigned int boundary;
> +	unsigned int blks_per_alloc;
>  	char name[32];
>  	struct list_head pools;
>  };
> @@ -92,8 +93,7 @@ static ssize_t pools_show(struct device *dev, struct device_attribute *attr, cha
>  		/* per-pool info, no real statistics yet */
>  		temp = scnprintf(next, size, "%-16s %4zu %4zu %4u %2u\n",

Nit: if we're tinkering with this, it's probably worth updating the
whole function to use sysfs_emit{_at}().

>  				 pool->name, blocks,
> -				 (size_t) pages *
> -				 (pool->allocation / pool->size),
> +				 (size_t) pages * pool->blks_per_alloc,
>  				 pool->size, pages);
>  		size -= temp;
>  		next += temp;
> @@ -168,6 +168,9 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
>  	retval->size = size;
>  	retval->boundary = boundary;
>  	retval->allocation = allocation;
> +	retval->blks_per_alloc =
> +		(allocation / boundary) * (boundary / size) +
> +		(allocation % boundary) / size;

Do we really need to store this? Sure, 4 divisions (which could
possibly be fewer given the constraints on boundary) isn't the absolute
cheapest calculation, but I still can't imagine anyone would be polling
sysfs stats hard enough to even notice.

Thanks,
Robin.

>
>  	INIT_LIST_HEAD(&retval->pools);
>
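As a worked illustration of the divergence being fixed (the pool parameters below are made up, not taken from the patch or this discussion): with allocation = 4096, boundary = 1024 and a block size of 192, the old "allocation / size" estimate reports 21 blocks per page, while the boundary-aware calculation counts only the 20 blocks that actually fit without crossing a 1024-byte boundary. A minimal standalone sketch of the two calculations:

#include <stdio.h>

int main(void)
{
	/* Hypothetical pool parameters for illustration only. */
	unsigned int allocation = 4096, boundary = 1024, size = 192;

	/* Old statistic: ignores the boundary entirely. */
	unsigned int old_blks = allocation / size;	/* 21 */

	/*
	 * New blks_per_alloc: a block may not straddle a boundary, so
	 * count whole blocks per boundary-sized chunk, plus whatever
	 * fits in the remainder of the allocation.
	 */
	unsigned int new_blks = (allocation / boundary) * (boundary / size) +
				(allocation % boundary) / size;	/* 20 */

	printf("old=%u new=%u\n", old_blks, new_blks);
	return 0;
}

That per-allocation over-count is the gap between the "total blocks in pool" and "total blocks in use" statistics that the commit message describes when a boundary is in effect.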