Date: Wed, 13 May 2020 10:33:08 +0200
From: Joerg Roedel
To: Ajay Kumar, Robin Murphy
Cc: iommu@lists.linux-foundation.org, linux-mm@kvack.org, robin.murphy@arm.com, shaik.ameer@samsung.com, shaik.samsung@gmail.com, Sathyam Panda
Subject: Re: [RFC PATCH] drivers: iommu: reset cached node if dma_mask is changed
Message-ID: <20200513083308.GA9820@8bytes.org>
References: <20200504183759.42924-1-ajaykumar.rs@samsung.com>
In-Reply-To: <20200504183759.42924-1-ajaykumar.rs@samsung.com>

Adding Robin.

On Tue, May 05, 2020 at 12:07:59AM +0530, Ajay Kumar wrote:
> The current IOVA allocation code stores a cached copy of the
> first allocated IOVA address node, and all subsequent allocations
> have no way to get past (higher than) the first allocated IOVA range.
>
> This causes an issue when the dma_mask for the master device is
> changed: though the DMA window is increased, the allocation code,
> unaware of the change, goes ahead allocating IOVA addresses lower
> than the first allocated IOVA address.
>
> This patch adds a check for a dma_mask change in the IOVA allocation
> function and resets the cached IOVA node to the anchor node every
> time a dma_mask change is observed.
>
> NOTE:
> This patch is needed to address the issue discussed in the thread below:
> https://www.spinics.net/lists/iommu/msg43586.html
>
> Signed-off-by: Ajay Kumar
> Signed-off-by: Sathyam Panda
> ---
>  drivers/iommu/iova.c | 17 ++++++++++++++++-
>  include/linux/iova.h |  1 +
>  2 files changed, 17 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index 41c605b0058f..0e99975036ae 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -44,6 +44,7 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
>  	iovad->granule = granule;
>  	iovad->start_pfn = start_pfn;
>  	iovad->dma_32bit_pfn = 1UL << (32 - iova_shift(iovad));
> +	iovad->curr_limit_pfn = iovad->dma_32bit_pfn;
>  	iovad->max32_alloc_size = iovad->dma_32bit_pfn;
>  	iovad->flush_cb = NULL;
>  	iovad->fq = NULL;
> @@ -116,9 +117,20 @@ EXPORT_SYMBOL_GPL(init_iova_flush_queue);
>  static struct rb_node *
>  __get_cached_rbnode(struct iova_domain *iovad, unsigned long limit_pfn)
>  {
> -	if (limit_pfn <= iovad->dma_32bit_pfn)
> +	if (limit_pfn <= iovad->dma_32bit_pfn) {
> +		/* re-init cached node if DMA limit has changed */
> +		if (limit_pfn != iovad->curr_limit_pfn) {
> +			iovad->cached32_node = &iovad->anchor.node;
> +			iovad->curr_limit_pfn = limit_pfn;
> +		}
>  		return iovad->cached32_node;
> +	}
>
> +	/* re-init cached node if DMA limit has changed */
> +	if (limit_pfn != iovad->curr_limit_pfn) {
> +		iovad->cached_node = &iovad->anchor.node;
> +		iovad->curr_limit_pfn = limit_pfn;
> +	}
>  	return iovad->cached_node;
>  }
>
> @@ -190,6 +202,9 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
>  	if (size_aligned)
>  		align_mask <<= fls_long(size - 1);
>
> +	if (limit_pfn != iovad->curr_limit_pfn)
> +		iovad->max32_alloc_size = iovad->dma_32bit_pfn;
> +
>  	/* Walk the tree backwards */
>  	spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
>  	if (limit_pfn <= iovad->dma_32bit_pfn &&
> diff --git a/include/linux/iova.h b/include/linux/iova.h
> index a0637abffee8..be2220c096ef 100644
> --- a/include/linux/iova.h
> +++ b/include/linux/iova.h
> @@ -73,6 +73,7 @@ struct iova_domain {
>  	unsigned long	granule;	/* pfn granularity for this domain */
>  	unsigned long	start_pfn;	/* Lower limit for this domain */
>  	unsigned long	dma_32bit_pfn;
> +	unsigned long	curr_limit_pfn;	/* Current max limit for this domain */
>  	unsigned long	max32_alloc_size; /* Size of last failed allocation */
>  	struct iova_fq __percpu *fq;	/* Flush Queue */
>
> --
> 2.17.1