From: Robin Murphy <robin.murphy@arm.com>
To: Ajay Kumar <ajaykumar.rs@samsung.com>,
iommu@lists.linux-foundation.org, linux-mm@kvack.org
Cc: Sathyam Panda <sathya.panda@samsung.com>, shaik.ameer@samsung.com
Subject: Re: [RFC PATCH] drivers: iommu: reset cached node if dma_mask is changed
Date: Thu, 7 May 2020 14:37:16 +0100 [thread overview]
Message-ID: <30e2a563-df52-3fc1-3d59-adc2dc75beff@arm.com> (raw)
In-Reply-To: <20200504183759.42924-1-ajaykumar.rs@samsung.com>
On 2020-05-04 7:37 pm, Ajay Kumar wrote:
> The current IOVA allocation code stores a cached copy of the
> first allocated IOVA address node, and all subsequent allocations
> have no way to get past (higher than) the first allocated IOVA range.
Strictly they do, after that first allocation gets freed, or if the
first limit was <=32 bits and the subsequent limit >32 bits ;)
> This causes an issue when the dma_mask for the master device is
> changed. Though the DMA window is increased, the allocation code,
> unaware of the change, goes ahead allocating IOVA addresses lower
> than the first allocated IOVA address.
>
> This patch adds a check for a dma_mask change in the IOVA allocation
> function and resets the cached IOVA node to the anchor node every
> time a dma_mask change is observed.
This isn't the right approach, since limit_pfn is by design a transient
per-allocation thing. Devices with different limits may well be
allocating from the same IOVA domain concurrently, which is the whole
reason for maintaining two cached nodes to serve the expected PCI case
of mixing 32-bit and 64-bit limits. Trying to track a per-allocation
property on a per-domain basis is just going to thrash and massively
hurt such cases.
A somewhat more appropriate fix to the allocation loop itself has been
proposed here:
https://lore.kernel.org/linux-iommu/1588795317-20879-1-git-send-email-vjitta@codeaurora.org/
Robin.
> NOTE:
> This patch is needed to address the issue discussed in below thread:
> https://www.spinics.net/lists/iommu/msg43586.html
>
> Signed-off-by: Ajay Kumar <ajaykumar.rs@samsung.com>
> Signed-off-by: Sathyam Panda <sathya.panda@samsung.com>
> ---
> drivers/iommu/iova.c | 17 ++++++++++++++++-
> include/linux/iova.h | 1 +
> 2 files changed, 17 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index 41c605b0058f..0e99975036ae 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -44,6 +44,7 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
> iovad->granule = granule;
> iovad->start_pfn = start_pfn;
> iovad->dma_32bit_pfn = 1UL << (32 - iova_shift(iovad));
> + iovad->curr_limit_pfn = iovad->dma_32bit_pfn;
> iovad->max32_alloc_size = iovad->dma_32bit_pfn;
> iovad->flush_cb = NULL;
> iovad->fq = NULL;
> @@ -116,9 +117,20 @@ EXPORT_SYMBOL_GPL(init_iova_flush_queue);
> static struct rb_node *
> __get_cached_rbnode(struct iova_domain *iovad, unsigned long limit_pfn)
> {
> - if (limit_pfn <= iovad->dma_32bit_pfn)
> + if (limit_pfn <= iovad->dma_32bit_pfn) {
> + /* re-init cached node if DMA limit has changed */
> + if (limit_pfn != iovad->curr_limit_pfn) {
> + iovad->cached32_node = &iovad->anchor.node;
> + iovad->curr_limit_pfn = limit_pfn;
> + }
> return iovad->cached32_node;
> + }
>
> + /* re-init cached node if DMA limit has changed */
> + if (limit_pfn != iovad->curr_limit_pfn) {
> + iovad->cached_node = &iovad->anchor.node;
> + iovad->curr_limit_pfn = limit_pfn;
> + }
> return iovad->cached_node;
> }
>
> @@ -190,6 +202,9 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
> if (size_aligned)
> align_mask <<= fls_long(size - 1);
>
> + if (limit_pfn != iovad->curr_limit_pfn)
> + iovad->max32_alloc_size = iovad->dma_32bit_pfn;
> +
> /* Walk the tree backwards */
> spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
> if (limit_pfn <= iovad->dma_32bit_pfn &&
> diff --git a/include/linux/iova.h b/include/linux/iova.h
> index a0637abffee8..be2220c096ef 100644
> --- a/include/linux/iova.h
> +++ b/include/linux/iova.h
> @@ -73,6 +73,7 @@ struct iova_domain {
> unsigned long granule; /* pfn granularity for this domain */
> unsigned long start_pfn; /* Lower limit for this domain */
> unsigned long dma_32bit_pfn;
> + unsigned long curr_limit_pfn; /* Current max limit for this domain */
> unsigned long max32_alloc_size; /* Size of last failed allocation */
> struct iova_fq __percpu *fq; /* Flush Queue */
>
>