From: Ajay kumar
Date: Thu, 7 May 2020 23:14:14 +0530
Subject: Re: [RFC PATCH] drivers: iommu: reset cached node if dma_mask is changed
To: Robin Murphy
Cc: Ajay Kumar, iommu@lists.linux-foundation.org, linux-mm@kvack.org, shaik.ameer@samsung.com, Sathyam Panda
In-Reply-To: <30e2a563-df52-3fc1-3d59-adc2dc75beff@arm.com>
References: <20200504183759.42924-1-ajaykumar.rs@samsung.com> <30e2a563-df52-3fc1-3d59-adc2dc75beff@arm.com>

Hi Robin,

On 5/7/20, Robin Murphy wrote:
> On 2020-05-04 7:37 pm, Ajay Kumar wrote:
>> The current IOVA allocation code stores a cached copy of the
>> first allocated IOVA address node, and all the subsequent allocations
>> have no way to get past (higher than) the first allocated IOVA range.
>
> Strictly they do, after that first allocation gets freed, or if the
> first limit was <=32 bits and the subsequent limit >32 bits ;)

In my case, the first allocated buffer is the firmware buffer, and it is
never released.

>> This causes an issue when the dma_mask for the master device is changed.
>> Though the DMA window is increased, the allocation code, unaware of
>> the change, goes ahead and allocates IOVA addresses lower than the
>> first allocated IOVA address.
>>
>> This patch adds a check for a dma_mask change in the IOVA allocation
>> function and resets the cached IOVA node to the anchor node every time
>> a dma_mask change is observed.
>
> This isn't the right approach, since limit_pfn is by design a transient
> per-allocation thing. Devices with different limits may well be
> allocating from the same IOVA domain concurrently, which is the whole
> reason for maintaining two cached nodes to serve the expected PCI case
> of mixing 32-bit and 64-bit limits. Trying to track a per-allocation
> property on a per-domain basis is just going to thrash and massively
> hurt such cases.

Agreed. But if two or more non-PCI devices share the same domain, and
one of the devices has a higher addressable limit, the current logic
still forces it to take the lower window.

> A somewhat more appropriate fix to the allocation loop itself has been
> proposed here:
>
> https://lore.kernel.org/linux-iommu/1588795317-20879-1-git-send-email-vjitta@codeaurora.org/

This patch addresses my problem indirectly. I shall add my concerns there.
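To make the scenario a little more concrete, here is a rough sketch of the
flow I described above. The mask widths, buffer sizes and function names
below are purely illustrative (not taken from our actual driver); the point
is only that when both the old and the new limit stay at or below
dma_32bit_pfn, the cached node never lets later allocations climb above the
first, never-freed buffer.

/*
 * Illustrative flow only: a non-PCI master behind an IOMMU, where the
 * firmware buffer is allocated first under a narrow mask (30-bit here,
 * just as an example) and is never freed.
 */
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/sizes.h>

static int example_fw_setup(struct device *dev, dma_addr_t *fw_iova)
{
        void *fw_cpu;
        int ret;

        /* Firmware buffer: allocated under the narrow mask, held forever. */
        ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(30));
        if (ret)
                return ret;

        fw_cpu = dma_alloc_coherent(dev, SZ_1M, fw_iova, GFP_KERNEL);
        if (!fw_cpu)
                return -ENOMEM;

        /* The device can actually address more, so widen the mask now. */
        return dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
}

static int example_runtime_alloc(struct device *dev)
{
        dma_addr_t iova;
        void *cpu;

        /*
         * Both the old and the new limit are <= dma_32bit_pfn, so
         * __get_cached_rbnode() keeps returning cached32_node, which
         * still points at the firmware buffer. The backwards walk in
         * __alloc_and_insert_iova_range() then clamps limit_pfn below
         * that buffer, so every allocation here is confined to the
         * window under the firmware buffer and the extra space opened
         * up by the wider mask is never used.
         */
        cpu = dma_alloc_coherent(dev, SZ_16M, &iova, GFP_KERNEL);
        if (!cpu)
                return -ENOMEM; /* fails once that lower window fills up */

        return 0;
}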
Ajay

>> NOTE:
>> This patch is needed to address the issue discussed in the thread below:
>> https://www.spinics.net/lists/iommu/msg43586.html
>>
>> Signed-off-by: Ajay Kumar
>> Signed-off-by: Sathyam Panda
>> ---
>>  drivers/iommu/iova.c | 17 ++++++++++++++++-
>>  include/linux/iova.h |  1 +
>>  2 files changed, 17 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
>> index 41c605b0058f..0e99975036ae 100644
>> --- a/drivers/iommu/iova.c
>> +++ b/drivers/iommu/iova.c
>> @@ -44,6 +44,7 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
>>          iovad->granule = granule;
>>          iovad->start_pfn = start_pfn;
>>          iovad->dma_32bit_pfn = 1UL << (32 - iova_shift(iovad));
>> +        iovad->curr_limit_pfn = iovad->dma_32bit_pfn;
>>          iovad->max32_alloc_size = iovad->dma_32bit_pfn;
>>          iovad->flush_cb = NULL;
>>          iovad->fq = NULL;
>> @@ -116,9 +117,20 @@ EXPORT_SYMBOL_GPL(init_iova_flush_queue);
>>  static struct rb_node *
>>  __get_cached_rbnode(struct iova_domain *iovad, unsigned long limit_pfn)
>>  {
>> -        if (limit_pfn <= iovad->dma_32bit_pfn)
>> +        if (limit_pfn <= iovad->dma_32bit_pfn) {
>> +                /* re-init cached node if DMA limit has changed */
>> +                if (limit_pfn != iovad->curr_limit_pfn) {
>> +                        iovad->cached32_node = &iovad->anchor.node;
>> +                        iovad->curr_limit_pfn = limit_pfn;
>> +                }
>>                  return iovad->cached32_node;
>> +        }
>>
>> +        /* re-init cached node if DMA limit has changed */
>> +        if (limit_pfn != iovad->curr_limit_pfn) {
>> +                iovad->cached_node = &iovad->anchor.node;
>> +                iovad->curr_limit_pfn = limit_pfn;
>> +        }
>>          return iovad->cached_node;
>>  }
>>
>> @@ -190,6 +202,9 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
>>          if (size_aligned)
>>                  align_mask <<= fls_long(size - 1);
>>
>> +        if (limit_pfn != iovad->curr_limit_pfn)
>> +                iovad->max32_alloc_size = iovad->dma_32bit_pfn;
>> +
>>          /* Walk the tree backwards */
>>          spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
>>          if (limit_pfn <= iovad->dma_32bit_pfn &&
>> diff --git a/include/linux/iova.h b/include/linux/iova.h
>> index a0637abffee8..be2220c096ef 100644
>> --- a/include/linux/iova.h
>> +++ b/include/linux/iova.h
>> @@ -73,6 +73,7 @@ struct iova_domain {
>>          unsigned long granule;          /* pfn granularity for this domain */
>>          unsigned long start_pfn;        /* Lower limit for this domain */
>>          unsigned long dma_32bit_pfn;
>> +        unsigned long curr_limit_pfn;   /* Current max limit for this domain */
>>          unsigned long max32_alloc_size; /* Size of last failed allocation */
>>          struct iova_fq __percpu *fq;    /* Flush Queue */
>>