Message-ID: <91a12ee0-d30e-42f0-82fc-e06f6ffd5700@linux.dev>
Date: Sat, 6 Jul 2024 08:16:35 +0800
Subject: Re: [PATCH for-next] RDMA/cxgb4: use dma_mmap_coherent() for mapping non-contiguous memory
To: Anumula Murali Mohan Reddy <anumula@chelsio.com>, jgg@nvidia.com, leonro@nvidia.com
Cc: linux-rdma@vger.kernel.org, Potnuri Bharat Teja <bharat@chelsio.com>, Linux Memory Management List <linux-mm@kvack.org>
References: <20240705131753.15550-1-anumula@chelsio.com>
From: Zhu Yanjun <yanjun.zhu@linux.dev>
In-Reply-To: <20240705131753.15550-1-anumula@chelsio.com>

On 2024/7/5 21:17, Anumula Murali Mohan Reddy wrote:
> dma_alloc_coherent() allocates contiguous memory irrespective of
> iommu mode, but after commit f5ff79fddf0e ("dma-mapping: remove
> CONFIG_DMA_REMAP") if iommu is enabled in translate mode,

CC linux-mm@kvack.org

Zhu Yanjun

> dma_alloc_coherent() may allocate non-contiguous memory.
> Attempt to map this memory results in panic.
> This patch fixes the issue by using dma_mmap_coherent() to map each page
> to user space.
>
> Fixes: f5ff79fddf0e ("dma-mapping: remove CONFIG_DMA_REMAP")
> Signed-off-by: Anumula Murali Mohan Reddy <anumula@chelsio.com>
> Signed-off-by: Potnuri Bharat Teja <bharat@chelsio.com>
> ---
>  drivers/infiniband/hw/cxgb4/cq.c       |  4 +++
>  drivers/infiniband/hw/cxgb4/iw_cxgb4.h |  2 ++
>  drivers/infiniband/hw/cxgb4/provider.c | 48 +++++++++++++++++++++-----
>  drivers/infiniband/hw/cxgb4/qp.c       | 14 ++++++++
>  4 files changed, 59 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/infiniband/hw/cxgb4/cq.c b/drivers/infiniband/hw/cxgb4/cq.c
> index 5111421f9473..81cfc876fa89 100644
> --- a/drivers/infiniband/hw/cxgb4/cq.c
> +++ b/drivers/infiniband/hw/cxgb4/cq.c
> @@ -1127,12 +1127,16 @@ int c4iw_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
>
>  		mm->key = uresp.key;
>  		mm->addr = virt_to_phys(chp->cq.queue);
> +		mm->vaddr = chp->cq.queue;
> +		mm->dma_addr = chp->cq.dma_addr;
>  		mm->len = chp->cq.memsize;
>  		insert_mmap(ucontext, mm);
>
>  		mm2->key = uresp.gts_key;
>  		mm2->addr = chp->cq.bar2_pa;
>  		mm2->len = PAGE_SIZE;
> +		mm2->vaddr = NULL;
> +		mm2->dma_addr = 0;
>  		insert_mmap(ucontext, mm2);
>  	}
>
> diff --git a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h
> index f838bb6718af..5eedc6cf0f8c 100644
> --- a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h
> +++ b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h
> @@ -536,6 +536,8 @@ struct c4iw_mm_entry {
>  	struct list_head entry;
>  	u64 addr;
>  	u32 key;
> +	void *vaddr;
> +	dma_addr_t dma_addr;
>  	unsigned len;
>  };
>
> diff --git a/drivers/infiniband/hw/cxgb4/provider.c b/drivers/infiniband/hw/cxgb4/provider.c
> index 246b739ddb2b..6227775970c9 100644
> --- a/drivers/infiniband/hw/cxgb4/provider.c
> +++ b/drivers/infiniband/hw/cxgb4/provider.c
> @@ -131,6 +131,10 @@ static int c4iw_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
>  	struct c4iw_mm_entry *mm;
>  	struct c4iw_ucontext *ucontext;
>  	u64 addr;
> +	size_t size;
> +	void *vaddr;
> +	unsigned long vm_pgoff;
> +	dma_addr_t dma_addr;
>
>  	pr_debug("pgoff 0x%lx key 0x%x len %d\n", vma->vm_pgoff,
>  		 key, len);
> @@ -145,6 +149,9 @@ static int c4iw_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
>  	if (!mm)
>  		return -EINVAL;
>  	addr = mm->addr;
> +	vaddr = mm->vaddr;
> +	dma_addr = mm->dma_addr;
> +	size = mm->len;
>  	kfree(mm);
>
>  	if ((addr >= pci_resource_start(rdev->lldi.pdev, 0)) &&
> @@ -155,9 +162,17 @@ static int c4iw_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
>  		 * MA_SYNC register...
>  		 */
>  		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
> -		ret = io_remap_pfn_range(vma, vma->vm_start,
> -					 addr >> PAGE_SHIFT,
> -					 len, vma->vm_page_prot);
> +		if (vaddr && is_vmalloc_addr(vaddr)) {
> +			vm_pgoff = vma->vm_pgoff;
> +			vma->vm_pgoff = 0;
> +			ret = dma_mmap_coherent(&rdev->lldi.pdev->dev, vma,
> +						vaddr, dma_addr, size);
> +			vma->vm_pgoff = vm_pgoff;
> +		} else {
> +			ret = io_remap_pfn_range(vma, vma->vm_start,
> +						 addr >> PAGE_SHIFT,
> +						 len, vma->vm_page_prot);
> +		}
>  	} else if ((addr >= pci_resource_start(rdev->lldi.pdev, 2)) &&
>  		   (addr < (pci_resource_start(rdev->lldi.pdev, 2) +
>  			    pci_resource_len(rdev->lldi.pdev, 2)))) {
> @@ -175,17 +190,32 @@ static int c4iw_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
>  			vma->vm_page_prot =
>  				pgprot_noncached(vma->vm_page_prot);
>  		}
> -		ret = io_remap_pfn_range(vma, vma->vm_start,
> -					 addr >> PAGE_SHIFT,
> -					 len, vma->vm_page_prot);
> +		if (vaddr && is_vmalloc_addr(vaddr)) {
> +			vm_pgoff = vma->vm_pgoff;
> +			vma->vm_pgoff = 0;
> +			ret = dma_mmap_coherent(&rdev->lldi.pdev->dev, vma,
> +						vaddr, dma_addr, size);
> +			vma->vm_pgoff = vm_pgoff;
> +		} else {
> +			ret = io_remap_pfn_range(vma, vma->vm_start,
> +						 addr >> PAGE_SHIFT,
> +						 len, vma->vm_page_prot);
> +		}
>  	} else {
>
>  		/*
>  		 * Map WQ or CQ contig dma memory...
>  		 */
> -		ret = remap_pfn_range(vma, vma->vm_start,
> -				      addr >> PAGE_SHIFT,
> -				      len, vma->vm_page_prot);
> +		if (vaddr && is_vmalloc_addr(vaddr)) {
> +			vm_pgoff = vma->vm_pgoff;
> +			vma->vm_pgoff = 0;
> +			ret = dma_mmap_coherent(&rdev->lldi.pdev->dev, vma,
> +						vaddr, dma_addr, size);
> +		} else {
> +			ret = remap_pfn_range(vma, vma->vm_start,
> +					      addr >> PAGE_SHIFT,
> +					      len, vma->vm_page_prot);
> +		}
>  	}
>
>  	return ret;
> diff --git a/drivers/infiniband/hw/cxgb4/qp.c b/drivers/infiniband/hw/cxgb4/qp.c
> index d16d8eaa1415..3f6fb4b34d5a 100644
> --- a/drivers/infiniband/hw/cxgb4/qp.c
> +++ b/drivers/infiniband/hw/cxgb4/qp.c
> @@ -2282,16 +2282,22 @@ int c4iw_create_qp(struct ib_qp *qp, struct ib_qp_init_attr *attrs,
>  			goto err_free_ma_sync_key;
>  		sq_key_mm->key = uresp.sq_key;
>  		sq_key_mm->addr = qhp->wq.sq.phys_addr;
> +		sq_key_mm->vaddr = qhp->wq.sq.queue;
> +		sq_key_mm->dma_addr = qhp->wq.sq.dma_addr;
>  		sq_key_mm->len = PAGE_ALIGN(qhp->wq.sq.memsize);
>  		insert_mmap(ucontext, sq_key_mm);
>  		if (!attrs->srq) {
>  			rq_key_mm->key = uresp.rq_key;
>  			rq_key_mm->addr = virt_to_phys(qhp->wq.rq.queue);
> +			rq_key_mm->vaddr = qhp->wq.rq.queue;
> +			rq_key_mm->dma_addr = qhp->wq.rq.dma_addr;
>  			rq_key_mm->len = PAGE_ALIGN(qhp->wq.rq.memsize);
>  			insert_mmap(ucontext, rq_key_mm);
>  		}
>  		sq_db_key_mm->key = uresp.sq_db_gts_key;
>  		sq_db_key_mm->addr = (u64)(unsigned long)qhp->wq.sq.bar2_pa;
> +		sq_db_key_mm->vaddr = NULL;
> +		sq_db_key_mm->dma_addr = 0;
>  		sq_db_key_mm->len = PAGE_SIZE;
>  		insert_mmap(ucontext, sq_db_key_mm);
>  		if (!attrs->srq) {
> @@ -2299,6 +2305,8 @@ int c4iw_create_qp(struct ib_qp *qp, struct ib_qp_init_attr *attrs,
>  			rq_db_key_mm->addr =
>  				(u64)(unsigned long)qhp->wq.rq.bar2_pa;
>  			rq_db_key_mm->len = PAGE_SIZE;
> +			rq_db_key_mm->vaddr = NULL;
> +			rq_db_key_mm->dma_addr = 0;
>  			insert_mmap(ucontext, rq_db_key_mm);
>  		}
>  		if (ma_sync_key_mm) {
> @@ -2307,6 +2315,8 @@ int c4iw_create_qp(struct ib_qp *qp, struct ib_qp_init_attr *attrs,
>  				(pci_resource_start(rhp->rdev.lldi.pdev, 0) +
>  				 PCIE_MA_SYNC_A) & PAGE_MASK;
>  			ma_sync_key_mm->len = PAGE_SIZE;
> +			ma_sync_key_mm->vaddr = NULL;
> +			ma_sync_key_mm->dma_addr = 0;
>  			insert_mmap(ucontext, ma_sync_key_mm);
>  		}
>
> @@ -2763,10 +2773,14 @@ int c4iw_create_srq(struct ib_srq *ib_srq, struct ib_srq_init_attr *attrs,
>  		srq_key_mm->key = uresp.srq_key;
>  		srq_key_mm->addr = virt_to_phys(srq->wq.queue);
>  		srq_key_mm->len = PAGE_ALIGN(srq->wq.memsize);
> +		srq_key_mm->vaddr = srq->wq.queue;
> +		srq_key_mm->dma_addr = srq->wq.dma_addr;
>  		insert_mmap(ucontext, srq_key_mm);
>  		srq_db_key_mm->key = uresp.srq_db_gts_key;
>  		srq_db_key_mm->addr = (u64)(unsigned long)srq->wq.bar2_pa;
>  		srq_db_key_mm->len = PAGE_SIZE;
> +		srq_db_key_mm->vaddr = NULL;
> +		srq_db_key_mm->dma_addr = 0;
>  		insert_mmap(ucontext, srq_db_key_mm);
>  	}
>