Date: Tue, 22 Apr 2025 07:00:50 +0200
From: Christoph Hellwig <hch@lst.de>
To: Leon Romanovsky
Cc: Marek Szyprowski, Jens Axboe, Christoph Hellwig, Keith Busch,
	Jake Edge, Jonathan Corbet, Jason Gunthorpe, Zhu Yanjun,
	Robin Murphy, Joerg Roedel, Will Deacon, Sagi Grimberg,
	Bjorn Helgaas, Logan Gunthorpe, Yishai Hadas, Shameer Kolothum,
	Kevin Tian, Alex Williamson, Jérôme Glisse, Andrew Morton,
	Niklas Schnelle, Chuck Lever, Luis Chamberlain, Matthew Wilcox,
	Dan Williams, Kanchan Joshi, Chaitanya Kulkarni,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
	iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
	linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v8 23/24] nvme-pci: convert to blk_rq_dma_map
Message-ID: <20250422050050.GB28077@lst.de>
User-Agent: Mutt/1.5.17 (2007-11-01)
> +		dma_len = min_t(u32, length, NVME_CTRL_PAGE_SIZE - (dma_addr & (NVME_CTRL_PAGE_SIZE - 1)));

An overly long line slipped in here during one of the rebases.

> +	/*
> +	 * We are in this mode as IOVA path wasn't taken and DMA length
> +	 * is morethan two sectors. In such case, mapping was perfoormed
> +	 * per-NVME_CTRL_PAGE_SIZE, so unmap accordingly.
> +	 */

Where does this comment come from?  Lots of spelling errors, and I also
don't understand what it is talking about, as sectors are entirely
irrelevant here.
> +	if (!blk_rq_dma_unmap(req, dev->dev, &iod->dma_state, iod->total_len)) {
> +		if (iod->cmd.common.flags & NVME_CMD_SGL_METABUF)
> +			nvme_free_sgls(dev, req);

With the addition of metadata SGL support this also needs to check
NVME_CMD_SGL_METASEG.

The commit message should also really mention that someone significantly
altered the patch for merging with the latest upstream, as I, the nominal
author, can't recognize some of that code.

> +	unsigned int entries = req->nr_integrity_segments;
>  	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>
> +	if (!blk_rq_dma_unmap(req, dev->dev, &iod->dma_meta_state,
> +			      iod->total_meta_len)) {
> +		if (entries == 1) {
> +			dma_unmap_page(dev->dev, iod->meta_dma,
> +				       rq_integrity_vec(req).bv_len,
> +				       rq_dma_dir(req));
> +			return;
> 		}
> 	}
>
> +	dma_pool_free(dev->prp_small_pool, iod->meta_list, iod->meta_dma);

This now doesn't unmap for entries > 1 in the non-IOVA case.