Message-ID: <1284adf3-7e93-4530-9921-408c5eaeb337@kernel.org>
Date: Fri, 18 Apr 2025 17:02:38 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v8 24/24] nvme-pci: optimize single-segment handling
To: Leon Romanovsky, Marek Szyprowski, Jens Axboe, Christoph Hellwig,
 Keith Busch
Cc: Kanchan Joshi, Jake Edge, Jonathan Corbet, Jason Gunthorpe, Zhu Yanjun,
 Robin Murphy, Joerg Roedel, Will Deacon, Sagi Grimberg, Bjorn Helgaas,
 Logan Gunthorpe, Yishai Hadas, Shameer Kolothum, Kevin Tian,
 Alex Williamson, Jérôme Glisse, Andrew Morton, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
 linux-rdma@vger.kernel.org, iommu@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org,
 kvm@vger.kernel.org, linux-mm@kvack.org, Niklas Schnelle, Chuck Lever,
 Luis Chamberlain, Matthew Wilcox, Dan Williams, Chaitanya Kulkarni,
 Nitesh Shetty, Leon Romanovsky
References: <670389227a033bd5b7c5aa55191aac9943244028.1744825142.git.leon@kernel.org>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <670389227a033bd5b7c5aa55191aac9943244028.1744825142.git.leon@kernel.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 4/18/25 15:47, Leon Romanovsky wrote:
> From: Kanchan Joshi
> 
> blk_rq_dma_map API is costly for single-segment requests.
> Avoid using it and map the bio_vec directly.
> 
> Signed-off-by: Kanchan Joshi
> Signed-off-by: Nitesh Shetty
> Signed-off-by: Leon Romanovsky
> ---
>  drivers/nvme/host/pci.c | 65 +++++++++++++++++++++++++++++++++++++----
>  1 file changed, 60 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 8d99a8f871ea..cf020de82962 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -216,6 +216,11 @@ struct nvme_queue {
>  	struct completion delete_done;
>  };
>  
> +enum {
> +	IOD_LARGE_DESCRIPTORS = 1, /* uses the full page sized descriptor pool */
> +	IOD_SINGLE_SEGMENT = 2, /* single segment dma mapping */
> +};
> +
>  /*
>   * The nvme_iod describes the data in an I/O.
>   */
> @@ -224,7 +229,7 @@ struct nvme_iod {
>  	struct nvme_command cmd;
>  	bool aborted;
>  	u8 nr_descriptors;	/* # of PRP/SGL descriptors */
> -	bool large_descriptors;	/* uses the full page sized descriptor pool */
> +	unsigned int flags;
>  	unsigned int total_len;	/* length of the entire transfer */
>  	unsigned int total_meta_len;	/* length of the entire metadata transfer */
>  	dma_addr_t meta_dma;
> @@ -529,7 +534,7 @@ static inline bool nvme_pci_use_sgls(struct nvme_dev *dev, struct request *req,
>  static inline struct dma_pool *nvme_dma_pool(struct nvme_dev *dev,
>  		struct nvme_iod *iod)
>  {
> -	if (iod->large_descriptors)
> +	if (iod->flags & IOD_LARGE_DESCRIPTORS)
>  		return dev->prp_page_pool;
>  	return dev->prp_small_pool;
>  }
> @@ -630,6 +635,15 @@ static void nvme_free_sgls(struct nvme_dev *dev, struct request *req)
>  static void nvme_unmap_data(struct nvme_dev *dev, struct request *req)
>  {
>  	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
> +	unsigned int nr_segments = blk_rq_nr_phys_segments(req);
> +	dma_addr_t dma_addr;
> +
> +	if (nr_segments == 1 && (iod->flags & IOD_SINGLE_SEGMENT)) {

nvme_pci_setup_prps() calls nvme_try_setup_prp_simple() which sets
IOD_SINGLE_SEGMENT if and only if the req has a single phys segment.
So why do you need to count the segments again here? Looking at the
flag only should be enough, no?
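Something like the below (untested sketch, reusing only the flag and the
fields this patch already introduces) is what I have in mind:

	if (iod->flags & IOD_SINGLE_SEGMENT) {
		/* Single-segment mapping done in nvme_try_setup_prp_simple() */
		dma_unmap_page(dev->dev,
			       le64_to_cpu(iod->cmd.common.dptr.prp1),
			       iod->total_len, rq_dma_dir(req));
		return;
	}

That would also make the nr_segments and dma_addr local variables in
nvme_unmap_data() unnecessary.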
> +		dma_addr = le64_to_cpu(iod->cmd.common.dptr.prp1);
> +		dma_unmap_page(dev->dev, dma_addr, iod->total_len,
> +			       rq_dma_dir(req));
> +		return;
> +	}
>  
>  	if (!blk_rq_dma_unmap(req, dev->dev, &iod->dma_state, iod->total_len)) {
>  		if (iod->cmd.common.flags & NVME_CMD_SGL_METABUF)
> @@ -642,6 +656,41 @@ static void nvme_unmap_data(struct nvme_dev *dev, struct request *req)
>  	nvme_free_descriptors(dev, req);
>  }
>  
> +static bool nvme_try_setup_prp_simple(struct nvme_dev *dev, struct request *req,
> +				      struct nvme_rw_command *cmnd,
> +				      struct blk_dma_iter *iter)
> +{
> +	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
> +	struct bio_vec bv = req_bvec(req);
> +	unsigned int first_prp_len;
> +
> +	if (is_pci_p2pdma_page(bv.bv_page))
> +		return false;
> +	if ((bv.bv_offset & (NVME_CTRL_PAGE_SIZE - 1)) + bv.bv_len >
> +	    NVME_CTRL_PAGE_SIZE * 2)
> +		return false;
> +
> +	iter->addr = dma_map_bvec(dev->dev, &bv, rq_dma_dir(req), 0);
> +	if (dma_mapping_error(dev->dev, iter->addr)) {
> +		iter->status = BLK_STS_RESOURCE;
> +		goto out;
> +	}
> +	iod->total_len = bv.bv_len;
> +	cmnd->dptr.prp1 = cpu_to_le64(iter->addr);
> +
> +	first_prp_len = NVME_CTRL_PAGE_SIZE -
> +		(bv.bv_offset & (NVME_CTRL_PAGE_SIZE - 1));
> +	if (bv.bv_len > first_prp_len)
> +		cmnd->dptr.prp2 = cpu_to_le64(iter->addr + first_prp_len);
> +	else
> +		cmnd->dptr.prp2 = 0;
> +
> +	iter->status = BLK_STS_OK;
> +	iod->flags |= IOD_SINGLE_SEGMENT;
> +out:
> +	return true;
> +}
> +
>  static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev,
>  		struct request *req)
>  {
> @@ -652,6 +701,12 @@ static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev,
>  	dma_addr_t prp1_dma, prp2_dma = 0;
>  	unsigned int prp_len, i;
>  	__le64 *prp_list;
> +	unsigned int nr_segments = blk_rq_nr_phys_segments(req);
> +
> +	if (nr_segments == 1) {
> +		if (nvme_try_setup_prp_simple(dev, req, cmnd, &iter))
> +			return iter.status;
> +	}
>  
>  	if (!blk_rq_dma_map_iter_start(req, dev->dev, &iod->dma_state, &iter))
>  		return iter.status;
> @@ -693,7 +748,7 @@ static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev,
>  
>  	if (DIV_ROUND_UP(length, NVME_CTRL_PAGE_SIZE) >
>  	    NVME_SMALL_DESCRIPTOR_SIZE / sizeof(__le64))
> -		iod->large_descriptors = true;
> +		iod->flags |= IOD_LARGE_DESCRIPTORS;
>  
>  	prp_list = dma_pool_alloc(nvme_dma_pool(dev, iod), GFP_ATOMIC,
>  			&prp2_dma);
> @@ -808,7 +863,7 @@ static blk_status_t nvme_pci_setup_sgls(struct nvme_dev *dev,
>  	}
>  
>  	if (entries > NVME_SMALL_DESCRIPTOR_SIZE / sizeof(*sg_list))
> -		iod->large_descriptors = true;
> +		iod->flags |= IOD_LARGE_DESCRIPTORS;
>  
>  	sg_list = dma_pool_alloc(nvme_dma_pool(dev, iod), GFP_ATOMIC, &sgl_dma);
>  	if (!sg_list)
> @@ -932,7 +987,7 @@ static blk_status_t nvme_prep_rq(struct nvme_dev *dev, struct request *req)
>  
>  	iod->aborted = false;
>  	iod->nr_descriptors = 0;
> -	iod->large_descriptors = false;
> +	iod->flags = 0;
>  	iod->total_len = 0;
>  	iod->total_meta_len = 0;
> 

-- 
Damien Le Moal
Western Digital Research