From mboxrd@z Thu Jan 1 00:00:00 1970
From: lizhe.67@bytedance.com
To: alex.williamson@redhat.com, akpm@linux-foundation.org, david@redhat.com,
	peterx@redhat.com, jgg@ziepe.ca
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	lizhe.67@bytedance.com
Subject: [PATCH v2 2/5] vfio/type1: optimize vfio_pin_pages_remote()
Date: Fri, 4 Jul 2025 14:25:59 +0800
Message-ID: <20250704062602.33500-3-lizhe.67@bytedance.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20250704062602.33500-1-lizhe.67@bytedance.com>
References: <20250704062602.33500-1-lizhe.67@bytedance.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
From: Li Zhe

When vfio_pin_pages_remote() is called with a range of addresses that
includes large folios, the function currently performs individual
statistics counting operations for each page. This can lead to
significant performance overhead, especially when dealing with large
ranges of pages. Batching these statistics counting operations can
effectively enhance performance.

In addition, the pages obtained through longterm GUP are neither
invalid nor reserved. Therefore, we can reduce the overhead associated
with some of the calls to is_invalid_reserved_pfn().

The performance test results for completing a 16G VFIO IOMMU DMA
mapping are as follows.

Base (v6.16-rc4):
------- AVERAGE (MADV_HUGEPAGE) --------
VFIO MAP DMA in 0.047 s (340.2 GB/s)
------- AVERAGE (MAP_POPULATE) --------
VFIO MAP DMA in 0.280 s (57.2 GB/s)
------- AVERAGE (HUGETLBFS) --------
VFIO MAP DMA in 0.052 s (310.5 GB/s)

With this patch:
------- AVERAGE (MADV_HUGEPAGE) --------
VFIO MAP DMA in 0.027 s (602.1 GB/s)
------- AVERAGE (MAP_POPULATE) --------
VFIO MAP DMA in 0.257 s (62.4 GB/s)
------- AVERAGE (HUGETLBFS) --------
VFIO MAP DMA in 0.031 s (517.4 GB/s)

For large folios, this yields an over 40% performance improvement. For
small folios, the performance test results indicate a slight
improvement.
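
Note for readers of this patch in isolation: num_pages_contiguous(),
called in the hunk below, is added by an earlier patch in this series
and is not part of this diff. As a rough, illustrative sketch only (the
helper actually introduced by the series may differ in details), such a
function just counts how many leading entries of a page array refer to
physically contiguous pages:

#include <linux/mm.h>

/*
 * Illustrative sketch, not the helper added by this series.
 * Assumes nr_pages >= 1, which the caller below guarantees because the
 * loop only runs while batch->size is non-zero.
 */
static inline unsigned long num_pages_contiguous(struct page **pages,
                                                 unsigned long nr_pages)
{
        unsigned long first_pfn = page_to_pfn(pages[0]);
        unsigned long i;

        for (i = 1; i < nr_pages; i++) {
                if (page_to_pfn(pages[i]) != first_pfn + i)
                        break;
        }

        return i;
}

With such a helper, a run of pages backed by a large folio collapses
into a single accounting and advance step rather than one per page.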
Signed-off-by: Li Zhe
Co-developed-by: Alex Williamson
Signed-off-by: Alex Williamson
---
 drivers/vfio/vfio_iommu_type1.c | 83 ++++++++++++++++++++++++++++-----
 1 file changed, 71 insertions(+), 12 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 1136d7ac6b59..03fce54e1372 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -318,7 +318,13 @@ static void vfio_dma_bitmap_free_all(struct vfio_iommu *iommu)
 /*
  * Helper Functions for host iova-pfn list
  */
-static struct vfio_pfn *vfio_find_vpfn(struct vfio_dma *dma, dma_addr_t iova)
+
+/*
+ * Find the highest vfio_pfn that overlaps the range
+ * [iova_start, iova_end) in the rb tree.
+ */
+static struct vfio_pfn *vfio_find_vpfn_range(struct vfio_dma *dma,
+                dma_addr_t iova_start, dma_addr_t iova_end)
 {
         struct vfio_pfn *vpfn;
         struct rb_node *node = dma->pfn_list.rb_node;
@@ -326,9 +332,9 @@ static struct vfio_pfn *vfio_find_vpfn(struct vfio_dma *dma, dma_addr_t iova)
         while (node) {
                 vpfn = rb_entry(node, struct vfio_pfn, node);
 
-                if (iova < vpfn->iova)
+                if (iova_end <= vpfn->iova)
                         node = node->rb_left;
-                else if (iova > vpfn->iova)
+                else if (iova_start > vpfn->iova)
                         node = node->rb_right;
                 else
                         return vpfn;
@@ -336,6 +342,11 @@ static struct vfio_pfn *vfio_find_vpfn(struct vfio_dma *dma, dma_addr_t iova)
         return NULL;
 }
 
+static inline struct vfio_pfn *vfio_find_vpfn(struct vfio_dma *dma, dma_addr_t iova)
+{
+        return vfio_find_vpfn_range(dma, iova, iova + PAGE_SIZE);
+}
+
 static void vfio_link_pfn(struct vfio_dma *dma,
                           struct vfio_pfn *new)
 {
@@ -614,6 +625,39 @@ static long vaddr_get_pfns(struct mm_struct *mm, unsigned long vaddr,
         return ret;
 }
 
+static long vpfn_pages(struct vfio_dma *dma,
+                dma_addr_t iova_start, long nr_pages)
+{
+        dma_addr_t iova_end = iova_start + (nr_pages << PAGE_SHIFT);
+        struct vfio_pfn *top = vfio_find_vpfn_range(dma, iova_start, iova_end);
+        long ret = 1;
+        struct vfio_pfn *vpfn;
+        struct rb_node *prev;
+        struct rb_node *next;
+
+        if (likely(!top))
+                return 0;
+
+        prev = next = &top->node;
+
+        while ((prev = rb_prev(prev))) {
+                vpfn = rb_entry(prev, struct vfio_pfn, node);
+                if (vpfn->iova < iova_start)
+                        break;
+                ret++;
+        }
+
+        while ((next = rb_next(next))) {
+                vpfn = rb_entry(next, struct vfio_pfn, node);
+                if (vpfn->iova >= iova_end)
+                        break;
+                ret++;
+        }
+
+        return ret;
+}
+
 /*
  * Attempt to pin pages. We really don't want to track all the pfns and
  * the iommu can only map chunks of consecutive pfns anyway, so get the
@@ -680,32 +724,47 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
                  * and rsvd here, and therefore continues to use the batch.
                  */
                 while (true) {
+                        long nr_pages, acct_pages = 0;
+
                         if (pfn != *pfn_base + pinned ||
                             rsvd != is_invalid_reserved_pfn(pfn))
                                 goto out;
 
+                        /*
+                         * Using GUP with FOLL_LONGTERM in
+                         * vaddr_get_pfns() will not return invalid
+                         * or reserved pages.
+                         */
+                        nr_pages = num_pages_contiguous(
+                                        &batch->pages[batch->offset],
+                                        batch->size);
+                        if (!rsvd) {
+                                acct_pages = nr_pages;
+                                acct_pages -= vpfn_pages(dma, iova, nr_pages);
+                        }
+
                         /*
                          * Reserved pages aren't counted against the user,
                          * externally pinned pages are already counted against
                          * the user.
                          */
-                        if (!rsvd && !vfio_find_vpfn(dma, iova)) {
+                        if (acct_pages) {
                                 if (!dma->lock_cap &&
-                                    mm->locked_vm + lock_acct + 1 > limit) {
+                                    mm->locked_vm + lock_acct + acct_pages > limit) {
                                         pr_warn("%s: RLIMIT_MEMLOCK (%ld) exceeded\n",
                                                 __func__, limit << PAGE_SHIFT);
                                         ret = -ENOMEM;
                                         goto unpin_out;
                                 }
-                                lock_acct++;
+                                lock_acct += acct_pages;
                         }
 
-                        pinned++;
-                        npage--;
-                        vaddr += PAGE_SIZE;
-                        iova += PAGE_SIZE;
-                        batch->offset++;
-                        batch->size--;
+                        pinned += nr_pages;
+                        npage -= nr_pages;
+                        vaddr += PAGE_SIZE * nr_pages;
+                        iova += PAGE_SIZE * nr_pages;
+                        batch->offset += nr_pages;
+                        batch->size -= nr_pages;
 
                         if (!batch->size)
                                 break;
-- 
2.20.1