From mboxrd@z Thu Jan 1 00:00:00 1970
From: Vivek Kasireddy
To: dri-devel@lists.freedesktop.org, linux-mm@kvack.org
Cc: Vivek Kasireddy, David Hildenbrand, Matthew Wilcox, Daniel Vetter,
	Hugh Dickins, Peter Xu, Jason Gunthorpe, Gerd Hoffmann,
	Dongwon Kim, Junxiao Chang, Dave Airlie
Subject: [PATCH v16 8/9] udmabuf: Pin the pages using memfd_pin_folios() API
Date: Sun, 23 Jun 2024 23:36:16 -0700
Message-ID: <20240624063952.1572359-9-vivek.kasireddy@intel.com>
X-Mailer: git-send-email 2.45.1
In-Reply-To: <20240624063952.1572359-1-vivek.kasireddy@intel.com>
References: <20240624063952.1572359-1-vivek.kasireddy@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Using memfd_pin_folios() ensures that the pages are pinned correctly
with FOLL_PIN. It also ensures that we don't accidentally break features
such as memory hotunplug, because it does not allow pinning pages in the
movable zone.

Using this new API also simplifies the code, as we no longer have to
deal with extracting individual pages from their mappings or handle the
shmem and hugetlb cases separately.
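For reference, a minimal sketch (not part of this patch) of the
memfd_pin_folios() calling convention the driver now relies on; the
helper name pin_memfd_range() and the debug print are illustrative only:

#include <linux/memfd.h>
#include <linux/mm.h>

/* Illustrative helper, not part of this patch. */
static long pin_memfd_range(struct file *memfd, loff_t start, size_t size,
			    struct folio **folios, unsigned int max_folios)
{
	loff_t end = start + size - 1;	/* the range is inclusive of the last byte */
	pgoff_t pgoff;			/* byte offset of 'start' within the first folio */
	long nr_folios;

	/* Pins the backing folios with FOLL_PIN; shmem and hugetlb
	 * memfds are handled by the same call. */
	nr_folios = memfd_pin_folios(memfd, start, end, folios,
				     max_folios, &pgoff);
	if (nr_folios <= 0)
		return nr_folios ? nr_folios : -EINVAL;

	/* A large (e.g. hugetlb) folio can back many PAGE_SIZE chunks. */
	pr_debug("pinned %ld folios; first folio spans %ld pages\n",
		 nr_folios, folio_nr_pages(folios[0]));

	return nr_folios;
}

Pins taken this way are dropped later with unpin_folio()/unpin_folios(),
which is what the new unpin_all_folios() helper below does for each
distinct folio.
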
Cc: David Hildenbrand
Cc: Matthew Wilcox
Cc: Daniel Vetter
Cc: Hugh Dickins
Cc: Peter Xu
Cc: Jason Gunthorpe
Cc: Gerd Hoffmann
Cc: Dongwon Kim
Cc: Junxiao Chang
Acked-by: Dave Airlie
Acked-by: Gerd Hoffmann
Signed-off-by: Vivek Kasireddy
---
 drivers/dma-buf/udmabuf.c | 155 ++++++++++++++++++++------------------
 1 file changed, 80 insertions(+), 75 deletions(-)

diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
index e67515808ed3..047c3cd2ceff 100644
--- a/drivers/dma-buf/udmabuf.c
+++ b/drivers/dma-buf/udmabuf.c
@@ -30,6 +30,12 @@ struct udmabuf {
 	struct sg_table *sg;
 	struct miscdevice *device;
 	pgoff_t *offsets;
+	struct list_head unpin_list;
+};
+
+struct udmabuf_folio {
+	struct folio *folio;
+	struct list_head list;
 };
 
 static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
@@ -153,17 +159,43 @@ static void unmap_udmabuf(struct dma_buf_attachment *at,
 	return put_sg_table(at->dev, sg, direction);
 }
 
+static void unpin_all_folios(struct list_head *unpin_list)
+{
+	struct udmabuf_folio *ubuf_folio;
+
+	while (!list_empty(unpin_list)) {
+		ubuf_folio = list_first_entry(unpin_list,
+					      struct udmabuf_folio, list);
+		unpin_folio(ubuf_folio->folio);
+
+		list_del(&ubuf_folio->list);
+		kfree(ubuf_folio);
+	}
+}
+
+static int add_to_unpin_list(struct list_head *unpin_list,
+			     struct folio *folio)
+{
+	struct udmabuf_folio *ubuf_folio;
+
+	ubuf_folio = kzalloc(sizeof(*ubuf_folio), GFP_KERNEL);
+	if (!ubuf_folio)
+		return -ENOMEM;
+
+	ubuf_folio->folio = folio;
+	list_add_tail(&ubuf_folio->list, unpin_list);
+	return 0;
+}
+
 static void release_udmabuf(struct dma_buf *buf)
 {
 	struct udmabuf *ubuf = buf->priv;
 	struct device *dev = ubuf->device->this_device;
-	pgoff_t pg;
 
 	if (ubuf->sg)
 		put_sg_table(dev, ubuf->sg, DMA_BIDIRECTIONAL);
 
-	for (pg = 0; pg < ubuf->pagecount; pg++)
-		folio_put(ubuf->folios[pg]);
+	unpin_all_folios(&ubuf->unpin_list);
 	kfree(ubuf->offsets);
 	kfree(ubuf->folios);
 	kfree(ubuf);
@@ -218,64 +250,6 @@ static const struct dma_buf_ops udmabuf_ops = {
 #define SEALS_WANTED (F_SEAL_SHRINK)
 #define SEALS_DENIED (F_SEAL_WRITE)
 
-static int handle_hugetlb_pages(struct udmabuf *ubuf, struct file *memfd,
-				pgoff_t offset, pgoff_t pgcnt,
-				pgoff_t *pgbuf)
-{
-	struct hstate *hpstate = hstate_file(memfd);
-	pgoff_t mapidx = offset >> huge_page_shift(hpstate);
-	pgoff_t subpgoff = (offset & ~huge_page_mask(hpstate)) >> PAGE_SHIFT;
-	pgoff_t maxsubpgs = huge_page_size(hpstate) >> PAGE_SHIFT;
-	struct folio *folio = NULL;
-	pgoff_t pgidx;
-
-	mapidx <<= huge_page_order(hpstate);
-	for (pgidx = 0; pgidx < pgcnt; pgidx++) {
-		if (!folio) {
-			folio = __filemap_get_folio(memfd->f_mapping,
-						    mapidx,
-						    FGP_ACCESSED, 0);
-			if (IS_ERR(folio))
-				return PTR_ERR(folio);
-		}
-
-		folio_get(folio);
-		ubuf->folios[*pgbuf] = folio;
-		ubuf->offsets[*pgbuf] = subpgoff << PAGE_SHIFT;
-		(*pgbuf)++;
-		if (++subpgoff == maxsubpgs) {
-			folio_put(folio);
-			folio = NULL;
-			subpgoff = 0;
-			mapidx += pages_per_huge_page(hpstate);
-		}
-	}
-
-	if (folio)
-		folio_put(folio);
-
-	return 0;
-}
-
-static int handle_shmem_pages(struct udmabuf *ubuf, struct file *memfd,
-			      pgoff_t offset, pgoff_t pgcnt,
-			      pgoff_t *pgbuf)
-{
-	pgoff_t pgidx, pgoff = offset >> PAGE_SHIFT;
-	struct folio *folio = NULL;
-
-	for (pgidx = 0; pgidx < pgcnt; pgidx++) {
-		folio = shmem_read_folio(memfd->f_mapping, pgoff + pgidx);
-		if (IS_ERR(folio))
-			return PTR_ERR(folio);
-
-		ubuf->folios[*pgbuf] = folio;
-		(*pgbuf)++;
-	}
-
-	return 0;
-}
-
 static int check_memfd_seals(struct file *memfd)
 {
 	int seals;
@@ -321,16 +295,19 @@ static long udmabuf_create(struct miscdevice *device,
 			   struct udmabuf_create_list *head,
 			   struct udmabuf_create_item *list)
 {
-	pgoff_t pgcnt, pgbuf = 0, pglimit;
+	pgoff_t pgoff, pgcnt, pglimit, pgbuf = 0;
+	long nr_folios, ret = -EINVAL;
 	struct file *memfd = NULL;
+	struct folio **folios;
 	struct udmabuf *ubuf;
-	int ret = -EINVAL;
-	u32 i, flags;
+	u32 i, j, k, flags;
+	loff_t end;
 
 	ubuf = kzalloc(sizeof(*ubuf), GFP_KERNEL);
 	if (!ubuf)
 		return -ENOMEM;
 
+	INIT_LIST_HEAD(&ubuf->unpin_list);
 	pglimit = (size_limit_mb * 1024 * 1024) >> PAGE_SHIFT;
 	for (i = 0; i < head->count; i++) {
 		if (!IS_ALIGNED(list[i].offset, PAGE_SIZE))
@@ -366,17 +343,46 @@ static long udmabuf_create(struct miscdevice *device,
 			goto err;
 
 		pgcnt = list[i].size >> PAGE_SHIFT;
-		if (is_file_hugepages(memfd))
-			ret = handle_hugetlb_pages(ubuf, memfd,
-						   list[i].offset,
-						   pgcnt, &pgbuf);
-		else
-			ret = handle_shmem_pages(ubuf, memfd,
-						 list[i].offset,
-						 pgcnt, &pgbuf);
-		if (ret < 0)
+		folios = kmalloc_array(pgcnt, sizeof(*folios), GFP_KERNEL);
+		if (!folios) {
+			ret = -ENOMEM;
 			goto err;
+		}
 
+		end = list[i].offset + (pgcnt << PAGE_SHIFT) - 1;
+		ret = memfd_pin_folios(memfd, list[i].offset, end,
+				       folios, pgcnt, &pgoff);
+		if (ret <= 0) {
+			kfree(folios);
+			if (!ret)
+				ret = -EINVAL;
+			goto err;
+		}
+
+		nr_folios = ret;
+		pgoff >>= PAGE_SHIFT;
+		for (j = 0, k = 0; j < pgcnt; j++) {
+			ubuf->folios[pgbuf] = folios[k];
+			ubuf->offsets[pgbuf] = pgoff << PAGE_SHIFT;
+
+			if (j == 0 || ubuf->folios[pgbuf-1] != folios[k]) {
+				ret = add_to_unpin_list(&ubuf->unpin_list,
+							folios[k]);
+				if (ret < 0) {
+					kfree(folios);
+					goto err;
+				}
+			}
+
+			pgbuf++;
+			if (++pgoff == folio_nr_pages(folios[k])) {
+				pgoff = 0;
+				if (++k == nr_folios)
+					break;
+			}
+		}
+
+		kfree(folios);
 		fput(memfd);
 		memfd = NULL;
 	}
@@ -389,10 +395,9 @@ static long udmabuf_create(struct miscdevice *device,
 	return ret;
 
 err:
-	while (pgbuf > 0)
-		folio_put(ubuf->folios[--pgbuf]);
 	if (memfd)
 		fput(memfd);
+	unpin_all_folios(&ubuf->unpin_list);
 	kfree(ubuf->offsets);
 	kfree(ubuf->folios);
 	kfree(ubuf);
-- 
2.45.1
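
For context, the path changed here is typically exercised from userspace
roughly as follows (illustrative only, not part of this patch; error
handling is trimmed). The driver requires the memfd to carry
F_SEAL_SHRINK and to not carry F_SEAL_WRITE, per SEALS_WANTED and
SEALS_DENIED above:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/udmabuf.h>

int main(void)
{
	size_t size = 16 * 4096;	/* must be PAGE_SIZE aligned */
	struct udmabuf_create create;
	int memfd, devfd, buffd;

	/* Sealable memfd; the driver insists on F_SEAL_SHRINK. */
	memfd = memfd_create("udmabuf-test", MFD_ALLOW_SEALING);
	ftruncate(memfd, size);
	fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);

	devfd = open("/dev/udmabuf", O_RDWR);

	memset(&create, 0, sizeof(create));
	create.memfd  = memfd;
	create.flags  = UDMABUF_FLAGS_CLOEXEC;
	create.offset = 0;
	create.size   = size;

	/* The pages backing [offset, offset + size) get pinned via
	 * memfd_pin_folios() and are exported as a dma-buf fd. */
	buffd = ioctl(devfd, UDMABUF_CREATE, &create);
	if (buffd < 0)
		perror("UDMABUF_CREATE");
	else
		close(buffd);

	close(devfd);
	close(memfd);
	return 0;
}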