From mboxrd@z Thu Jan 1 00:00:00 1970
From: John Hubbard
To: Andrew Morton
CC: Al Viro, Alex Williamson, Benjamin Herrenschmidt, Björn Töpel,
    Christoph Hellwig, Dan Williams, Daniel Vetter, Dave Chinner,
    David Airlie, David S. Miller, Ira Weiny, Jan Kara, Jason Gunthorpe,
    Jens Axboe, Jonathan Corbet, Jérôme Glisse, Magnus Karlsson,
    Mauro Carvalho Chehab, Michael Ellerman, Michal Hocko, Mike Kravetz,
    Paul Mackerras, Shuah Khan, Vlastimil Babka, LKML, John Hubbard
Subject: [PATCH v11 19/25] vfio, mm: pin_user_pages (FOLL_PIN) and put_user_page() conversion
Date: Mon, 16 Dec 2019 14:25:31 -0800
Message-ID: <20191216222537.491123-20-jhubbard@nvidia.com>
In-Reply-To: <20191216222537.491123-1-jhubbard@nvidia.com>
References: <20191216222537.491123-1-jhubbard@nvidia.com>

1. Change vfio from get_user_pages_remote(), to pin_user_pages_remote().

2. Because all FOLL_PIN-acquired pages must be released via
   put_user_page(), also convert the put_page() call over to
   put_user_pages_dirty_lock().

Note that this effectively changes the code's behavior in
vfio_iommu_type1.c: put_pfn(): it now ultimately calls
set_page_dirty_lock(), instead of set_page_dirty(). This is probably
more accurate.

As Christoph Hellwig put it, "set_page_dirty() is only safe if we are
dealing with a file backed page where we have reference on the inode it
hangs off." [1]

[1] https://lore.kernel.org/r/20190723153640.GB720@lst.de

Tested-by: Alex Williamson
Acked-by: Alex Williamson
Signed-off-by: John Hubbard
---
 drivers/vfio/vfio_iommu_type1.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)
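
For context (not part of this patch): the caller-side rule the conversion
follows is that pages acquired with FOLL_PIN must be released with
put_user_page*(), never plain put_page(). A minimal sketch of that pairing
is below. example_pin_and_dirty() and its error handling are hypothetical,
and it uses pin_user_pages() on current->mm for brevity where the patch
itself uses pin_user_pages_remote(); pin_user_pages() and
put_user_pages_dirty_lock() are the real APIs this series introduces and
builds on.

/* Hypothetical helper, sketched only to illustrate the FOLL_PIN pairing. */
#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/sched.h>

static int example_pin_and_dirty(unsigned long vaddr, bool writable)
{
	unsigned int gup_flags = FOLL_LONGTERM | (writable ? FOLL_WRITE : 0);
	struct page *page;
	long ret;

	/* pin_user_pages() sets FOLL_PIN internally; callers do not pass it. */
	down_read(&current->mm->mmap_sem);
	ret = pin_user_pages(vaddr, 1, gup_flags, &page, NULL);
	up_read(&current->mm->mmap_sem);
	if (ret != 1)
		return ret < 0 ? ret : -EFAULT;

	/* ... access the pinned page (e.g. via DMA) here ... */

	/*
	 * FOLL_PIN pages must be released with put_user_page*(), never
	 * plain put_page(). Passing "writable" makes this call use
	 * set_page_dirty_lock() before dropping the pin.
	 */
	put_user_pages_dirty_lock(&page, 1, writable);
	return 0;
}

Because put_user_pages_dirty_lock() takes the make_dirty decision as an
argument, the dirty-marking and the release collapse into a single call,
which is what lets the put_pfn() hunk below drop the separate
SetPageDirty() and put_page() steps.
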
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index b800fc9a0251..18bfc2fc8e6d 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -309,9 +309,8 @@ static int put_pfn(unsigned long pfn, int prot)
 {
 	if (!is_invalid_reserved_pfn(pfn)) {
 		struct page *page = pfn_to_page(pfn);
-		if (prot & IOMMU_WRITE)
-			SetPageDirty(page);
-		put_page(page);
+
+		put_user_pages_dirty_lock(&page, 1, prot & IOMMU_WRITE);
 		return 1;
 	}
 	return 0;
@@ -329,7 +328,7 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
 		flags |= FOLL_WRITE;
 
 	down_read(&mm->mmap_sem);
-	ret = get_user_pages_remote(NULL, mm, vaddr, 1, flags | FOLL_LONGTERM,
+	ret = pin_user_pages_remote(NULL, mm, vaddr, 1, flags | FOLL_LONGTERM,
 				    page, NULL, NULL);
 	if (ret == 1) {
 		*pfn = page_to_pfn(page[0]);
-- 
2.24.1