From: John Hubbard <jhubbard@nvidia.com>
Date: Thu, 19 Jan 2023 13:19:06 -0800
Subject: Re: [PATCH 1/8] mm/gup: have internal functions get the mmap_read_lock()
To: Jason Gunthorpe
CC: Alistair Popple, linux-mm@kvack.org
Message-ID: <6b1f8b54-031f-d0ea-f393-86f5a23aa93d@nvidia.com>
In-Reply-To: <1-v1-dd94f8f0d5ad+716-gup_tidy_jgg@nvidia.com>
References: <1-v1-dd94f8f0d5ad+716-gup_tidy_jgg@nvidia.com>

On 1/17/23 07:58, Jason Gunthorpe wrote:
> __get_user_pages_locked() and __gup_longterm_locked() both require the
> mmap lock to be held. They have a slightly unusual locked parameter that
> is used to allow these functions to unlock and relock the mmap lock and
> convey that fact to the caller.
> 
> Several places wrapper these functions with a simple mmap_read_lock() just
> so they can follow the optimized locked protocol.
> 
> Consolidate this internally to the functions. Allow internal callers to
> set locked = 0 to cause the functions to obtain and release the lock on
> their own.
> 
> Reorganize __gup_longterm_locked() to use the autolocking in
> __get_user_pages_locked().
> 
> Replace all the places obtaining the mmap_read_lock() just to call
> __get_user_pages_locked() with the new mechanism. Replace all the internal
> callers of get_user_pages_unlocked() with direct calls to
> __gup_longterm_locked() using the new mechanism.
> 
> A following patch will add assertions ensuring the external interface
> continues to always pass in locked = 1.
> 
> Signed-off-by: Jason Gunthorpe
> ---
>  mm/gup.c | 92 +++++++++++++++++++++++++++++---------------------------
>  1 file changed, 47 insertions(+), 45 deletions(-)

Nice cleanup already in just this first patch...

> 
> diff --git a/mm/gup.c b/mm/gup.c
> index f45a3a5be53a48..3a9f764165f50b 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1343,13 +1343,22 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
>  						unsigned int flags)
>  {
>  	long ret, pages_done;
> -	bool lock_dropped;
> +	bool lock_dropped = false;

A small thing, but the naming and the associated comments later are now
inaccurate. Let's add function-level documentation to explain the new
calling convention for __get_user_pages_locked(), change the name of
lock_dropped to must_unlock, add a comment mid-function, and correct the
comment at the end of the function.
Like this:

diff --git a/mm/gup.c b/mm/gup.c
index c0f86ff7848a..5b7c09be7442 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1331,8 +1331,17 @@ bool gup_signal_pending(unsigned int flags)
 }
 
 /*
- * Please note that this function, unlike __get_user_pages will not
- * return 0 for nr_pages > 0 without FOLL_NOWAIT
+ * Locking: (*locked == 1) means that the mmap_lock has already been acquired by
+ * the caller. This function may drop the mmap_lock. If it does so, then it will
+ * set (*locked = 0).
+ *
+ * (*locked == 0) means that the caller expects this function to acquire and
+ * drop the mmap_lock. Therefore, the value of *locked will still be zero when
+ * the function returns, even though it may have changed temporarily during
+ * function execution.
+ *
+ * Please note that this function, unlike __get_user_pages(), will not
+ * return 0 for nr_pages > 0, unless FOLL_NOWAIT is used.
  */
 long __get_user_pages_locked(struct mm_struct *mm,
 						unsigned long start,
@@ -1343,7 +1352,7 @@ long __get_user_pages_locked(struct mm_struct *mm,
 						unsigned int flags)
 {
 	long ret, pages_done;
-	bool lock_dropped = false;
+	bool must_unlock = false;
 
 	if (locked) {
 		/* if VM_FAULT_RETRY can be returned, vmas become invalid */
@@ -1357,7 +1366,7 @@ long __get_user_pages_locked(struct mm_struct *mm,
 	if (locked && !*locked) {
 		if (mmap_read_lock_killable(mm))
 			return -EAGAIN;
-		lock_dropped = true;
+		must_unlock = true;
 		*locked = 1;
 	}
 
@@ -1412,7 +1421,9 @@ long __get_user_pages_locked(struct mm_struct *mm,
 		if (likely(pages))
 			pages += ret;
 		start += ret << PAGE_SHIFT;
-		lock_dropped = true;
+
+		/* The lock was temporarily dropped, so we must unlock later: */
+		must_unlock = true;
 
 retry:
 		/*
@@ -1459,10 +1470,11 @@ long __get_user_pages_locked(struct mm_struct *mm,
 		pages++;
 		start += PAGE_SIZE;
 	}
-	if (lock_dropped && *locked) {
+	if (must_unlock && *locked) {
 		/*
-		 * We must let the caller know we temporarily dropped the lock
-		 * and so the critical section protected by it was lost.
+		 * We either temporarily dropped the lock, or the caller
+		 * requested that we both acquire and drop the lock. Either way,
+		 * we must now unlock, and notify the caller of that state.
 		 */
 		mmap_read_unlock(mm);
 		*locked = 0;

...

> @@ -3180,14 +3183,13 @@ EXPORT_SYMBOL(pin_user_pages);
>   long pin_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
>   			       struct page **pages, unsigned int gup_flags)
>   {
> -	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
> -	if (WARN_ON_ONCE(gup_flags & FOLL_GET))
> -		return -EINVAL;

Why are we dropping the assertion check for FOLL_GET?

thanks,
-- 
John Hubbard
NVIDIA
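
To make the two calling modes in the proposed comment concrete, here is a
minimal sketch of how an internal caller might use them. The two wrapper
functions below are hypothetical illustrations, not code from the patch;
the __get_user_pages_locked() parameter list is assumed to match the
then-current mm/gup.c, and the snippet is written as if it lived inside
mm/gup.c (so it relies on that file's existing includes and the internal
declaration of __get_user_pages_locked()):

/* Illustrative only: hypothetical internal callers, inside mm/gup.c. */

/*
 * Mode 1: *locked == 1 -- the caller has already taken the mmap_lock.
 * On return, *locked tells the caller whether it still holds the lock.
 */
static long gup_example_caller_locks(struct mm_struct *mm, unsigned long start,
				     unsigned long nr_pages,
				     struct page **pages,
				     unsigned int gup_flags)
{
	int locked = 1;
	long ret;

	mmap_read_lock(mm);
	ret = __get_user_pages_locked(mm, start, nr_pages, pages, NULL,
				      &locked, gup_flags);
	/* If the function dropped the lock, it set *locked to 0. */
	if (locked)
		mmap_read_unlock(mm);
	return ret;
}

/*
 * Mode 2: *locked == 0 -- the function acquires and releases the mmap_lock
 * itself. *locked is still 0 on return, even though it was set to 1
 * temporarily while the lock was held inside the function.
 */
static long gup_example_self_locking(struct mm_struct *mm, unsigned long start,
				     unsigned long nr_pages,
				     struct page **pages,
				     unsigned int gup_flags)
{
	int locked = 0;

	return __get_user_pages_locked(mm, start, nr_pages, pages, NULL,
				       &locked, gup_flags);
}

The sketch also shows why must_unlock is the better name: the flag covers
both the case where the lock was temporarily dropped and re-taken, and the
case where the function acquired the lock on the caller's behalf and must
release it before returning.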