From: John Hubbard <jhubbard@nvidia.com>
Date: Tue, 24 Jan 2023 18:30:40 -0800
Subject: Re: [PATCH v2 05/13] mm/gup: simplify the external interface functions and consolidate invariants
Message-ID: <35e0ad12-6c78-c067-1430-b22311ce9a48@nvidia.com>
In-Reply-To: <5-v2-987e91b59705+36b-gup_tidy_jgg@nvidia.com>
References: <5-v2-987e91b59705+36b-gup_tidy_jgg@nvidia.com>
To: Jason Gunthorpe
Cc: Alistair Popple, David Hildenbrand, David Howells, Christoph Hellwig, linux-mm@kvack.org, "Mike Rapoport (IBM)"
On 1/24/23 12:34, Jason Gunthorpe wrote:
> The GUP family of functions have a complex, but fairly well defined, set
> of invariants for their arguments. Currently these are sprinkled about,
> sometimes in duplicate through many functions.
>
> Internally we don't follow all the invariants that the external interface
> has to follow, so place these checks directly at the exported
> interface. This ensures the internal functions never reach a violated
> invariant.
>
> Remove the duplicated invariant checks.
>
> The end result is to make these functions fully internal:
>  __get_user_pages_locked()
>  internal_get_user_pages_fast()
>  __gup_longterm_locked()
>
> And all the other functions call directly into one of these.
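The consolidation pattern is worth spelling out for readers skimming the
diff below: every exported wrapper funnels its arguments through a single
checking helper and then calls an internal worker that never re-checks.
Here is a minimal standalone C sketch of that shape, with hypothetical
names and flag values -- this is not the actual mm/gup.c code:

  #include <errno.h>
  #include <stdbool.h>
  #include <stdio.h>

  #define FLAG_GET (1u << 0)  /* stand-ins for the real FOLL_* flags */
  #define FLAG_PIN (1u << 1)

  /* Internal worker: trusts that invariants were enforced at the boundary. */
  static long gup_worker(unsigned long start, long nr_pages, unsigned int flags)
  {
          (void)start;
          (void)flags;
          return nr_pages;  /* pretend every page was pinned */
  }

  /*
   * One validation helper shared by every exported wrapper, mirroring the
   * role of is_valid_gup_args(): OR in the flags this entry point requires,
   * then reject caller-visible violations exactly once.
   */
  static bool args_valid(unsigned int *flags, unsigned int to_set)
  {
          *flags |= to_set;
          /* GET and PIN are mutually exclusive */
          if ((*flags & (FLAG_GET | FLAG_PIN)) == (FLAG_GET | FLAG_PIN))
                  return false;
          return true;
  }

  /* Exported interface: the only place the invariants are checked. */
  long pin_pages(unsigned long start, long nr_pages, unsigned int flags)
  {
          if (!args_valid(&flags, FLAG_PIN))
                  return -EINVAL;
          return gup_worker(start, nr_pages, flags);
  }

  int main(void)
  {
          printf("%ld\n", pin_pages(0, 4, 0));         /* 4 */
          printf("%ld\n", pin_pages(0, 4, FLAG_GET));  /* -EINVAL */
          return 0;
  }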
>
> Suggested-by: John Hubbard
> Acked-by: Mike Rapoport (IBM)
> Signed-off-by: Jason Gunthorpe
> ---
>  mm/gup.c         | 153 +++++++++++++++++++++++------------------------
>  mm/huge_memory.c |  10 ----
>  2 files changed, 75 insertions(+), 88 deletions(-)

Reviewed-by: John Hubbard <jhubbard@nvidia.com>

thanks,
-- 
John Hubbard
NVIDIA

>
> diff --git a/mm/gup.c b/mm/gup.c
> index a6559d7243db92..4c236fb83dcd3e 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -215,7 +215,6 @@ int __must_check try_grab_page(struct page *page, unsigned int flags)
>  {
>  	struct folio *folio = page_folio(page);
>
> -	WARN_ON_ONCE((flags & (FOLL_GET | FOLL_PIN)) == (FOLL_GET | FOLL_PIN));
>  	if (WARN_ON_ONCE(folio_ref_count(folio) <= 0))
>  		return -ENOMEM;
>
> @@ -818,7 +817,7 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
>  	if (vma_is_secretmem(vma))
>  		return NULL;
>
> -	if (foll_flags & FOLL_PIN)
> +	if (WARN_ON_ONCE(foll_flags & FOLL_PIN))
>  		return NULL;
>
>  	page = follow_page_mask(vma, address, foll_flags, &ctx);
> @@ -975,9 +974,6 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
>  	if ((gup_flags & FOLL_LONGTERM) && vma_is_fsdax(vma))
>  		return -EOPNOTSUPP;
>
> -	if ((gup_flags & FOLL_LONGTERM) && (gup_flags & FOLL_PCI_P2PDMA))
> -		return -EOPNOTSUPP;
> -
>  	if (vma_is_secretmem(vma))
>  		return -EFAULT;
>
> @@ -1354,11 +1350,6 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
>  	long ret, pages_done;
>  	bool must_unlock = false;
>
> -	if (locked) {
> -		/* if VM_FAULT_RETRY can be returned, vmas become invalid */
> -		BUG_ON(vmas);
> -	}
> -
>  	/*
>  	 * The internal caller expects GUP to manage the lock internally and the
>  	 * lock must be released when this returns.
> @@ -2087,16 +2078,6 @@ static long __gup_longterm_locked(struct mm_struct *mm,
>  		return __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
>  					       locked, gup_flags);
>
> -	/*
> -	 * If we get to this point then FOLL_LONGTERM is set, and FOLL_LONGTERM
> -	 * implies FOLL_PIN (although the reverse is not true). Therefore it is
> -	 * correct to unconditionally call check_and_migrate_movable_pages()
> -	 * which assumes pages have been pinned via FOLL_PIN.
> -	 *
> -	 * Enforce the above reasoning by asserting that FOLL_PIN is set.
> -	 */
> -	if (WARN_ON(!(gup_flags & FOLL_PIN)))
> -		return -EINVAL;
>  	flags = memalloc_pin_save();
>  	do {
>  		nr_pinned_pages = __get_user_pages_locked(mm, start, nr_pages,
> @@ -2106,28 +2087,66 @@ static long __gup_longterm_locked(struct mm_struct *mm,
>  			rc = nr_pinned_pages;
>  			break;
>  		}
> +
> +		/* FOLL_LONGTERM implies FOLL_PIN */
>  		rc = check_and_migrate_movable_pages(nr_pinned_pages, pages);
>  	} while (rc == -EAGAIN);
>  	memalloc_pin_restore(flags);
>  	return rc ? rc : nr_pinned_pages;
>  }
>
> -static bool is_valid_gup_flags(unsigned int gup_flags)
> +/*
> + * Check that the given flags are valid for the exported gup/pup interface, and
> + * update them with the required flags that the caller must have set.
> + */
> +static bool is_valid_gup_args(struct page **pages, struct vm_area_struct **vmas,
> +			      int *locked, unsigned int *gup_flags_p,
> +			      unsigned int to_set)
>  {
> +	unsigned int gup_flags = *gup_flags_p;
> +
>  	/*
> -	 * FOLL_PIN must only be set internally by the pin_user_pages*() APIs,
> -	 * never directly by the caller, so enforce that with an assertion:
> +	 * These flags not allowed to be specified externally to the gup
> +	 * interfaces:
> +	 * - FOLL_PIN/FOLL_TRIED/FOLL_FAST_ONLY are internal only
> +	 * - FOLL_REMOTE is internal only and used on follow_page()
>  	 */
> -	if (WARN_ON_ONCE(gup_flags & FOLL_PIN))
> +	if (WARN_ON_ONCE(gup_flags & (FOLL_PIN | FOLL_TRIED |
> +				      FOLL_REMOTE | FOLL_FAST_ONLY)))
> +		return false;
> +
> +	gup_flags |= to_set;
> +
> +	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
> +	if (WARN_ON_ONCE((gup_flags & (FOLL_PIN | FOLL_GET)) ==
> +			 (FOLL_PIN | FOLL_GET)))
> +		return false;
> +
> +	/* LONGTERM can only be specified when pinning */
> +	if (WARN_ON_ONCE(!(gup_flags & FOLL_PIN) && (gup_flags & FOLL_LONGTERM)))
> +		return false;
> +
> +	/* Pages input must be given if using GET/PIN */
> +	if (WARN_ON_ONCE((gup_flags & (FOLL_GET | FOLL_PIN)) && !pages))
>  		return false;
> +
> +	/* At the external interface locked must be set */
> +	if (WARN_ON_ONCE(locked && *locked != 1))
> +		return false;
> +
> +	/* We want to allow the pgmap to be hot-unplugged at all times */
> +	if (WARN_ON_ONCE((gup_flags & FOLL_LONGTERM) &&
> +			 (gup_flags & FOLL_PCI_P2PDMA)))
> +		return false;
> +
>  	/*
> -	 * FOLL_PIN is a prerequisite to FOLL_LONGTERM. Another way of saying
> -	 * that is, FOLL_LONGTERM is a specific case, more restrictive case of
> -	 * FOLL_PIN.
> +	 * Can't use VMAs with locked, as locked allows GUP to unlock
> +	 * which invalidates the vmas array
>  	 */
> -	if (WARN_ON_ONCE(gup_flags & FOLL_LONGTERM))
> +	if (WARN_ON_ONCE(vmas && locked))
>  		return false;
>
> +	*gup_flags_p = gup_flags;
>  	return true;
>  }
>
> @@ -2197,11 +2216,12 @@ long get_user_pages_remote(struct mm_struct *mm,
>  			   unsigned int gup_flags, struct page **pages,
>  			   struct vm_area_struct **vmas, int *locked)
>  {
> -	if (!is_valid_gup_flags(gup_flags))
> +	if (!is_valid_gup_args(pages, vmas, locked, &gup_flags,
> +			       FOLL_TOUCH | FOLL_REMOTE))
>  		return -EINVAL;
>
>  	return __get_user_pages_locked(mm, start, nr_pages, pages, vmas, locked,
> -				       gup_flags | FOLL_TOUCH | FOLL_REMOTE);
> +				       gup_flags);
>  }
>  EXPORT_SYMBOL(get_user_pages_remote);
>
> @@ -2235,11 +2255,11 @@ long get_user_pages(unsigned long start, unsigned long nr_pages,
>  		    unsigned int gup_flags, struct page **pages,
>  		    struct vm_area_struct **vmas)
>  {
> -	if (!is_valid_gup_flags(gup_flags))
> +	if (!is_valid_gup_args(pages, vmas, NULL, &gup_flags, FOLL_TOUCH))
>  		return -EINVAL;
>
>  	return __get_user_pages_locked(current->mm, start, nr_pages, pages,
> -				       vmas, NULL, gup_flags | FOLL_TOUCH);
> +				       vmas, NULL, gup_flags);
>  }
>  EXPORT_SYMBOL(get_user_pages);
>
> @@ -2263,8 +2283,11 @@ long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
>  {
>  	int locked = 0;
>
> +	if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, FOLL_TOUCH))
> +		return -EINVAL;
> +
>  	return __get_user_pages_locked(current->mm, start, nr_pages, pages,
> -				       NULL, &locked, gup_flags | FOLL_TOUCH);
> +				       NULL, &locked, gup_flags);
>  }
>  EXPORT_SYMBOL(get_user_pages_unlocked);
>
> @@ -2992,7 +3015,9 @@ int get_user_pages_fast_only(unsigned long start, int nr_pages,
>  	 * FOLL_FAST_ONLY is required in order to match the API description of
>  	 * this routine: no fall back to regular ("slow") GUP.
regular ("slow") GUP. > */ > - gup_flags |= FOLL_GET | FOLL_FAST_ONLY; > + if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, > + FOLL_GET | FOLL_FAST_ONLY)) > + return -EINVAL; > > nr_pinned = internal_get_user_pages_fast(start, nr_pages, gup_flags, > pages); > @@ -3029,16 +3054,14 @@ EXPORT_SYMBOL_GPL(get_user_pages_fast_only); > int get_user_pages_fast(unsigned long start, int nr_pages, > unsigned int gup_flags, struct page **pages) > { > - if (!is_valid_gup_flags(gup_flags)) > - return -EINVAL; > - > /* > * The caller may or may not have explicitly set FOLL_GET; either way is > * OK. However, internally (within mm/gup.c), gup fast variants must set > * FOLL_GET, because gup fast is always a "pin with a +1 page refcount" > * request. > */ > - gup_flags |= FOLL_GET; > + if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, FOLL_GET)) > + return -EINVAL; > return internal_get_user_pages_fast(start, nr_pages, gup_flags, pages); > } > EXPORT_SYMBOL_GPL(get_user_pages_fast); > @@ -3062,14 +3085,8 @@ EXPORT_SYMBOL_GPL(get_user_pages_fast); > int pin_user_pages_fast(unsigned long start, int nr_pages, > unsigned int gup_flags, struct page **pages) > { > - /* FOLL_GET and FOLL_PIN are mutually exclusive. */ > - if (WARN_ON_ONCE(gup_flags & FOLL_GET)) > + if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, FOLL_PIN)) > return -EINVAL; > - > - if (WARN_ON_ONCE(!pages)) > - return -EINVAL; > - > - gup_flags |= FOLL_PIN; > return internal_get_user_pages_fast(start, nr_pages, gup_flags, pages); > } > EXPORT_SYMBOL_GPL(pin_user_pages_fast); > @@ -3085,20 +3102,14 @@ int pin_user_pages_fast_only(unsigned long start, int nr_pages, > { > int nr_pinned; > > - /* > - * FOLL_GET and FOLL_PIN are mutually exclusive. Note that the API > - * rules require returning 0, rather than -errno: > - */ > - if (WARN_ON_ONCE(gup_flags & FOLL_GET)) > - return 0; > - > - if (WARN_ON_ONCE(!pages)) > - return 0; > /* > * FOLL_FAST_ONLY is required in order to match the API description of > * this routine: no fall back to regular ("slow") GUP. > */ > - gup_flags |= (FOLL_PIN | FOLL_FAST_ONLY); > + if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, > + FOLL_PIN | FOLL_FAST_ONLY)) > + return 0; > + > nr_pinned = internal_get_user_pages_fast(start, nr_pages, gup_flags, > pages); > /* > @@ -3140,16 +3151,11 @@ long pin_user_pages_remote(struct mm_struct *mm, > unsigned int gup_flags, struct page **pages, > struct vm_area_struct **vmas, int *locked) > { > - /* FOLL_GET and FOLL_PIN are mutually exclusive. */ > - if (WARN_ON_ONCE(gup_flags & FOLL_GET)) > - return -EINVAL; > - > - if (WARN_ON_ONCE(!pages)) > - return -EINVAL; > - > + if (!is_valid_gup_args(pages, vmas, locked, &gup_flags, > + FOLL_PIN | FOLL_TOUCH | FOLL_REMOTE)) > + return 0; > return __gup_longterm_locked(mm, start, nr_pages, pages, vmas, locked, > - gup_flags | FOLL_PIN | FOLL_TOUCH | > - FOLL_REMOTE); > + gup_flags); > } > EXPORT_SYMBOL(pin_user_pages_remote); > > @@ -3174,14 +3180,8 @@ long pin_user_pages(unsigned long start, unsigned long nr_pages, > unsigned int gup_flags, struct page **pages, > struct vm_area_struct **vmas) > { > - /* FOLL_GET and FOLL_PIN are mutually exclusive. 
> -	if (WARN_ON_ONCE(gup_flags & FOLL_GET))
> -		return -EINVAL;
> -
> -	if (WARN_ON_ONCE(!pages))
> -		return -EINVAL;
> -
> -	gup_flags |= FOLL_PIN;
> +	if (!is_valid_gup_args(pages, vmas, NULL, &gup_flags, FOLL_PIN))
> +		return 0;
>  	return __gup_longterm_locked(current->mm, start, nr_pages,
>  				     pages, vmas, NULL, gup_flags);
>  }
> @@ -3195,15 +3195,12 @@ EXPORT_SYMBOL(pin_user_pages);
>  long pin_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
>  			     struct page **pages, unsigned int gup_flags)
>  {
> -	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
> -	if (WARN_ON_ONCE(gup_flags & FOLL_GET))
> -		return -EINVAL;
>  	int locked = 0;
>
> -	if (WARN_ON_ONCE(!pages))
> -		return -EINVAL;
> +	if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags,
> +			       FOLL_PIN | FOLL_TOUCH))
> +		return 0;
>
> -	gup_flags |= FOLL_PIN | FOLL_TOUCH;
>  	return __gup_longterm_locked(current->mm, start, nr_pages, pages, NULL,
>  				     &locked, gup_flags);
>  }
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 1d6977dc6b31ba..1343a7d88299be 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1042,11 +1042,6 @@ struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
>
>  	assert_spin_locked(pmd_lockptr(mm, pmd));
>
> -	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
> -	if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
> -			 (FOLL_PIN | FOLL_GET)))
> -		return NULL;
> -
>  	if (flags & FOLL_WRITE && !pmd_write(*pmd))
>  		return NULL;
>
> @@ -1205,11 +1200,6 @@ struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
>  	if (flags & FOLL_WRITE && !pud_write(*pud))
>  		return NULL;
>
> -	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
> -	if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
> -			 (FOLL_PIN | FOLL_GET)))
> -		return NULL;
> -
>  	if (pud_present(*pud) && pud_devmap(*pud))
>  		/* pass */;
>  	else
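For completeness, this is roughly what the consolidated checks mean for
callers once the series lands. A hypothetical kernel-context fragment,
for illustration only -- it assumes the usual linux/mm.h declarations
and an existing user mapping at addr, and is not a self-contained module:

  struct page *pages[4];
  unsigned long addr;  /* assume this holds a valid user-space address */
  int nr;

  /* Fine: pin_user_pages_fast() sets FOLL_PIN internally. */
  nr = pin_user_pages_fast(addr, 4, FOLL_WRITE, pages);

  /*
   * Rejected at the boundary: FOLL_GET conflicts with the FOLL_PIN that
   * the wrapper adds, so is_valid_gup_args() warns once and -EINVAL is
   * returned.
   */
  nr = pin_user_pages_fast(addr, 4, FOLL_GET, pages);

  /* Also rejected: GET/PIN callers must supply a pages array. */
  nr = pin_user_pages_fast(addr, 4, 0, NULL);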