Date: Tue, 24 Jan 2023 18:41:45 -0800
Subject: Re: [PATCH v2 12/13] mm/gup: move gup_must_unshare() to mm/internal.h
From: John Hubbard
To: Jason Gunthorpe
CC: Alistair Popple, David Hildenbrand, David Howells, Christoph Hellwig,
 "Mike Rapoport (IBM)"
In-Reply-To: <12-v2-987e91b59705+36b-gup_tidy_jgg@nvidia.com>
References: <12-v2-987e91b59705+36b-gup_tidy_jgg@nvidia.com>
Content-Type: text/plain; charset="UTF-8"; format=flowed
On 1/24/23 12:34, Jason Gunthorpe wrote:
> This function is only used in gup.c and closely related. It touches
> FOLL_PIN so it must be moved before the next patch.
>
> Signed-off-by: Jason Gunthorpe
> ---
>  include/linux/mm.h | 65 ----------------------------------------------
>  mm/internal.h      | 65 ++++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 65 insertions(+), 65 deletions(-)

Reviewed-by: John Hubbard

thanks,
-- 
John Hubbard
NVIDIA

>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index a47a6e8a9c78be..e0bacf9f2c5ebe 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -3087,71 +3087,6 @@ static inline int vm_fault_to_errno(vm_fault_t vm_fault, int foll_flags)
>  	return 0;
>  }
>  
> -/*
> - * Indicates for which pages that are write-protected in the page table,
> - * whether GUP has to trigger unsharing via FAULT_FLAG_UNSHARE such that the
> - * GUP pin will remain consistent with the pages mapped into the page tables
> - * of the MM.
> - *
> - * Temporary unmapping of PageAnonExclusive() pages or clearing of
> - * PageAnonExclusive() has to protect against concurrent GUP:
> - * * Ordinary GUP: Using the PT lock
> - * * GUP-fast and fork(): mm->write_protect_seq
> - * * GUP-fast and KSM or temporary unmapping (swap, migration): see
> - *   page_try_share_anon_rmap()
> - *
> - * Must be called with the (sub)page that's actually referenced via the
> - * page table entry, which might not necessarily be the head page for a
> - * PTE-mapped THP.
> - *
> - * If the vma is NULL, we're coming from the GUP-fast path and might have
> - * to fallback to the slow path just to lookup the vma.
> - */
> -static inline bool gup_must_unshare(struct vm_area_struct *vma,
> -				    unsigned int flags, struct page *page)
> -{
> -	/*
> -	 * FOLL_WRITE is implicitly handled correctly as the page table entry
> -	 * has to be writable -- and if it references (part of) an anonymous
> -	 * folio, that part is required to be marked exclusive.
> -	 */
> -	if ((flags & (FOLL_WRITE | FOLL_PIN)) != FOLL_PIN)
> -		return false;
> -	/*
> -	 * Note: PageAnon(page) is stable until the page is actually getting
> -	 * freed.
> -	 */
> -	if (!PageAnon(page)) {
> -		/*
> -		 * We only care about R/O long-term pining: R/O short-term
> -		 * pinning does not have the semantics to observe successive
> -		 * changes through the process page tables.
> -		 */
> -		if (!(flags & FOLL_LONGTERM))
> -			return false;
> -
> -		/* We really need the vma ... */
> -		if (!vma)
> -			return true;
> -
> -		/*
> -		 * ... because we only care about writable private ("COW")
> -		 * mappings where we have to break COW early.
> -		 */
> -		return is_cow_mapping(vma->vm_flags);
> -	}
> -
> -	/* Paired with a memory barrier in page_try_share_anon_rmap(). */
> -	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP))
> -		smp_rmb();
> -
> -	/*
> -	 * Note that PageKsm() pages cannot be exclusive, and consequently,
> -	 * cannot get pinned.
> -	 */
> -	return !PageAnonExclusive(page);
> -}
> -
>  /*
>   * Indicates whether GUP can follow a PROT_NONE mapped page, or whether
>   * a (NUMA hinting) fault is required.
> diff --git a/mm/internal.h b/mm/internal.h
> index 0f035bcaf133f5..5c1310b98db64d 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -854,6 +854,71 @@ int migrate_device_coherent_page(struct page *page);
>  struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags);
>  int __must_check try_grab_page(struct page *page, unsigned int flags);
>  
> +/*
> + * Indicates for which pages that are write-protected in the page table,
> + * whether GUP has to trigger unsharing via FAULT_FLAG_UNSHARE such that the
> + * GUP pin will remain consistent with the pages mapped into the page tables
> + * of the MM.
> + *
> + * Temporary unmapping of PageAnonExclusive() pages or clearing of
> + * PageAnonExclusive() has to protect against concurrent GUP:
> + * * Ordinary GUP: Using the PT lock
> + * * GUP-fast and fork(): mm->write_protect_seq
> + * * GUP-fast and KSM or temporary unmapping (swap, migration): see
> + *   page_try_share_anon_rmap()
> + *
> + * Must be called with the (sub)page that's actually referenced via the
> + * page table entry, which might not necessarily be the head page for a
> + * PTE-mapped THP.
> + *
> + * If the vma is NULL, we're coming from the GUP-fast path and might have
> + * to fallback to the slow path just to lookup the vma.
> + */
> +static inline bool gup_must_unshare(struct vm_area_struct *vma,
> +				    unsigned int flags, struct page *page)
> +{
> +	/*
> +	 * FOLL_WRITE is implicitly handled correctly as the page table entry
> +	 * has to be writable -- and if it references (part of) an anonymous
> +	 * folio, that part is required to be marked exclusive.
> +	 */
> +	if ((flags & (FOLL_WRITE | FOLL_PIN)) != FOLL_PIN)
> +		return false;
> +	/*
> +	 * Note: PageAnon(page) is stable until the page is actually getting
> +	 * freed.
> +	 */
> +	if (!PageAnon(page)) {
> +		/*
> +		 * We only care about R/O long-term pining: R/O short-term
> +		 * pinning does not have the semantics to observe successive
> +		 * changes through the process page tables.
> +		 */
> +		if (!(flags & FOLL_LONGTERM))
> +			return false;
> +
> +		/* We really need the vma ... */
> +		if (!vma)
> +			return true;
> +
> +		/*
> +		 * ... because we only care about writable private ("COW")
> +		 * mappings where we have to break COW early.
> +		 */
> +		return is_cow_mapping(vma->vm_flags);
> +	}
> +
> +	/* Paired with a memory barrier in page_try_share_anon_rmap(). */
> +	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP))
> +		smp_rmb();
> +
> +	/*
> +	 * Note that PageKsm() pages cannot be exclusive, and consequently,
> +	 * cannot get pinned.
> +	 */
> +	return !PageAnonExclusive(page);
> +}
> +
>  extern bool mirrored_kernelcore;
>  
>  static inline bool vma_soft_dirty_enabled(struct vm_area_struct *vma)