Date: Fri, 17 May 2024 16:07:31 +0100
From: Vincent Donnefort <vdonnefort@google.com>
To: David Hildenbrand
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Andrew Morton,
    Dan Williams, rostedt@goodmis.org
Subject: Re: [PATCH v1 1/2] mm/memory: cleanly support zeropage in
    vm_insert_page*(), vm_map_pages*() and vmf_insert_mixed()
References: <20240430204044.52755-1-david@redhat.com>
    <20240430204044.52755-2-david@redhat.com>
In-Reply-To: <20240430204044.52755-2-david@redhat.com>

Hi David,

[...]

> -static int validate_page_before_insert(struct page *page)
> +static bool vm_mixed_zeropage_allowed(struct vm_area_struct *vma)
> +{
> +	VM_WARN_ON_ONCE(vma->vm_flags & VM_PFNMAP);
> +	/*
> +	 * Whoever wants to forbid the zeropage after some zeropages
> +	 * might already have been mapped has to scan the page tables and
> +	 * bail out on any zeropages. Zeropages in COW mappings can
> +	 * be unshared using FAULT_FLAG_UNSHARE faults.
> +	 */
> +	if (mm_forbids_zeropage(vma->vm_mm))
> +		return false;
> +	/* zeropages in COW mappings are common and unproblematic. */
> +	if (is_cow_mapping(vma->vm_flags))
> +		return true;
> +	/* Mappings that do not allow for writable PTEs are unproblematic. */
> +	if (!(vma->vm_flags & (VM_WRITE | VM_MAYWRITE)))
> +		return false;

Shouldn't we return true here? (A quick sketch of what I mean follows at
the end of this mail.)

> +	/*
> +	 * Why not allow any VMA that has vm_ops->pfn_mkwrite? GUP could
> +	 * find the shared zeropage and longterm-pin it, which would
> +	 * be problematic as soon as the zeropage gets replaced by a different
> +	 * page due to vma->vm_ops->pfn_mkwrite, because what's mapped would
> +	 * now differ to what GUP looked up. FSDAX is incompatible to
> +	 * FOLL_LONGTERM and VM_IO is incompatible to GUP completely (see
> +	 * check_vma_flags).
> +	 */
> +	return vma->vm_ops && vma->vm_ops->pfn_mkwrite &&
> +	       (vma_is_fsdax(vma) || vma->vm_flags & VM_IO);
> +}
> +

[...]

> 
> -/*
> - * This is the old fallback for page remapping.
> - *
> - * For historical reasons, it only allows reserved pages. Only
> - * old drivers should use this, and they needed to mark their
> - * pages reserved for the old functions anyway.
> - */
>  static int insert_page(struct vm_area_struct *vma, unsigned long addr,
>  			struct page *page, pgprot_t prot)
>  {
> @@ -2023,7 +2065,7 @@ static int insert_page(struct vm_area_struct *vma, unsigned long addr,
>  	pte_t *pte;
>  	spinlock_t *ptl;
>  
> -	retval = validate_page_before_insert(page);
> +	retval = validate_page_before_insert(vma, page);
>  	if (retval)
>  		goto out;
>  	retval = -ENOMEM;
> @@ -2043,7 +2085,7 @@ static int insert_page_in_batch_locked(struct vm_area_struct *vma, pte_t *pte,
> 
>  	if (!page_count(page))
>  		return -EINVAL;

This test here prevents inserting the zero-page.

> -	err = validate_page_before_insert(page);
> +	err = validate_page_before_insert(vma, page);
>  	if (err)
>  		return err;
>  	return insert_page_into_pte_locked(vma, pte, addr, page, prot);
> @@ -2149,7 +2191,8 @@ EXPORT_SYMBOL(vm_insert_pages);
>   * @page: source kernel page
>   *
>   * This allows drivers to insert individual pages they've allocated
> - * into a user vma.
> + * into a user vma. The zeropage is supported in some VMAs,
> + * see vm_mixed_zeropage_allowed().
>   *
>   * The page has to be a nice clean _individual_ kernel allocation.
>   * If you allocate a compound page, you need to have marked it as
> @@ -2195,6 +2238,8 @@ EXPORT_SYMBOL(vm_insert_page);
>   * @offset: user's requested vm_pgoff
>   *
>   * This allows drivers to map range of kernel pages into a user vma.
> + * The zeropage is supported in some VMAs, see
> + * vm_mixed_zeropage_allowed().
>   *
>   * Return: 0 on success and error code otherwise.
>   */
> @@ -2410,8 +2455,11 @@ vm_fault_t vmf_insert_pfn(struct vm_area_struct *vma, unsigned long addr,
>  }
>  EXPORT_SYMBOL(vmf_insert_pfn);
>  
> -static bool vm_mixed_ok(struct vm_area_struct *vma, pfn_t pfn)
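
To make the "return true" point above concrete, here is a small standalone
model of the helper's decision flow with that branch flipped. This is only
a sketch, not the kernel code: the VM_* bits, the vma struct, is_cow() and
the pfn_mkwrite/fsdax tests are stubbed, and mm_forbids_zeropage() is left
out.

#include <stdbool.h>
#include <stdio.h>

/* Stubbed vm_flags bits (values as in include/linux/mm.h). */
#define VM_WRITE	0x00000002UL
#define VM_SHARED	0x00000008UL
#define VM_MAYWRITE	0x00000020UL
#define VM_IO		0x00004000UL

/* Minimal stand-in for struct vm_area_struct. */
struct vma_model {
	unsigned long vm_flags;
	bool has_pfn_mkwrite;	/* models vma->vm_ops && vma->vm_ops->pfn_mkwrite */
	bool is_fsdax;		/* models vma_is_fsdax(vma) */
};

/* Simplified from is_cow_mapping(): MAYWRITE private mapping. */
static bool is_cow(const struct vma_model *v)
{
	return (v->vm_flags & (VM_SHARED | VM_MAYWRITE)) == VM_MAYWRITE;
}

static bool zeropage_allowed(const struct vma_model *v)
{
	if (is_cow(v))
		return true;
	/*
	 * The suggestion: a mapping that can never have writable PTEs can
	 * never let anyone write through a zeropage PTE either, so it
	 * should be safe to allow the zeropage. The patch as posted
	 * returns false here.
	 */
	if (!(v->vm_flags & (VM_WRITE | VM_MAYWRITE)))
		return true;
	return v->has_pfn_mkwrite && (v->is_fsdax || (v->vm_flags & VM_IO));
}

int main(void)
{
	/* Read-only shared mapping: allowed with the flipped branch. */
	struct vma_model ro_shared = { .vm_flags = VM_SHARED };
	/* Writable fsdax mapping with pfn_mkwrite: allowed either way. */
	struct vma_model fsdax = {
		.vm_flags = VM_SHARED | VM_WRITE | VM_MAYWRITE,
		.has_pfn_mkwrite = true,
		.is_fsdax = true,
	};

	printf("ro shared: %d\n", zeropage_allowed(&ro_shared));	/* 1 */
	printf("fsdax rw:  %d\n", zeropage_allowed(&fsdax));		/* 1 */
	return 0;
}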