Date: Fri, 28 Apr 2023 16:20:46 +0200
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
Subject: Re: [PATCH v5] mm/gup: disallow GUP writing to file-backed mappings by default
To: Lorenzo Stoakes, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Andrew Morton
Cc: Jason Gunthorpe, Jens Axboe, Matthew Wilcox, Dennis Dalessandro,
    Leon Romanovsky, Christian Benvenuti, Nelson Escobar, Bernard Metzler,
    Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland,
    Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter,
    Bjorn Topel, Magnus Karlsson, Maciej Fijalkowski, Jonathan Lemon,
    "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    Christian Brauner, Richard Cochran, Alexei Starovoitov, Daniel Borkmann,
    Jesper Dangaard Brouer, John Fastabend, linux-fsdevel@vger.kernel.org,
    linux-perf-users@vger.kernel.org, netdev@vger.kernel.org,
    bpf@vger.kernel.org, Oleg Nesterov, John Hubbard, Jan Kara,
    "Kirill A. Shutemov", Pavel Begunkov, Mika Penttila, David Howells,
    Christoph Hellwig
In-Reply-To: <6b73e692c2929dc4613af711bdf92e2ec1956a66.1682638385.git.lstoakes@gmail.com>
References: <6b73e692c2929dc4613af711bdf92e2ec1956a66.1682638385.git.lstoakes@gmail.com>

Sorry for jumping in late, I'm on vacation :)

On 28.04.23 01:42, Lorenzo Stoakes wrote:
> Writing to file-backed mappings which require folio dirty tracking using
> GUP is a fundamentally broken operation, as kernel write access to GUP
> mappings do not adhere to the semantics expected by a file system.
>
> A GUP caller uses the direct mapping to access the folio, which does not
> cause write notify to trigger, nor does it enforce that the caller marks
> the folio dirty.

How should we enforce it? It would be a BUG in the GUP user.

>
> The problem arises when, after an initial write to the folio, writeback
> results in the folio being cleaned and then the caller, via the GUP
> interface, writes to the folio again.
>
> As a result of the use of this secondary, direct, mapping to the folio no
> write notify will occur, and if the caller does mark the folio dirty, this
> will be done so unexpectedly.

Right, in mprotect() code we only allow upgrading write permissions in this
case if the PTE is dirty, so we always go via the page fault path.
>
> For example, consider the following scenario:-
>
> 1. A folio is written to via GUP which write-faults the memory, notifying
>    the file system and dirtying the folio.
> 2. Later, writeback is triggered, resulting in the folio being cleaned and
>    the PTE being marked read-only.

How would that be triggered? Would that be writeback triggered by, e.g.,
fsync, which Jan tried to tackle recently?

> 3. The GUP caller writes to the folio, as it is mapped read/write via the
>    direct mapping.
> 4. The GUP caller, now done with the page, unpins it and sets it dirty
>    (though it does not have to).
>
> This results in both data being written to a folio without writenotify, and
> the folio being dirtied unexpectedly (if the caller decides to do so).
>
> This issue was first reported by Jan Kara [1] in 2018, where the problem
> resulted in file system crashes.
>
> This is only relevant when the mappings are file-backed and the underlying
> file system requires folio dirty tracking. File systems which do not, such
> as shmem or hugetlb, are not at risk and therefore can be written to
> without issue.
>
> Unfortunately this limitation of GUP has been present for some time and
> requires future rework of the GUP API in order to provide correct write
> access to such mappings.
>
> However, for the time being we introduce this check to prevent the most
> egregious case of this occurring, use of the FOLL_LONGTERM pin.
>
> These mappings are considerably more likely to be written to after
> folios are cleaned and thus simply must not be permitted to do so.
>
> As part of this change we separate out vma_needs_dirty_tracking() as a
> helper function to determine this which is distinct from
> vma_wants_writenotify() which is specific to determining which PTE flags to
> set.
>
> [1]: https://lore.kernel.org/linux-mm/20180103100430.GE4911@quack2.suse.cz/
>

This change has the potential to break existing setups. Simple example:
libvirt domains configured for file-backed VM memory that also have a vfio
device configured. This can easily be configured by users (evolving VM
configuration, copy-paste etc.), and it works from a VM perspective, because
the guest memory is essentially stale once the VM is shut down and the pages
are unpinned. At least we're not concerned about stale data on disk.

With your changes, such VMs would no longer start, breaking existing user
setups with a kernel update.

I don't really see a lot of reasons to perform this change now. It's been
known to be problematic for a long time, and people are working on a fix
(I see Jan is already CCed; CCing Dave and Christoph). The FOLL_LONGTERM
check only handles some of the problematic cases, so it's not even a
complete blocker.

I know Jason and John will disagree, but I think we want to be very careful
with changing the default.

Sure, we could warn, or convert individual users using a flag (io_uring).
But maybe we should invest more energy on a fix?
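To make the "breaking existing setups" point more concrete, here is a
minimal userspace sketch (assuming liburing is available; the file name and
the exact error are just for illustration) of the kind of pattern that would
start being refused: registering a writable, file-backed MAP_SHARED region
as an io_uring fixed buffer, which takes a long-term writable pin on the
pages.

#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
	struct io_uring ring;
	struct iovec iov;
	void *buf;
	int fd, ret;

	/* Writable, file-backed MAP_SHARED mapping on a dirty-tracking fs. */
	fd = open("testfile", O_RDWR | O_CREAT, 0600);
	if (fd < 0 || ftruncate(fd, 4096) < 0)
		return 1;
	buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (buf == MAP_FAILED)
		return 1;

	if (io_uring_queue_init(8, &ring, 0) < 0)
		return 1;

	/* Fixed-buffer registration long-term pins the pages for writing. */
	iov.iov_base = buf;
	iov.iov_len = 4096;
	ret = io_uring_register_buffers(&ring, &iov, 1);
	printf("register_buffers: %s\n", ret ? strerror(-ret) : "OK");

	io_uring_queue_exit(&ring);
	return 0;
}

With the patch applied I'd expect such a registration to start failing
(presumably with -EFAULT), just like the file-backed guest memory case
above.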
> Suggested-by: Jason Gunthorpe
> Signed-off-by: Lorenzo Stoakes
> ---
>  include/linux/mm.h |  1 +
>  mm/gup.c           | 41 ++++++++++++++++++++++++++++++++++++++++-
>  mm/mmap.c          | 36 +++++++++++++++++++++++++++---------
>  3 files changed, 68 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 37554b08bb28..f7da02fc89c6 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2433,6 +2433,7 @@ extern unsigned long move_page_tables(struct vm_area_struct *vma,
>  #define  MM_CP_UFFD_WP_ALL	(MM_CP_UFFD_WP | \
>  				 MM_CP_UFFD_WP_RESOLVE)
>
> +bool vma_needs_dirty_tracking(struct vm_area_struct *vma);
>  int vma_wants_writenotify(struct vm_area_struct *vma, pgprot_t vm_page_prot);
>  static inline bool vma_wants_manual_pte_write_upgrade(struct vm_area_struct *vma)
>  {
> diff --git a/mm/gup.c b/mm/gup.c
> index 1f72a717232b..d36a5db9feb1 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -959,16 +959,51 @@ static int faultin_page(struct vm_area_struct *vma,
>  	return 0;
>  }
>
> +/*
> + * Writing to file-backed mappings which require folio dirty tracking using GUP
> + * is a fundamentally broken operation, as kernel write access to GUP mappings
> + * do not adhere to the semantics expected by a file system.
> + *
> + * Consider the following scenario:-
> + *
> + * 1. A folio is written to via GUP which write-faults the memory, notifying
> + *    the file system and dirtying the folio.
> + * 2. Later, writeback is triggered, resulting in the folio being cleaned and
> + *    the PTE being marked read-only.
> + * 3. The GUP caller writes to the folio, as it is mapped read/write via the
> + *    direct mapping.
> + * 4. The GUP caller, now done with the page, unpins it and sets it dirty
> + *    (though it does not have to).
> + *
> + * This results in both data being written to a folio without writenotify, and
> + * the folio being dirtied unexpectedly (if the caller decides to do so).
> + */
> +static bool writeable_file_mapping_allowed(struct vm_area_struct *vma,
> +					   unsigned long gup_flags)
> +{
> +	/* If we aren't pinning then no problematic write can occur. */
> +	if (!(gup_flags & (FOLL_GET | FOLL_PIN)))
> +		return true;

FOLL_LONGTERM only applies to FOLL_PIN. This check can be dropped.

> +
> +	/* We limit this check to the most egregious case - a long term pin. */
> +	if (!(gup_flags & FOLL_LONGTERM))
> +		return true;
> +
> +	/* If the VMA requires dirty tracking then GUP will be problematic. */
> +	return vma_needs_dirty_tracking(vma);
> +}
> +
>  static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
>  {
>  	vm_flags_t vm_flags = vma->vm_flags;
>  	int write = (gup_flags & FOLL_WRITE);
>  	int foreign = (gup_flags & FOLL_REMOTE);
> +	bool vma_anon = vma_is_anonymous(vma);
>
>  	if (vm_flags & (VM_IO | VM_PFNMAP))
>  		return -EFAULT;
>
> -	if (gup_flags & FOLL_ANON && !vma_is_anonymous(vma))
> +	if ((gup_flags & FOLL_ANON) && !vma_anon)
>  		return -EFAULT;
>
>  	if ((gup_flags & FOLL_LONGTERM) && vma_is_fsdax(vma))
> @@ -978,6 +1013,10 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
>  		return -EFAULT;
>
>  	if (write) {
> +		if (!vma_anon &&
> +		    !writeable_file_mapping_allowed(vma, gup_flags))
> +			return -EFAULT;
> +
>  		if (!(vm_flags & VM_WRITE)) {
>  			if (!(gup_flags & FOLL_FORCE))
>  				return -EFAULT;
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 536bbb8fa0ae..7b6344d1832a 100644
> --- a/mm/mmap.c

I'm probably missing something, why don't we have to handle GUP-fast
(having said that, it's hard to handle ;) )?
The sequence you describe above should apply to GUP-fast as well, no?

1) Pin writable mapped page using GUP-fast
2) Trigger writeback
3) Write to page via pin
4) Unpin and set dirty

-- 
Thanks,

David / dhildenb