Message-ID: <7600024f-98a2-7231-548b-26b07090f581@redhat.com>
Date: Fri, 21 Oct 2022 18:26:59 +0200
Subject: Re: Avoiding allocation of unused shmem page
From: David Hildenbrand <david@redhat.com>
To: Peter Xu
Cc: Matthew Wilcox, linux-mm@kvack.org, Hugh Dickins
References: <4e1f4fb4-559e-2be3-c091-40ce0130b6c3@redhat.com>
 <3b19120b-05f6-10b8-c1af-67d8eb60fea0@redhat.com>
Organization: Red Hat

On 21.10.22 18:19, David Hildenbrand wrote:
> On 21.10.22 18:01, Peter Xu wrote:
>> On Fri, Oct 21, 2022 at 05:17:08PM +0200, David Hildenbrand wrote:
>>> On 21.10.22 17:08, Peter Xu wrote:
>>>> On Fri, Oct 21, 2022 at 04:45:27PM +0200, David Hildenbrand wrote:
>>>>> On 21.10.22 16:28, Peter Xu wrote:
>>>>>> On Fri, Oct 21, 2022 at 04:10:41PM +0200, David Hildenbrand wrote:
>>>>>>> On 21.10.22 16:01, Peter Xu wrote:
>>>>>>>> On Fri, Oct 21, 2022 at 09:23:00AM +0200, David Hildenbrand wrote:
>>>>>>>>> On 20.10.22 23:10, Peter Xu wrote:
>>>>>>>>>> On Thu, Oct 20, 2022 at 09:14:09PM +0100, Matthew Wilcox wrote:
>>>>>>>>>>> In yesterday's call, David brought up the case where we fallocate
>>>>>>>>>>> a file in shmem, call mmap(MAP_PRIVATE) and then store to a page
>>>>>>>>>>> which is over a hole. That currently causes shmem to allocate a
>>>>>>>>>>> page, zero-fill it, then COW it, resulting in two pages being
>>>>>>>>>>> allocated when only the COW page really needs to be allocated.
>>>>>>>>>>>
>>>>>>>>>>> The path we currently take through the MM when we take the page
>>>>>>>>>>> fault looks like this (correct me if I'm wrong ...):
>>>>>>>>>>>
>>>>>>>>>>> handle_mm_fault()
>>>>>>>>>>>   __handle_mm_fault()
>>>>>>>>>>>     handle_pte_fault()
>>>>>>>>>>>       do_fault()
>>>>>>>>>>>         do_cow_fault()
>>>>>>>>>>>           __do_fault()
>>>>>>>>>>>             vm_ops->fault()
>>>>>>>>>>>
>>>>>>>>>>> ... which is where we come into shmem_fault(). Apart from the
>>>>>>>>>>> horrendous hole-punch handling case, shmem_fault() is quite simple:
>>>>>>>>>>>
>>>>>>>>>>> 	err = shmem_get_folio_gfp(inode, vmf->pgoff, &folio, SGP_CACHE,
>>>>>>>>>>> 				  gfp, vma, vmf, &ret);
>>>>>>>>>>> 	if (err)
>>>>>>>>>>> 		return vmf_error(err);
>>>>>>>>>>> 	vmf->page = folio_file_page(folio, vmf->pgoff);
>>>>>>>>>>> 	return ret;
>>>>>>>>>>>
>>>>>>>>>>> What we could do here is detect this case. Something like:
>>>>>>>>>>>
>>>>>>>>>>> 	enum sgp_type sgp = SGP_CACHE;
>>>>>>>>>>>
>>>>>>>>>>> 	if ((vmf->flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED))
>>>>>>>>>>> 		sgp = SGP_READ;
>>>>>>>>>>
>>>>>>>>>> Yes this will start to save the space, but just to mention this may
>>>>>>>>>> start to break anything that still depends on the page cache to
>>>>>>>>>> work. E.g., it'll change behavior if the vma is registered with
>>>>>>>>>> uffd missing mode; we'll start to lose MISSING events for these
>>>>>>>>>> private mappings. Not sure whether there are other side effects.
>>>>>>>>>
>>>>>>>>> I don't follow, can you elaborate?
>>>>>>>>>
>>>>>>>>> hugetlb doesn't perform this kind of unnecessary allocation and
>>>>>>>>> should be fine with regard to uffd. Why should it matter here, and
>>>>>>>>> how exactly would a problematic sequence look?
>>>>>>>>
>>>>>>>> Hugetlb is special because hugetlb detects the pte first and relies
>>>>>>>> on the pte at least for uffd. shmem does not.
>>>>>>>>
>>>>>>>> Feel free to also reference the recent fix which relies on the
>>>>>>>> stable hugetlb pte, commit 2ea7ff1e39cbe375.
>>>>>>>
>>>>>>> Sorry to be dense here, but I don't follow how that relates.
>>>>>>>
>>>>>>> Assume we have a MAP_PRIVATE shmem mapping and someone registers uffd
>>>>>>> missing events on that mapping.
>>>>>>>
>>>>>>> Assume we get a page fault on a hole. We detect no page is mapped and
>>>>>>> check if the page cache has a page mapped -- which is also not the
>>>>>>> case, because there is a hole.
>>>>>>>
>>>>>>> So we notify uffd.
>>>>>>>
>>>>>>> Uffd will place a page. It should *not* touch the page cache and only
>>>>>>> insert that page into the page table -- otherwise we'd be violating
>>>>>>> MAP_PRIVATE semantics.
>>>>>>
>>>>>> That's actually exactly what we do right now... we insert into the
>>>>>> page cache for the shmem. See shmem_mfill_atomic_pte().
>>>>>>
>>>>>> Why does it violate MAP_PRIVATE? Private pages only guarantee
>>>>>> exclusive ownership of pages; I don't see why that should restrict
>>>>>> uffd behavior. Uffd missing mode (afaiu) is defined to resolve page
>>>>>> cache misses in this case. Hugetlb is special, but shmem is not, IMO,
>>>>>> compared to most of the rest of the file systems.
>>>>>
>>>>> If a write (or uffd placement) via a MAP_PRIVATE mapping results in
>>>>> other shared/private mappings observing these modifications, you have
>>>>> a clear violation of MAP_PRIVATE semantics.
>>>>
>>>> I think I understand what you meant, but just to mention again that I
>>>> think it's a matter of how we defined the uffd missing semantics when
>>>> shmem missing mode was introduced years ago. It does not need to be
>>>> the same semantic as writing directly to a private mapping.
>>>>
>>>
>>> I think uffd does exactly the right thing in mfill_atomic_pte:
>>>
>>> 	/*
>>> 	 * The normal page fault path for a shmem will invoke the
>>> 	 * fault, fill the hole in the file and COW it right away. The
>>> 	 * result generates plain anonymous memory. So when we are
>>> 	 * asked to fill an hole in a MAP_PRIVATE shmem mapping, we'll
>>> 	 * generate anonymous memory directly without actually filling
>>> 	 * the hole. For the MAP_PRIVATE case the robustness check
>>> 	 * only happens in the pagetable (to verify it's still none)
>>> 	 * and not in the radix tree.
>>> 	 */
>>> 	if (!(dst_vma->vm_flags & VM_SHARED)) {
>>> 		if (mode == MCOPY_ATOMIC_NORMAL)
>>> 			err = mcopy_atomic_pte(dst_mm, dst_pmd, dst_vma,
>>> 					       dst_addr, src_addr, page,
>>> 					       wp_copy);
>>> 		else
>>> 			err = mfill_zeropage_pte(dst_mm, dst_pmd,
>>> 						 dst_vma, dst_addr);
>>> 	} else {
>>> 		err = shmem_mfill_atomic_pte(dst_mm, dst_pmd, dst_vma,
>>> 					     dst_addr, src_addr,
>>> 					     mode != MCOPY_ATOMIC_NORMAL,
>>> 					     wp_copy, page);
>>> 	}
>>>
>>> Unless we have a writable shared mapping, we end up not touching the
>>> page cache.
>>>
>>> After what I understand from your last message (maybe I understood it
>>> wrong), I tried exploiting uffd behavior by writing into a hole of a
>>> file without write permissions using uffd. I failed because it does
>>> the right thing ;)
>>
>> Very interesting to learn this, thanks for the pointer, David. :)
>> Definitely helpful for me to know the vma security model better.
>>
>> Though note that it'll be a different topic if we go back to the
>> original problem we're discussing -- the fake-read approach for shmem
>> will still keep the hole in the file, which will still keep MISSING
>> events from being generated.
>>
>> Said that, I don't really know whether there can be a real impact on
>> any uffd users, or anything else that similarly accesses the file
>> cache.
>
> One odd behavior I could think of: a process A uses uffd on a
> MAP_SHARED shmem mapping, and some other process B (e.g., with
> read-only permissions) has a MAP_PRIVATE mapping of the same file.
>
> A read (or a write) from process B via the private mapping would result
> in process A not receiving uffd events.
>
> Of course, the same would happen if you have multiple MAP_SHARED
> mappings as well ... but it feels a bit weird being able to do that
> without write permissions to the file.

BTW, in a private mapping it would be perfectly fine to always populate
the shared zeropage when reading, or a fresh zero page into the process'
page tables when writing, whenever we find a file hole -- without
touching the file (page cache), to which we might not even have write
permissions, at all.

-- 
Thanks,

David / dhildenb
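[For reference, the double allocation Willy describes at the top of the
thread can be observed from userspace. Below is a minimal sketch, not
from the thread itself: it assumes Linux with memfd_create() (glibc >=
2.27), creates the hole with ftruncate() rather than fallocate(), and
uses a read-only MAP_SHARED probe mapping plus mincore() to check
whether a store through a MAP_PRIVATE mapping also instantiates the
shmem page cache page. The file name "hole-demo" is arbitrary.]

	#define _GNU_SOURCE
	#include <stdio.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		long psz = sysconf(_SC_PAGESIZE);
		unsigned char vec;

		/* Sparse one-page shmem file: the single page is a hole. */
		int fd = memfd_create("hole-demo", 0);
		if (fd < 0 || ftruncate(fd, psz) < 0) {
			perror("setup");
			return 1;
		}

		/* Shared read-only view, only used to probe page cache residency. */
		unsigned char *probe = mmap(NULL, psz, PROT_READ, MAP_SHARED,
					    fd, 0);
		/* Private view we store through, as in the scenario above. */
		unsigned char *priv = mmap(NULL, psz, PROT_READ | PROT_WRITE,
					   MAP_PRIVATE, fd, 0);
		if (probe == MAP_FAILED || priv == MAP_FAILED) {
			perror("mmap");
			return 1;
		}

		mincore(probe, psz, &vec);
		printf("page cache page resident before store: %d\n", vec & 1);

		priv[0] = 0x42;	/* store over the hole -> do_cow_fault() */

		mincore(probe, psz, &vec);
		printf("page cache page resident after store:  %d\n", vec & 1);
		/*
		 * On a kernel without the SGP_READ tweak discussed above,
		 * this should print 0 then 1: the store allocated a
		 * zero-filled page cache page *and* a COWed anonymous page,
		 * although only the latter is ever used through the private
		 * mapping.
		 */
		return 0;
	}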