From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 21 Oct 2022 12:01:51 -0400
From: Peter Xu
To: David Hildenbrand
Cc: Matthew Wilcox, linux-mm@kvack.org, Hugh Dickins
Subject: Re: Avoiding allocation of unused shmem page
References: <4e1f4fb4-559e-2be3-c091-40ce0130b6c3@redhat.com> <3b19120b-05f6-10b8-c1af-67d8eb60fea0@redhat.com>
In-Reply-To: <3b19120b-05f6-10b8-c1af-67d8eb60fea0@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
On Fri, Oct 21, 2022 at 05:17:08PM +0200, David Hildenbrand wrote:
> On 21.10.22 17:08, Peter Xu wrote:
> > On Fri, Oct 21, 2022 at 04:45:27PM +0200, David Hildenbrand wrote:
> > > On 21.10.22 16:28, Peter Xu wrote:
> > > > On Fri, Oct 21, 2022 at 04:10:41PM +0200, David Hildenbrand wrote:
> > > > > On 21.10.22 16:01, Peter Xu wrote:
> > > > > > On Fri, Oct 21, 2022 at 09:23:00AM +0200, David Hildenbrand wrote:
> > > > > > > On 20.10.22 23:10, Peter Xu wrote:
> > > > > > > > On Thu, Oct 20, 2022 at 09:14:09PM +0100, Matthew Wilcox wrote:
> > > > > > > > > In yesterday's call, David brought up the case where we
> > > > > > > > > fallocate a file in shmem, call mmap(MAP_PRIVATE) and then
> > > > > > > > > store to a page which is over a hole.  That currently causes
> > > > > > > > > shmem to allocate a page, zero-fill it, then COW it,
> > > > > > > > > resulting in two pages being allocated when only the COW
> > > > > > > > > page really needs to be allocated.
> > > > > > > > >
> > > > > > > > > The path we currently take through the MM when we take the
> > > > > > > > > page fault looks like this (correct me if I'm wrong ...):
> > > > > > > > >
> > > > > > > > > handle_mm_fault()
> > > > > > > > > __handle_mm_fault()
> > > > > > > > > handle_pte_fault()
> > > > > > > > > do_fault()
> > > > > > > > > do_cow_fault()
> > > > > > > > > __do_fault()
> > > > > > > > > vm_ops->fault()
> > > > > > > > >
> > > > > > > > > ... which is where we come into shmem_fault().
> > > > > > > > > Apart from the horrendous hole-punch handling case,
> > > > > > > > > shmem_fault() is quite simple:
> > > > > > > > >
> > > > > > > > > 	err = shmem_get_folio_gfp(inode, vmf->pgoff, &folio, SGP_CACHE,
> > > > > > > > > 				  gfp, vma, vmf, &ret);
> > > > > > > > > 	if (err)
> > > > > > > > > 		return vmf_error(err);
> > > > > > > > > 	vmf->page = folio_file_page(folio, vmf->pgoff);
> > > > > > > > > 	return ret;
> > > > > > > > >
> > > > > > > > > What we could do here is detect this case.  Something like:
> > > > > > > > >
> > > > > > > > > 	enum sgp_type sgp = SGP_CACHE;
> > > > > > > > >
> > > > > > > > > 	if ((vmf->flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED))
> > > > > > > > > 		sgp = SGP_READ;
> > > > > > > >
> > > > > > > > Yes, this will start to save the space, but just to mention
> > > > > > > > that it may start to break anything that still depends on the
> > > > > > > > page cache to work.  E.g., it'll change behavior if the vma is
> > > > > > > > registered in uffd missing mode; we'll start to lose MISSING
> > > > > > > > events for these private mappings.  Not sure whether there are
> > > > > > > > other side effects.
> > > > > > >
> > > > > > > I don't follow, can you elaborate?
> > > > > > >
> > > > > > > hugetlb doesn't perform this kind of unnecessary allocation and
> > > > > > > should be fine in regards to uffd.  Why should it matter here,
> > > > > > > and what exactly would a problematic sequence look like?
> > > > > >
> > > > > > Hugetlb is special because hugetlb detects the pte first and
> > > > > > relies on the pte, at least for uffd; shmem is not.
> > > > > >
> > > > > > Feel free to also reference the recent fix which relies on the
> > > > > > stable hugetlb pte, commit 2ea7ff1e39cbe375.
> > > > >
> > > > > Sorry to be dense here, but I don't follow how that relates.
> > > > >
> > > > > Assume we have a MAP_PRIVATE shmem mapping and someone registers
> > > > > uffd missing events on that mapping.
> > > > >
> > > > > Assume we get a page fault on a hole.
> > > > > We detect no page is mapped and check whether the page cache has a
> > > > > page -- which is also not the case, because there is a hole.
> > > > >
> > > > > So we notify uffd.
> > > > >
> > > > > Uffd will place a page.  It should *not* touch the page cache and
> > > > > only insert that page into the page table -- otherwise we'd be
> > > > > violating MAP_PRIVATE semantics.
> > > >
> > > > That's actually exactly what we do right now... we insert into the
> > > > page cache for shmem.  See shmem_mfill_atomic_pte().
> > > >
> > > > Why would it violate MAP_PRIVATE?  Private pages only guarantee
> > > > exclusive ownership of pages; I don't see why it should restrict uffd
> > > > behavior.  Uffd missing mode (afaiu) is defined to resolve page cache
> > > > misses in this case.  Hugetlb is special, but not shmem IMO, compared
> > > > to most of the rest of the file systems.
> > >
> > > If a write (or uffd placement) via a MAP_PRIVATE mapping results in
> > > other shared/private mappings observing these modifications, you have
> > > a clear violation of MAP_PRIVATE semantics.
> >
> > I think I understand what you meant, but just to mention again that I
> > think it's a matter of how we defined the uffd missing semantics when
> > shmem missing mode was introduced years ago.  It does not need to be
> > the same semantic as writing directly to a private mapping.
>
> I think uffd does exactly the right thing in mfill_atomic_pte:
>
> 	/*
> 	 * The normal page fault path for a shmem will invoke the
> 	 * fault, fill the hole in the file and COW it right away. The
> 	 * result generates plain anonymous memory. So when we are
> 	 * asked to fill an hole in a MAP_PRIVATE shmem mapping, we'll
> 	 * generate anonymous memory directly without actually filling
> 	 * the hole. For the MAP_PRIVATE case the robustness check
> 	 * only happens in the pagetable (to verify it's still none)
> 	 * and not in the radix tree.
> 	 */
> 	if (!(dst_vma->vm_flags & VM_SHARED)) {
> 		if (mode == MCOPY_ATOMIC_NORMAL)
> 			err = mcopy_atomic_pte(dst_mm, dst_pmd, dst_vma,
> 					       dst_addr, src_addr, page,
> 					       wp_copy);
> 		else
> 			err = mfill_zeropage_pte(dst_mm, dst_pmd,
> 						 dst_vma, dst_addr);
> 	} else {
> 		err = shmem_mfill_atomic_pte(dst_mm, dst_pmd, dst_vma,
> 					     dst_addr, src_addr,
> 					     mode != MCOPY_ATOMIC_NORMAL,
> 					     wp_copy, page);
> 	}
>
> Unless we have a writable shared mapping, we end up not touching the
> page cache.
>
> After what I understand from your last message (maybe I understood it
> wrong), I tried exploiting uffd behavior by writing into a hole of a
> file without write permissions using uffd.  I failed because it does the
> right thing ;)

Very interesting to learn this, thanks for the pointer, David. :)  It's
definitely helpful for me to understand the vma security model better.

Though note that it'll be a different topic if we go back to the original
problem we're discussing - the fake-read approach for shmem will still keep
the hole in the file, which will still change behavior by suppressing the
MISSING messages.  That said, I don't really know whether there can be a
real impact on any uffd users, or on anything else that similarly accesses
the file cache.

-- 
Peter Xu