Date: Fri, 21 Oct 2022 10:01:27 -0400
From: Peter Xu
To: David Hildenbrand
Cc: Matthew Wilcox, linux-mm@kvack.org, Hugh Dickins
Subject: Re: Avoiding allocation of unused shmem page

On Fri, Oct 21, 2022 at 09:23:00AM +0200, David Hildenbrand wrote:
> On 20.10.22 23:10, Peter Xu wrote:
> > On Thu, Oct 20, 2022 at 09:14:09PM +0100, Matthew Wilcox wrote:
> > > In yesterday's call, David brought up the case where we fallocate a file
> > > in shmem, call mmap(MAP_PRIVATE) and then store to a page which is over
> > > a hole.  That currently causes shmem to allocate a page, zero-fill it,
> > > then COW it, resulting in two pages being allocated when only the
> > > COW page really needs to be allocated.
> > >
> > > The path we currently take through the MM when we take the page fault
> > > looks like this (correct me if I'm wrong ...):
> > >
> > > handle_mm_fault()
> > >  __handle_mm_fault()
> > >   handle_pte_fault()
> > >    do_fault()
> > >     do_cow_fault()
> > >      __do_fault()
> > >       vm_ops->fault()
> > >
> > > ... which is where we come into shmem_fault().
> > > Apart from the horrendous hole-punch handling case, shmem_fault() is
> > > quite simple:
> > >
> > >         err = shmem_get_folio_gfp(inode, vmf->pgoff, &folio, SGP_CACHE,
> > >                                   gfp, vma, vmf, &ret);
> > >         if (err)
> > >                 return vmf_error(err);
> > >         vmf->page = folio_file_page(folio, vmf->pgoff);
> > >         return ret;
> > >
> > > What we could do here is detect this case.  Something like:
> > >
> > >         enum sgp_type sgp = SGP_CACHE;
> > >
> > >         if ((vmf->flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED))
> > >                 sgp = SGP_READ;
> >
> > Yes this will start to save the space, but just to mention this may start
> > to break anything that will still depend on the pagecache to work.  E.g.,
> > it'll change behavior if the vma is registered with uffd missing mode;
> > we'll start to lose MISSING events for these private mappings.  Not sure
> > whether there're other side effects.
>
> I don't follow, can you elaborate?
>
> hugetlb doesn't perform this kind of unnecessary allocation and should be
> fine in regards to uffd.  Why should it matter here and how exactly would
> a problematic sequence look like?

Hugetlb is special because hugetlb detects the pte first and relies on the
pte, at least for uffd; shmem does not.  Feel free to also reference the
recent fix that relies on the stable hugetlb pte, commit 2ea7ff1e39cbe375.

> >
> > The zero-page approach will not have such issue as long as the pagecache is
> > still filled with something.
>
> Having the shared zeropage available would be nice.  But I understand that
> it adds quite some complexity -- on write fault, we have to invalidate all
> these zeropage mappings.

Right, I didn't think further than that.  To do so, the write would need a
pagecache update, and we'd need a proper rmap walk to drop the ptes that
reference the zero page while leaving the CoWed pages alone.  What I wanted
to express is only that the other approach will not suffer from this
specific issue as long as it is still pagecache-based.

> > >         err = shmem_get_folio_gfp(inode, vmf->pgoff, &folio, sgp, gfp,
> > >                                   vma, vmf, &ret);
> > >         if (err)
> > >                 return vmf_error(err);
> > >         if (folio)
> > >                 vmf->page = folio_file_page(folio, vmf->pgoff);
> > >         else
> > >                 vmf->page = NULL;
> > >         return ret;
> > >
> > > and change do_cow_fault() like this:
> > >
> > > +++ b/mm/memory.c
> > > @@ -4575,12 +4575,17 @@ static vm_fault_t do_cow_fault(struct vm_fault *vmf)
> > >          if (ret & VM_FAULT_DONE_COW)
> > >                  return ret;
> > >
> > > -        copy_user_highpage(vmf->cow_page, vmf->page, vmf->address, vma);
> > > +        if (vmf->page)
> > > +                copy_user_highpage(vmf->cow_page, vmf->page, vmf->address, vma);
> > > +        else
> > > +                clear_user_highpage(vmf->cow_page, vmf->address);
> > >          __SetPageUptodate(vmf->cow_page);
> > >
> > >          ret |= finish_fault(vmf);
> > > -        unlock_page(vmf->page);
> > > -        put_page(vmf->page);
> > > +        if (vmf->page) {
> > > +                unlock_page(vmf->page);
> > > +                put_page(vmf->page);
> > > +        }
> > >          if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
> > >                  goto uncharge_out;
> > >          return ret;
> > >
> > > ... I wrote the code directly in my email client; definitely not
> > > compile-tested.  But if this situation is causing a real problem for
> > > someone, this would be a quick fix for them.
> > >
> > > Is this a real problem or just intellectual curiosity?
> >
> > For me it's pure curiosity when I was asking this question; I don't have a
> > production environment that can directly benefit from this.
> >
> > For real users I'd expect private shmem will always be mapped on meaningful
> > (aka, non-zero) shared pages just to have their own copy, but no better
> > knowledge than that.
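The uffd MISSING interaction mentioned above can be seen with a small
userfaultfd monitor: register MISSING mode on a private mapping of an empty
memfd, and the first write to a hole is delivered as a page-fault event
before shmem fills the page cache (the SGP_READ path proposed above returns
before that check).  The program below is an illustrative sketch only, not
something posted in this thread; error handling is trimmed and the
single-page size is arbitrary.

#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/userfaultfd.h>

static long page_size;

/* Monitor thread: wait for one MISSING event and resolve it with a copy. */
static void *monitor(void *arg)
{
    int uffd = (int)(long)arg;
    struct uffd_msg msg;

    if (read(uffd, &msg, sizeof(msg)) != sizeof(msg) ||
        msg.event != UFFD_EVENT_PAGEFAULT)
        return NULL;

    printf("MISSING fault at %llx\n",
           (unsigned long long)msg.arg.pagefault.address);

    char *src = calloc(1, page_size);
    struct uffdio_copy copy = {
        .dst = msg.arg.pagefault.address & ~((unsigned long long)page_size - 1),
        .src = (unsigned long)src,
        .len = page_size,
    };
    ioctl(uffd, UFFDIO_COPY, &copy);    /* wakes the faulting thread */
    free(src);
    return NULL;
}

int main(void)
{
    page_size = sysconf(_SC_PAGESIZE);

    /* Empty shmem file mapped private: every page starts as a hole. */
    int fd = memfd_create("hole", 0);
    ftruncate(fd, page_size);
    char *p = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE, fd, 0);

    int uffd = syscall(SYS_userfaultfd, O_CLOEXEC);
    struct uffdio_api api = { .api = UFFD_API };
    ioctl(uffd, UFFDIO_API, &api);

    struct uffdio_register reg = {
        .range = { .start = (unsigned long)p, .len = page_size },
        .mode  = UFFDIO_REGISTER_MODE_MISSING,
    };
    ioctl(uffd, UFFDIO_REGISTER, &reg);

    pthread_t t;
    pthread_create(&t, NULL, monitor, (void *)(long)uffd);

    p[0] = 1;        /* write to the hole: raises a MISSING event today */
    pthread_join(t, NULL);
    return 0;
}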
> There is an easy way to trigger this from QEMU, and we've had
> customers running into this:

Can the customer simply set shared=on?

> $ grep -E "(Anon|Shmem)" /proc/meminfo
> AnonPages:       4097900 kB
> Shmem:           1242364 kB
>
> $ qemu-system-x86_64 -object memory-backend-memfd,id=tmp,share=off,size=4G,prealloc=on -S --nographic &
>
> $ grep -E "(Anon|Shmem)" /proc/meminfo
> AnonPages:       8296696 kB
> Shmem:           5434800 kB
>
> I recall it's fairly easy to get wrong from Libvirt when starting a VM.
>
> We use an empty memfd and map it private.  Each page we touch (especially
> write to) ends up allocating shmem memory.
>
> Note that figuring out the write side ("write to hole via private mapping")
> is only part of the story.  For example, by dumping/migrating the VM
> (reading all memory) we can easily read yet unpopulated memory and allocate
> a shmem page as well; once the VM writes to it, we'd allocate an additional
> private page.
>
> We'd need support for the shared zeropage to handle that better -- which
> would implicitly also handle shared mappings of shmem better --
> dumping/migrating a VM would then not allocate a lot of shmem pages filled
> with zeroes.

Yes.  I have a vague memory that a zeropage for shmem existed for a while
but went away, though I might have that wrong.

-- 
Peter Xu
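The memfd case David describes reduces to a few lines of userspace code.
The sketch below is illustrative only and was not part of this thread; the
256 MiB size is arbitrary, and the effect is easiest to see by comparing
grep -E "(Anon|Shmem)" /proc/meminfo before and after the loop.

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t size = 256UL << 20;                /* arbitrary: 256 MiB of hole */
    long page = sysconf(_SC_PAGESIZE);
    int fd = memfd_create("private-hole", 0); /* empty shmem file */

    if (fd < 0 || ftruncate(fd, size)) {
        perror("memfd_create/ftruncate");
        return 1;
    }

    char *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /*
     * Store into every page of the hole.  Each fault goes through
     * do_cow_fault(): the private copy accounts as AnonPages, and today
     * shmem_fault() also instantiates a zero-filled page cache page,
     * so Shmem grows by the same amount.
     */
    for (size_t off = 0; off < size; off += page)
        p[off] = 1;

    pause();        /* inspect /proc/meminfo while the process sits here */
    return 0;
}

With the SGP_READ change sketched earlier in the thread, only AnonPages
would be expected to grow for this write-only access pattern; the read-only
case (dump/migrate) would still populate Shmem until a shared zeropage for
shmem exists.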