From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
Date: Fri, 21 Oct 2022 09:23:00 +0200
To: Peter Xu, Matthew Wilcox
Cc: linux-mm@kvack.org, Hugh Dickins
Subject: Re: Avoiding allocation of unused shmem page
On 20.10.22 23:10, Peter Xu wrote:
> On Thu, Oct 20, 2022 at 09:14:09PM +0100, Matthew Wilcox wrote:
>> In yesterday's call, David brought up the case where we fallocate a file
>> in shmem, call mmap(MAP_PRIVATE) and then store to a page which is over
>> a hole.  That currently causes shmem to allocate a page, zero-fill it,
>> then COW it, resulting in two pages being allocated when only the
>> COW page really needs to be allocated.
>>
>> The path we currently take through the MM when we take the page fault
>> looks like this (correct me if I'm wrong ...):
>>
>> handle_mm_fault()
>>   __handle_mm_fault()
>>     handle_pte_fault()
>>       do_fault()
>>         do_cow_fault()
>>           __do_fault()
>>             vm_ops->fault()
>>
>> ... which is where we come into shmem_fault().  Apart from the
>> horrendous hole-punch handling case, shmem_fault() is quite simple:
>>
>>         err = shmem_get_folio_gfp(inode, vmf->pgoff, &folio, SGP_CACHE,
>>                                   gfp, vma, vmf, &ret);
>>         if (err)
>>                 return vmf_error(err);
>>         vmf->page = folio_file_page(folio, vmf->pgoff);
>>         return ret;
>>
>> What we could do here is detect this case.  Something like:
>>
>>         enum sgp_type sgp = SGP_CACHE;
>>
>>         if ((vmf->flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED))
>>                 sgp = SGP_READ;
>
> Yes this will start to save the space, but just to mention this may start
> to break anything that will still depend on the pagecache to work.
> E.g., it'll change behavior if the vma is registered with uffd missing
> mode; we'll start to lose MISSING events for these private mappings.
> Not sure whether there're other side effects.

I don't follow, can you elaborate?

hugetlb doesn't perform this kind of unnecessary allocation and should be
fine in regards to uffd. Why should it matter here, and what exactly would
a problematic sequence look like?

>
> The zero-page approach will not have such issue as long as the pagecache is
> still filled with something.

Having the shared zeropage available would be nice. But I understand that
it adds quite some complexity -- on write fault, we have to invalidate all
these zeropage mappings.

>
>>         err = shmem_get_folio_gfp(inode, vmf->pgoff, &folio, sgp, gfp,
>>                                   vma, vmf, &ret);
>>         if (err)
>>                 return vmf_error(err);
>>         if (folio)
>>                 vmf->page = folio_file_page(folio, vmf->pgoff);
>>         else
>>                 vmf->page = NULL;
>>         return ret;
>>
>> and change do_cow_fault() like this:
>>
>> +++ b/mm/memory.c
>> @@ -4575,12 +4575,17 @@ static vm_fault_t do_cow_fault(struct vm_fault *vmf)
>>  	if (ret & VM_FAULT_DONE_COW)
>>  		return ret;
>>
>> -	copy_user_highpage(vmf->cow_page, vmf->page, vmf->address, vma);
>> +	if (vmf->page)
>> +		copy_user_highpage(vmf->cow_page, vmf->page, vmf->address, vma);
>> +	else
>> +		clear_user_highpage(vmf->cow_page, vmf->address);
>>  	__SetPageUptodate(vmf->cow_page);
>>
>>  	ret |= finish_fault(vmf);
>> -	unlock_page(vmf->page);
>> -	put_page(vmf->page);
>> +	if (vmf->page) {
>> +		unlock_page(vmf->page);
>> +		put_page(vmf->page);
>> +	}
>>  	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
>>  		goto uncharge_out;
>>  	return ret;
>>
>> ... I wrote the code directly in my email client; definitely not
>> compile-tested.  But if this situation is causing a real problem for
>> someone, this would be a quick fix for them.
>>
>> Is this a real problem or just intellectual curiosity?
>
> For me it's pure curiosity when I was asking this question; I don't have a
> production environment that can directly benefit from this.
>
> For real users I'd expect private shmem will always be mapped on meaningful
> (aka, non-zero) shared pages just to have their own copy, but no better
> knowledge than that.

There is an easy way to trigger this from QEMU, and we've had customers
running into this:

$ grep -E "(Anon|Shmem)" /proc/meminfo
AnonPages:       4097900 kB
Shmem:           1242364 kB
$ qemu-system-x86_64 -object memory-backend-memfd,id=tmp,share=off,size=4G,prealloc=on -S --nographic &
$ grep -E "(Anon|Shmem)" /proc/meminfo
AnonPages:       8296696 kB
Shmem:           5434800 kB

I recall it's fairly easy to get wrong from Libvirt when starting a VM.

We use an empty memfd and map it private. Each page we touch (especially
write to) ends up allocating shmem memory.

Note that figuring out the write side ("write to hole via private mapping")
is only part of the story. For example, by dumping/migrating the VM
(reading all memory) we can easily read yet-unpopulated memory and allocate
a shmem page as well; once the VM would write to it, we'd allocate an
additional private page.

We'd need support for the shared zeropage to handle that better -- which
would implicitly also handle shared mappings of shmem better --
dumping/migrating a VM would not allocate a lot of shmem pages filled with
zeroes.

-- 
Thanks,

David / dhildenb