Message-ID: <3b19120b-05f6-10b8-c1af-67d8eb60fea0@redhat.com>
Date: Fri, 21 Oct 2022 17:17:08 +0200
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
To: Peter Xu
Cc: Matthew Wilcox, linux-mm@kvack.org, Hugh Dickins
Subject: Re: Avoiding allocation of unused shmem page
References: <4e1f4fb4-559e-2be3-c091-40ce0130b6c3@redhat.com>

On 21.10.22 17:08, Peter Xu wrote:
> On Fri, Oct 21, 2022 at 04:45:27PM +0200, David Hildenbrand wrote:
>> On 21.10.22 16:28, Peter Xu wrote:
>>> On Fri, Oct 21, 2022 at 04:10:41PM +0200, David Hildenbrand wrote:
>>>> On 21.10.22 16:01, Peter Xu wrote:
>>>>> On Fri, Oct 21, 2022 at 09:23:00AM +0200, David Hildenbrand wrote:
>>>>>> On 20.10.22 23:10, Peter Xu wrote:
>>>>>>> On Thu, Oct 20, 2022 at 09:14:09PM +0100, Matthew Wilcox wrote:
>>>>>>>> In yesterday's call, David brought up the case where we
>>>>>>>> fallocate a file in shmem, call mmap(MAP_PRIVATE) and then
>>>>>>>> store to a page which is over a hole.
>>>>>>>> That currently causes shmem to allocate a page, zero-fill it,
>>>>>>>> then COW it, resulting in two pages being allocated when only
>>>>>>>> the COW page really needs to be allocated.
>>>>>>>>
>>>>>>>> The path we currently take through the MM when we take the page
>>>>>>>> fault looks like this (correct me if I'm wrong ...):
>>>>>>>>
>>>>>>>> handle_mm_fault()
>>>>>>>>  __handle_mm_fault()
>>>>>>>>   handle_pte_fault()
>>>>>>>>    do_fault()
>>>>>>>>     do_cow_fault()
>>>>>>>>      __do_fault()
>>>>>>>>       vm_ops->fault()
>>>>>>>>
>>>>>>>> ... which is where we come into shmem_fault().  Apart from the
>>>>>>>> horrendous hole-punch handling case, shmem_fault() is quite
>>>>>>>> simple:
>>>>>>>>
>>>>>>>>         err = shmem_get_folio_gfp(inode, vmf->pgoff, &folio, SGP_CACHE,
>>>>>>>>                                   gfp, vma, vmf, &ret);
>>>>>>>>         if (err)
>>>>>>>>                 return vmf_error(err);
>>>>>>>>         vmf->page = folio_file_page(folio, vmf->pgoff);
>>>>>>>>         return ret;
>>>>>>>>
>>>>>>>> What we could do here is detect this case.  Something like:
>>>>>>>>
>>>>>>>>         enum sgp_type sgp = SGP_CACHE;
>>>>>>>>
>>>>>>>>         if ((vmf->flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED))
>>>>>>>>                 sgp = SGP_READ;
>>>>>>>
>>>>>>> Yes, this will start to save the space, but just to mention, this
>>>>>>> may start to break anything that still depends on the page cache
>>>>>>> to work. E.g., it'll change behavior if the vma is registered
>>>>>>> with uffd missing mode; we'll start to lose MISSING events for
>>>>>>> these private mappings. Not sure whether there are other side
>>>>>>> effects.
>>>>>>
>>>>>> I don't follow, can you elaborate?
>>>>>>
>>>>>> hugetlb doesn't perform this kind of unnecessary allocation and
>>>>>> should be fine in regards to uffd. Why should it matter here, and
>>>>>> how exactly would a problematic sequence look?
>>>>>
>>>>> Hugetlb is special because hugetlb detects the pte first and relies
>>>>> on the pte, at least for uffd. shmem does not.
>>>>>
>>>>> Feel free to also reference the recent fix which relies on the
>>>>> stable hugetlb pte, commit 2ea7ff1e39cbe375.
>>>>
>>>> Sorry to be dense here, but I don't follow how that relates.
>>>>
>>>> Assume we have a MAP_PRIVATE shmem mapping and someone registers
>>>> uffd missing events on that mapping.
>>>>
>>>> Assume we get a page fault on a hole. We detect no page is mapped
>>>> and check if the page cache has a page mapped -- which is also not
>>>> the case, because there is a hole.
>>>>
>>>> So we notify uffd.
>>>>
>>>> Uffd will place a page. It should *not* touch the page cache and
>>>> only insert that page into the page table -- otherwise we'd be
>>>> violating MAP_PRIVATE semantics.
>>>
>>> That's actually exactly what we do right now ... we insert into the
>>> page cache for shmem. See shmem_mfill_atomic_pte().
>>>
>>> Why does it violate MAP_PRIVATE? Private pages only guarantee
>>> exclusive ownership of pages; I don't see why that should restrict
>>> uffd behavior. Uffd missing mode (afaiu) is defined to resolve page
>>> cache misses in this case. Hugetlb is special, but shmem is not,
>>> IMO, compared to most of the rest of the file systems.
>>
>> If a write (or uffd placement) via a MAP_PRIVATE mapping results in
>> other shared/private mappings observing these modifications, you
>> have a clear violation of MAP_PRIVATE semantics.
>
> I think I understand what you meant, but just to mention again that I
> think it's a matter of how we defined the uffd missing semantics when
> shmem missing mode was introduced years ago. It does not need to be
> the same semantic as writing directly to a private mapping.
I think uffd does exactly the right thing in mfill_atomic_pte():

	/*
	 * The normal page fault path for a shmem will invoke the
	 * fault, fill the hole in the file and COW it right away. The
	 * result generates plain anonymous memory. So when we are
	 * asked to fill an hole in a MAP_PRIVATE shmem mapping, we'll
	 * generate anonymous memory directly without actually filling
	 * the hole. For the MAP_PRIVATE case the robustness check
	 * only happens in the pagetable (to verify it's still none)
	 * and not in the radix tree.
	 */
	if (!(dst_vma->vm_flags & VM_SHARED)) {
		if (mode == MCOPY_ATOMIC_NORMAL)
			err = mcopy_atomic_pte(dst_mm, dst_pmd, dst_vma,
					       dst_addr, src_addr, page,
					       wp_copy);
		else
			err = mfill_zeropage_pte(dst_mm, dst_pmd,
						 dst_vma, dst_addr);
	} else {
		err = shmem_mfill_atomic_pte(dst_mm, dst_pmd, dst_vma,
					     dst_addr, src_addr,
					     mode != MCOPY_ATOMIC_NORMAL,
					     wp_copy, page);
	}

Unless we have a writable shared mapping, we end up not touching the
page cache.

Based on what I understood from your last message (maybe I understood
it wrong), I tried exploiting uffd behavior by writing into a hole of a
file, without write permissions, using uffd. I failed, because it does
the right thing ;) (a sketch of such an experiment follows below)

-- 
Thanks,

David / dhildenb
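(For reference, a minimal userspace sketch of the kind of experiment
described above -- the actual test program was not posted, so this is a
reconstruction under assumptions: a sparse tmpfs-backed file at the
hypothetical path /tmp/shmem-hole that the caller may open read-only,
userfaultfd usable by the caller, and most error handling omitted.)

	#include <fcntl.h>
	#include <linux/userfaultfd.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	int main(void)
	{
		long page = sysconf(_SC_PAGESIZE);

		/* Hypothetical sparse file; offset 0 is a hole. */
		int fd = open("/tmp/shmem-hole", O_RDONLY);

		/* PROT_WRITE is allowed on an O_RDONLY fd for MAP_PRIVATE. */
		char *map = mmap(NULL, page, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE, fd, 0);

		int uffd = syscall(__NR_userfaultfd, O_CLOEXEC);
		struct uffdio_api api = { .api = UFFD_API };
		ioctl(uffd, UFFDIO_API, &api);

		struct uffdio_register reg = {
			.range = { .start = (unsigned long)map, .len = page },
			.mode = UFFDIO_REGISTER_MODE_MISSING,
		};
		ioctl(uffd, UFFDIO_REGISTER, &reg);

		/* Place a non-zero page "into the hole" via uffd. */
		char *src = aligned_alloc(page, page);
		memset(src, 0xaa, page);
		struct uffdio_copy copy = {
			.dst = (unsigned long)map,
			.src = (unsigned long)src,
			.len = page,
		};
		ioctl(uffd, UFFDIO_COPY, &copy);

		/* The private mapping observes the placed page ... */
		printf("mapping: %#x\n", (unsigned char)map[0]);

		/*
		 * ... but the file itself must still read back zeroes,
		 * because the page was placed as anonymous memory in the
		 * page table and not inserted into the page cache.
		 */
		char buf[1] = { 0 };
		pread(fd, buf, 1, 0);
		printf("file:    %#x\n", (unsigned char)buf[0]);

		return 0;
	}

If uffd placement leaked into the page cache here, the pread() would
return 0xaa; with the MAP_PRIVATE case going through
mcopy_atomic_pte(), it returns 0.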