From: David Hildenbrand <david@redhat.com>
To: Lorenzo Stoakes
Cc: Vlastimil Babka, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton, Mike Rapoport, "Liam R. Howlett"
Subject: Re: [PATCH] mm/mprotect: allow unfaulted VMAs to be unaccounted on mprotect()
Date: Tue, 27 Jun 2023 11:13:59 +0200
Message-ID: <57c677d1-9809-966e-5254-f01f59eff7bc@redhat.com>
In-Reply-To: <40cd965f-ba4f-44d8-8e7c-d6267b91a9b3@lucifer.local>
References: <20230626204612.106165-1-lstoakes@gmail.com> <074fc253-beb4-f7be-14a1-ee5f4745c15b@suse.cz> <1832a772-93b4-70ad-3a2c-d8da104ce8dd@redhat.com> <40cd965f-ba4f-44d8-8e7c-d6267b91a9b3@lucifer.local>
Organization: Red Hat
On 27.06.23 10:49, Lorenzo Stoakes wrote:
> On Tue, Jun 27, 2023 at 08:59:33AM +0200, David Hildenbrand wrote:
>> Hi all,
>>
>> On 27.06.23 08:28, Vlastimil Babka wrote:
>>> On 6/26/23 22:46, Lorenzo Stoakes wrote:
>>>> When mprotect() is used to make unwritable VMAs writable, they have the
>>>> VM_ACCOUNT flag applied and memory accounted accordingly.
>>>>
>>>> If the VMA has had no pages faulted in and is then made unwritable once
>>>> again, it will remain accounted for, despite not being capable of
>>>> extending memory usage.
>>>>
>>>> Consider:-
>>>>
>>>> ptr = mmap(NULL, page_size * 3, PROT_READ, MAP_ANON | MAP_PRIVATE, -1, 0);
>>>> mprotect(ptr + page_size, page_size, PROT_READ | PROT_WRITE);
>>>> mprotect(ptr + page_size, page_size, PROT_READ);
>>>
>>> In Mike's original example there were actual pages populated; in that
>>> case we still won't merge the VMAs, right? Guess that can't be helped.
>>>
>>
>> I am clearly missing the motivation for this patch: the above is an
>> artificial reproducer that I cannot really imagine being relevant in
>> practice.
>
> I cc'd you on this patch exactly because I knew you'd scrutinise it
> greatly :)
>

Yeah, and that needs time, and you have to motivate me :)

> Well, the motivator for the initial investigation was rppt playing with
> R[WO]X (this came from an #mm IRC conversation), however in his case he
> will be mapping pages between the two.
And that's the scenario I think we care about in practice (actually
accessing memory).

>
> (apologies to rppt, I forgot to add the Reported-by...)
>
>>
>> So is there any sane workload that does random mprotect() without even
>> touching memory once? Sure, fuzzing, ... artificial reproducers ... but is
>> there any real-life problem we're solving here?
>>
>> IOW, why did you (Lorenzo) invest time optimizing for this and crafting
>> this patch, and why should reviewers invest time to understand whether
>> it's correct? :)
>>
>
> So why I (that Stoakes guy) invested time here was, well, I had chased
> down the issue for rppt out of curiosity, and 'proved' the point by making
> essentially this patch.
>
> I dug into it further and (as the patch message alludes to) have convinced
> myself that this is safe, so essentially why NOT submit it :)
>
> In real-use scenarios, yes, fuzzers are a thing, but what comes to mind
> more immediately is a process that maps a big chunk of virtual memory
> PROT_NONE and uses that as part of an internal allocator.
>
> If the process then allocates memory from this chunk (mprotect() ->
> PROT_READ | PROT_WRITE), which then gets freed without being used
> (mprotect() -> PROT_NONE), we hit the issue. For OVERCOMMIT_NEVER this
> could become quite an issue, more so than the VMA fragmentation.

Using mprotect() when allocating/freeing memory in an allocator is
already horribly harmful for performance (well, and for the #VMAs), so I
don't think that scenario is relevant in practice.

What some allocators (IIRC even glibc) do is reserve a bigger area with
PROT_NONE and grow the accessible part slowly on demand, discarding
freed memory using MADV_DONTNEED. So you essentially end up with two
VMAs -- one completely accessible, one completely inaccessible.
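That reserve-and-grow pattern -- reserve once with PROT_NONE, extend the accessible prefix with a single mprotect() as demand grows, and hand freed pages back with MADV_DONTNEED rather than flipping protections per allocation -- can be sketched in a few lines. This is purely my illustration; the `arena_*` names are invented and not taken from glibc or any real allocator:

```c
/* Sketch of the reserve-and-grow pattern described above. Hypothetical
 * names; error handling kept minimal for brevity. */
#define _GNU_SOURCE
#include <assert.h>
#include <stddef.h>
#include <sys/mman.h>

struct arena {
    char *base;       /* start of the reserved region */
    size_t reserved;  /* total PROT_NONE reservation */
    size_t used;      /* size of the accessible prefix */
};

int arena_create(struct arena *a, size_t reserve)
{
    /* One big inaccessible reservation: consumes address space only. */
    a->base = mmap(NULL, reserve, PROT_NONE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (a->base == MAP_FAILED)
        return -1;
    a->reserved = reserve;
    a->used = 0;
    return 0;
}

/* Grow the accessible prefix. Because we always re-protect the whole
 * prefix, the region never holds more than two VMAs: one R/W, one
 * PROT_NONE. */
void *arena_grow(struct arena *a, size_t len)
{
    if (a->used + len > a->reserved)
        return NULL;
    if (mprotect(a->base, a->used + len, PROT_READ | PROT_WRITE) != 0)
        return NULL;
    void *p = a->base + a->used;
    a->used += len;
    return p;
}

/* Free the tail by discarding page contents, not by changing
 * protections, so the accessible VMA is never split. len must be
 * page-aligned for MADV_DONTNEED. */
int arena_shrink_tail(struct arena *a, size_t len)
{
    if (len > a->used)
        return -1;
    a->used -= len;
    return madvise(a->base + a->used, len, MADV_DONTNEED);
}
```

Since growth only ever extends the accessible prefix and freeing never touches protections, allocating and freeing any number of times leaves exactly the two VMAs mentioned above.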
They don't use mprotect() because:
(a) It's bad for performance
(b) It might increase the #VMAs

There is efence, but I remember it simply does mmap()+munmap() and runs
into VMA limits easily just by relying on a lot of mappings.

>
> In addition, I think a user simply doing the artificial test above would
> find the remaining split quite confusing, and somebody debugging code
> like this would equally wonder why it happened, so there is benefit in
> clarity too (they of course observing the VMA fragmentation from the
> perspective of /proc/$pid/[s]maps).

My answer would have been "memory gets committed the first time we allow
write access, and that wasn't the case for all memory in that range".

Now, take your example above and touch the memory:

ptr = mmap(NULL, page_size * 3, PROT_READ, MAP_ANON | MAP_PRIVATE, -1, 0);
mprotect(ptr + page_size, page_size, PROT_READ | PROT_WRITE);
*(ptr + page_size) = 1;
mprotect(ptr + page_size, page_size, PROT_READ);

And we'll not merge the VMAs. Which, at least to me, makes the existing
handling more consistent. And users could rightfully wonder "why isn't
it getting merged?". And the answer would be the same: "memory gets
committed the first time we allow write access, and that wasn't the case
for all memory in that range".

>
> I believe given we hold a very strong lock (write on mm->mmap_lock) and
> that vma->anon_vma being NULL really does seem to imply no pages have
> been allocated, that this is therefore a safe thing to do and worthwhile.

Do we have to care about the VMA locks now that page faults can be served
without holding the mmap_lock in write mode?

[...]

>>> So in practice programs will likely do the PROT_WRITE in order to
>>> actually populate the area, so this won't trigger as I commented above.
>>> But it can still help in some cases and is cheap to do, so:
>>
>> IMHO we should much rather look into getting hugetlb ranges merged. My
>> recollection is that we'll never end up merging hugetlb VMAs once split.
>
> I'm not sure how that's relevant to fragmented non-hugetlb VMAs though?

It's a VMA merging issue that can be hit in practice, so I raised it.

No strong opinion from my side, just my 2 cents reading the patch
description and wondering "why do we even invest time thinking about
this case" -- and eventually making the handling less consistent, IMHO
(see above).

-- 
Cheers,

David / dhildenb