From: Florian Weimer <fw@deneb.enyo.de>
To: Lorenzo Stoakes
Cc: Andrew Morton, Suren Baghdasaryan, Liam R. Howlett, Matthew Wilcox,
 Vlastimil Babka, Paul E. McKenney, Jann Horn, David Hildenbrand,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, Muchun Song,
 Richard Henderson, Ivan Kokshaysky, Matt Turner, Thomas Bogendoerfer,
 James E. J. Bottomley, Helge Deller, Chris Zankel, Max Filippov,
 Arnd Bergmann, linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org,
 linux-parisc@vger.kernel.org, linux-arch@vger.kernel.org, Shuah Khan,
 Christian Brauner, linux-kselftest@vger.kernel.org, Sidhartha Kumar,
 Jeff Xu, Christoph Hellwig, linux-api@vger.kernel.org, John Hubbard
Subject: Re: [PATCH v2 0/5] implement lightweight guard pages
Date: Sun, 20 Oct 2024 19:37:54 +0200
Message-ID: <87a5eysmj1.fsf@mid.deneb.enyo.de>
In-Reply-To: (Lorenzo Stoakes's message of "Sun, 20 Oct 2024 17:20:00 +0100")

* Lorenzo Stoakes:

> Early testing of the prototype version of this code suggests a 5 times
> speed up in memory mapping invocations (in conjunction with use of
> process_madvise()) and a 13% reduction in VMAs on an entirely idle android
> system and unoptimised code.
>
> We expect with optimisation and a loaded system with a larger number of
> guard pages this could significantly increase, but in any case these
> numbers are encouraging.
>
> This way, rather than having separate VMAs specifying which parts of a
> range are guard pages, instead we have a VMA spanning the entire range of
> memory a user is permitted to access and including ranges which are to be
> 'guarded'.
>
> After mapping this, a user can specify which parts of the range should
> result in a fatal signal when accessed.
>
> By restricting the ability to specify guard pages to memory mapped by
> existing VMAs, we can rely on the mappings being torn down when the
> mappings are ultimately unmapped and everything works simply as if the
> memory were not faulted in, from the point of view of the containing VMAs.

We have a glibc (so not Android) dynamic linker bug that asks us to
remove PROT_NONE mappings in mapped shared objects:

  "Extra struct vm_area_struct with ---p created when
   PAGE_SIZE < max-page-size"

It's slightly different from a guard page because our main goal is to
prevent other mappings from ending up in those gaps, which has been
shown to cause odd application behavior when it happens.
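For illustration, the extra "---p" VMA can be reproduced with
something along these lines (a sketch with made-up sizes, using
anonymous memory in place of the file-backed segments the dynamic
linker actually maps):

#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t len = 128 * 1024;    /* whole load range of the object */
    size_t gap_off = 4 * 1024;  /* end of the first 4 KiB segment */
    size_t gap_len = 60 * 1024; /* padding up to 64 KiB alignment */

    /* Reserve the full range, as the dynamic linker does. */
    char *base = mmap(NULL, len, PROT_READ | PROT_EXEC,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED)
        return 1;

    /* Over-protect the alignment gap.  This splits the range into
       three VMAs; the middle one shows up as "---p" in
       /proc/self/maps. */
    if (mprotect(base + gap_off, gap_len, PROT_NONE) != 0)
        return 1;

    pause();  /* inspect /proc/<pid>/maps here to see the split */
    return 0;
}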
If I understand the series correctly, the kernel would not
automatically attribute those PROT_NONE gaps to the previous or
subsequent mapping.  We would have to extend one of the surrounding
mappings and apply MADV_POISON to the over-mapped part.  That doesn't
seem too onerous.  Could the ELF loader in the kernel do the same
thing for the main executable and the program loader?
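Roughly, that over-mapping might look like this (a sketch only:
MADV_GUARD_POISON is the name the advice has in v2 of the series, and
its numeric value is taken from the patches, so both are assumptions
until the series lands; guard_gap() is a hypothetical helper, not
part of glibc or the kernel):

#include <sys/mman.h>

#ifndef MADV_GUARD_POISON
#define MADV_GUARD_POISON 102  /* value from the series; not in
                                  released kernel headers */
#endif

static int guard_gap(char *gap, size_t gap_len)
{
    /* Extend the accessible range over the gap so that no unrelated
       mapping can be placed there ... */
    if (mmap(gap, gap_len, PROT_READ,
             MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0) == MAP_FAILED)
        return -1;

    /* ... and mark it as a guard region: accesses raise a fatal
       signal, and the pages are never faulted in. */
    return madvise(gap, gap_len, MADV_GUARD_POISON);
}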