From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 21 Oct 2024 22:45:58 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 3/5] mm: madvise: implement lightweight guard page
 mechanism
Content-Language: en-US
From: Vlastimil Babka
To: Lorenzo Stoakes
Cc: Andrew Morton, Suren Baghdasaryan, Liam R. Howlett, Matthew Wilcox,
 Paul E. McKenney, Jann Horn, David Hildenbrand, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Muchun Song, Richard Henderson,
 Ivan Kokshaysky, Matt Turner, Thomas Bogendoerfer, James E. J. Bottomley,
 Helge Deller, Chris Zankel, Max Filippov, Arnd Bergmann,
 linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org,
 linux-parisc@vger.kernel.org, linux-arch@vger.kernel.org, Shuah Khan,
 Christian Brauner, linux-kselftest@vger.kernel.org, Sidhartha Kumar,
 Jeff Xu, Christoph Hellwig, linux-api@vger.kernel.org, John Hubbard
References: <393b0932-1c52-4d59-9466-e5e6184a7daf@lucifer.local>
In-Reply-To: <393b0932-1c52-4d59-9466-e5e6184a7daf@lucifer.local>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 10/21/24 22:27, Lorenzo Stoakes wrote:
> On Mon, Oct 21, 2024 at 10:11:29PM +0200, Vlastimil Babka wrote:
>> On 10/20/24 18:20, Lorenzo Stoakes wrote:
>> > Implement a new lightweight guard page feature, that is regions of userland
>> > virtual memory that, when accessed, cause a fatal signal to arise.
>> >
>> > Currently users must establish PROT_NONE ranges to achieve this.
>> >
>> > However this is very costly memory-wise - we need a VMA for each and every
>> > one of these regions AND they become unmergeable with surrounding VMAs.
>> >
>> > In addition repeated mmap() calls require repeated kernel context switches
>> > and contention of the mmap lock to install these ranges, potentially also
>> > having to unmap memory if installed over existing ranges.
>> >
>> > The lightweight guard approach eliminates the VMA cost altogether - rather
>> > than establishing a PROT_NONE VMA, it operates at the level of page table
>> > entries - poisoning PTEs such that accesses to them cause a fault followed
>> > by a SIGSEGV signal being raised.
>> >
>> > This is achieved through the PTE marker mechanism, which a previous commit
>> > in this series extended to permit this to be done, installed via the
>> > generic page walking logic, also extended by a prior commit for this
>> > purpose.
>> >
>> > These poison ranges are established with MADV_GUARD_POISON, and if the
>> > range in which they are installed contains any existing mappings, they will
>> > be zapped, i.e. the range is freed and the memory unmapped (thus mimicking
>> > the behaviour of MADV_DONTNEED in this respect).
>> >
>> > Any existing poison entries will be left untouched. There is no nesting of
>> > poisoned pages.
>> >
>> > Poisoned ranges are NOT cleared by MADV_DONTNEED, as this would be rather
>> > unexpected behaviour, but are cleared on process teardown or unmapping of
>> > memory ranges.
>> >
>> > Ranges can have the poison property removed by MADV_GUARD_UNPOISON -
>> > 'remedying' the poisoning. The ranges over which this is applied, should
>> > they contain non-poison entries, will be untouched - only poison entries
>> > will be cleared.
>> >
>> > We permit this operation on anonymous memory only, and only VMAs which are
>> > non-special, non-huge and not mlock()'d (if we permitted this we'd have to
>> > drop locked pages which would be rather counterintuitive).
>> >
>> > Suggested-by: Vlastimil Babka
>> > Suggested-by: Jann Horn
>> > Suggested-by: David Hildenbrand
>> > Signed-off-by: Lorenzo Stoakes
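
As an aside, the behaviour described in the changelog can be exercised from
userspace roughly as below. This is only a minimal sketch, not one of the
series' selftests, and the fallback MADV_GUARD_* values are illustrative
placeholders - the real definitions come from the patched uapi headers:

#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MADV_GUARD_POISON
#define MADV_GUARD_POISON	102	/* placeholder, use patched headers */
#define MADV_GUARD_UNPOISON	103	/* placeholder, use patched headers */
#endif

static sigjmp_buf env;

static void segv_handler(int sig)
{
	siglongjmp(env, 1);
}

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;
	signal(SIGSEGV, segv_handler);

	/* Install a guard page; any access should now raise SIGSEGV. */
	if (madvise(p, page, MADV_GUARD_POISON))
		return 1;

	if (sigsetjmp(env, 1) == 0) {
		*p = 1;		/* expected to fault on the PTE marker */
		puts("BUG: guard page did not fault");
	} else {
		puts("guard page faulted as expected");
	}

	/* Clear the marker; the page then faults in as normal anon memory. */
	if (madvise(p, page, MADV_GUARD_UNPOISON))
		return 1;
	*p = 1;
	puts("write succeeded after unpoison");
	return 0;
}
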
>>
>>
>> > +static long madvise_guard_poison(struct vm_area_struct *vma,
>> > +				 struct vm_area_struct **prev,
>> > +				 unsigned long start, unsigned long end)
>> > +{
>> > +	long err;
>> > +
>> > +	*prev = vma;
>> > +	if (!is_valid_guard_vma(vma, /* allow_locked = */false))
>> > +		return -EINVAL;
>> > +
>> > +	/*
>> > +	 * If we install poison markers, then the range is no longer
>> > +	 * empty from a page table perspective and therefore it's
>> > +	 * appropriate to have an anon_vma.
>> > +	 *
>> > +	 * This ensures that on fork, we copy page tables correctly.
>> > +	 */
>> > +	err = anon_vma_prepare(vma);
>> > +	if (err)
>> > +		return err;
>> > +
>> > +	/*
>> > +	 * Optimistically try to install the guard poison pages first. If any
>> > +	 * non-guard pages are encountered, give up and zap the range before
>> > +	 * trying again.
>> > +	 */
>>
>> Should the page walker become powerful enough to handle this in one go? :)
>
> I can tell you've not read previous threads...

Whoops, you're right, I did read v1 but forgot about the RFC. But we can
assume people who'll only see the code after it's merged will not have read
it either, so since a potentially endless loop could look suspicious,
expanding the comment to explain why it's fine wouldn't hurt?

> I've addressed this in discussion with Jann - we'd have to do a full fat
> huge comprehensive thing to do an in-place replace.
>
> It'd either have to be fully duplicative of the multiple copies of the very
> lengthy code to do this sort of thing right (some in mm/madvise.c itself)
> or I'd have to go off and do a totally new pre-requisite series
> centralising that in a way that people probably wouldn't accept... I'm not
> sure the benefits outweigh the costs.
>
>> But sure, if it's too big a task to teach it to zap ptes with all the tlb
>> flushing etc (I assume it's something page walkers don't do today), it makes
>> sense to do it this way.
>> Or we could require userspace to zap first (MADV_DONTNEED), but that would
>> unnecessarily mean extra syscalls for the use case of an allocator debug
>> mode that wants to turn freed memory to guards to catch use after free.
>> So this seems like a good compromise...
>
> This is optimistic as the comment says, you very often won't need to do
> this, so we do a little extra work in the case that you need to zap,
> vs. the more likely case that you don't.
>
> In the face of racing faults, which we can't reasonably prevent without
> taking the mmap lock for write _and_ the VMA lock, which is an egregious
> requirement, this wouldn't really save us anything anyway.

OK.

>>
>> > +	while (true) {
>> > +		/* Returns < 0 on error, == 0 if success, > 0 if zap needed. */
>> > +		err = walk_page_range_mm(vma->vm_mm, start, end,
>> > +					 &guard_poison_walk_ops, NULL);
>> > +		if (err <= 0)
>> > +			return err;
>> > +
>> > +		/*
>> > +		 * OK some of the range have non-guard pages mapped, zap
>> > +		 * them. This leaves existing guard pages in place.
>> > +		 */
>> > +		zap_page_range_single(vma, start, end - start, NULL);
>>
>> ... however the potentially endless loop doesn't seem great. Could a
>> malicious program keep refaulting the range (ignoring any segfaults if it
>> loses a race) with one thread while failing to make progress here with
>> another thread? Is that ok because it would only punish itself?
>
> Sigh. Again, I don't think you've read the previous series, have you? Or
> even the changelog... I added this as Jann asked for it. Originally we'd
> return -EAGAIN if we got raced. See the discussion over in v1 for details.
>
> I did it that way specifically to avoid such things, but Jann didn't appear
> to think it was a problem.

If Jann is fine with this then it must be secure enough.

>> I mean if we have to retry the guard page installation more than once, it
>> means the program *is* racing faults with installing guard ptes in the same
>> range, right? So it would be right to segfault it. But I guess when we
>> detect it here, we have no way to send the signal to the right thread and it
>> would be too late? So unless we can do the PTE zap+install marker
>> atomically, maybe there's no better way, and this is acceptable as a
>> malicious program can harm only itself?
>
> Yup, you'd only be hurting yourself. I went over this with Jann, who didn't
> appear to have a problem with this approach from a security perspective; in
> fact he explicitly asked me to do this :)
>
>> Maybe it would be just simpler to install the marker via zap_details rather
>> than the pagewalk?
>
> Ah, the inevitable 'please completely rework how you do everything' comment
> I was expecting at some point :)

Job security :) j/k

> Obviously I've considered this (and a number of other approaches); it would
> fundamentally change what zap is - right now, if it can't traverse a page
> table level, that's job done (its job is to remove PTEs, not create them).
>
> We'd instead have to completely rework the logic to be able to _install_
> page tables and then carefully check we don't break anything and only do it
> in the specific cases we need.
>
> Or we could add a mode that says 'replace with a poison marker' rather than
> zap, but that has potential TLB concerns, splits it across two operations
> (installation and zapping), and then could you really be sure that there
> isn't a really, really badly timed race that would mean you'd have to loop
> again?
>
> Right now it's simple, elegant, small and we can only make ourselves
> wait. I don't think this is a huge problem.
>
> I think I'll need an actual security/DoS-based justification to change this.
>
>>
>> > +
>> > +		if (fatal_signal_pending(current))
>> > +			return -EINTR;
>> > +		cond_resched();
>> > +	}
>> > +}
>> > +
>> > +static int guard_unpoison_pte_entry(pte_t *pte, unsigned long addr,
>> > +				    unsigned long next, struct mm_walk *walk)
>> > +{
>> > +	pte_t ptent = ptep_get(pte);
>> > +
>> > +	if (is_guard_pte_marker(ptent)) {
>> > +		/* Simply clear the PTE marker. */
>> > +		pte_clear_not_present_full(walk->mm, addr, pte, false);
>> > +		update_mmu_cache(walk->vma, addr, pte);
>> > +	}
>> > +
>> > +	return 0;
>> > +}
>> > +
>> > +static const struct mm_walk_ops guard_unpoison_walk_ops = {
>> > +	.pte_entry	= guard_unpoison_pte_entry,
>> > +	.walk_lock	= PGWALK_RDLOCK,
>> > +};
>> > +
>> > +static long madvise_guard_unpoison(struct vm_area_struct *vma,
>> > +				   struct vm_area_struct **prev,
>> > +				   unsigned long start, unsigned long end)
>> > +{
>> > +	*prev = vma;
>> > +	/*
>> > +	 * We're ok with unpoisoning mlock()'d ranges, as this is a
>> > +	 * non-destructive action.
>> > +	 */
>> > +	if (!is_valid_guard_vma(vma, /* allow_locked = */true))
>> > +		return -EINVAL;
>> > +
>> > +	return walk_page_range(vma->vm_mm, start, end,
>> > +			       &guard_unpoison_walk_ops, NULL);
>> > +}
>> > +
>> >  /*
>> >   * Apply an madvise behavior to a region of a vma.  madvise_update_vma
>> >   * will handle splitting a vm area into separate areas, each area with its own
>> > @@ -1098,6 +1260,10 @@ static int madvise_vma_behavior(struct vm_area_struct *vma,
>> >  		break;
>> >  	case MADV_COLLAPSE:
>> >  		return madvise_collapse(vma, prev, start, end);
>> > +	case MADV_GUARD_POISON:
>> > +		return madvise_guard_poison(vma, prev, start, end);
>> > +	case MADV_GUARD_UNPOISON:
>> > +		return madvise_guard_unpoison(vma, prev, start, end);
>> >  	}
>> >
>> >  	anon_name = anon_vma_name(vma);
>> > @@ -1197,6 +1363,8 @@ madvise_behavior_valid(int behavior)
>> >  	case MADV_DODUMP:
>> >  	case MADV_WIPEONFORK:
>> >  	case MADV_KEEPONFORK:
>> > +	case MADV_GUARD_POISON:
>> > +	case MADV_GUARD_UNPOISON:
>> >  #ifdef CONFIG_MEMORY_FAILURE
>> >  	case MADV_SOFT_OFFLINE:
>> >  	case MADV_HWPOISON:
>> > diff --git a/mm/mprotect.c b/mm/mprotect.c
>> > index 0c5d6d06107d..d0e3ebfadef8 100644
>> > --- a/mm/mprotect.c
>> > +++ b/mm/mprotect.c
>> > @@ -236,7 +236,8 @@ static long change_pte_range(struct mmu_gather *tlb,
>> >  		} else if (is_pte_marker_entry(entry)) {
>> >  			/*
>> >  			 * Ignore error swap entries unconditionally,
>> > -			 * because any access should sigbus anyway.
>> > +			 * because any access should sigbus/sigsegv
>> > +			 * anyway.
>> >  			 */
>> >  			if (is_poisoned_swp_entry(entry))
>> >  				continue;
>> > diff --git a/mm/mseal.c b/mm/mseal.c
>> > index ece977bd21e1..21bf5534bcf5 100644
>> > --- a/mm/mseal.c
>> > +++ b/mm/mseal.c
>> > @@ -30,6 +30,7 @@ static bool is_madv_discard(int behavior)
>> >  	case MADV_REMOVE:
>> >  	case MADV_DONTFORK:
>> >  	case MADV_WIPEONFORK:
>> > +	case MADV_GUARD_POISON:
>> >  		return true;
>> >  	}
>> >
>>
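
Coming back to the allocator debug mode use case mentioned above, the
intended calling pattern would look roughly like the sketch below. The
helper names are hypothetical and MADV_GUARD_* again just stands in for
whatever the patched uapi headers define:

#include <stdio.h>
#include <sys/mman.h>

#ifndef MADV_GUARD_POISON
#define MADV_GUARD_POISON	102	/* placeholder, use patched headers */
#define MADV_GUARD_UNPOISON	103	/* placeholder, use patched headers */
#endif

/* Freed chunk: a single madvise() zaps any existing PTEs and installs
 * guard markers, so a use-after-free faults immediately. */
static void debug_quarantine(void *ptr, size_t len)
{
	if (madvise(ptr, len, MADV_GUARD_POISON))
		perror("MADV_GUARD_POISON");
}

/* Chunk about to be reused: clear only the guard markers. */
static void debug_unquarantine(void *ptr, size_t len)
{
	if (madvise(ptr, len, MADV_GUARD_UNPOISON))
		perror("MADV_GUARD_UNPOISON");
}

The point being a single syscall per transition instead of MADV_DONTNEED
followed by a second call.
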