From mboxrd@z Thu Jan 1 00:00:00 1970
From: Vlastimil Babka <vbabka@suse.cz>
Date: Mon, 21 Oct 2024 22:11:29 +0200
Subject: Re: [PATCH v2 3/5] mm: madvise: implement lightweight guard page mechanism
To: Lorenzo Stoakes, Andrew Morton
Cc: Suren Baghdasaryan, Liam R. Howlett, Matthew Wilcox, Paul E. McKenney,
 Jann Horn, David Hildenbrand, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Muchun Song, Richard Henderson, Ivan Kokshaysky, Matt Turner,
 Thomas Bogendoerfer, James E. J. Bottomley, Helge Deller, Chris Zankel,
 Max Filippov, Arnd Bergmann, linux-alpha@vger.kernel.org,
 linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org,
 linux-arch@vger.kernel.org, Shuah Khan, Christian Brauner,
 linux-kselftest@vger.kernel.org, Sidhartha Kumar, Jeff Xu,
 Christoph Hellwig, linux-api@vger.kernel.org, John Hubbard
Content-Type: text/plain; charset=UTF-8

On 10/20/24 18:20, Lorenzo Stoakes wrote:
> Implement a new lightweight guard page feature, that is, regions of
> userland virtual memory that, when accessed, cause a fatal signal to
> arise.
> 
> Currently users must establish PROT_NONE ranges to achieve this.
> 
> However this is very costly memory-wise - we need a VMA for each and
> every one of these regions AND they become unmergeable with
> surrounding VMAs.
> 
> In addition repeated mmap() calls require repeated kernel context
> switches and contention of the mmap lock to install these ranges,
> potentially also having to unmap memory if installed over existing
> ranges.
> 
> The lightweight guard approach eliminates the VMA cost altogether -
> rather than establishing a PROT_NONE VMA, it operates at the level of
> page table entries - poisoning PTEs such that accesses to them cause a
> fault followed by a SIGSEGV signal being raised.
> 
> This is achieved through the PTE marker mechanism, which a previous
> commit in this series extended to permit this to be done, installed
> via the generic page walking logic, also extended by a prior commit
> for this purpose.
> 
> These poison ranges are established with MADV_GUARD_POISON, and if the
> range in which they are installed contains any existing mappings, they
> will be zapped, i.e. free the range and unmap memory (thus mimicking
> the behaviour of MADV_DONTNEED in this respect).
> 
> Any existing poison entries will be left untouched. There is no
> nesting of poisoned pages.
> 
> Poisoned ranges are NOT cleared by MADV_DONTNEED, as this would be
> rather unexpected behaviour, but are cleared on process teardown or
> unmapping of memory ranges.
> 
> Ranges can have the poison property removed by MADV_GUARD_UNPOISON -
> 'remedying' the poisoning. The ranges over which this is applied,
> should they contain non-poison entries, will be untouched; only poison
> entries will be cleared.
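
Just to check I'm reading the semantics above correctly - a rough sketch
(untested, function names are mine, error handling minimal, and the
MADV_GUARD_POISON/MADV_GUARD_UNPOISON values are of course only available
with the uapi additions from this patch) of how I'd picture an allocator
debug mode using the pair of operations:

#include <stdio.h>
#include <sys/mman.h>

/*
 * On free, turn the object's (page-aligned) range into guard pages so
 * any use-after-free faults; existing mappings in the range get zapped,
 * as described above.
 */
static void debug_quarantine(void *addr, size_t len)
{
	if (madvise(addr, len, MADV_GUARD_POISON))
		perror("MADV_GUARD_POISON");
}

/*
 * On reuse, drop the guards again; only poison entries are cleared,
 * anything else in the range is left alone.
 */
static void debug_unquarantine(void *addr, size_t len)
{
	if (madvise(addr, len, MADV_GUARD_UNPOISON))
		perror("MADV_GUARD_UNPOISON");
}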
> 
> We permit this operation on anonymous memory only, and only VMAs which
> are non-special, non-huge and not mlock()'d (if we permitted this we'd
> have to drop locked pages which would be rather counterintuitive).
> 
> Suggested-by: Vlastimil Babka
> Suggested-by: Jann Horn
> Suggested-by: David Hildenbrand
> Signed-off-by: Lorenzo Stoakes
> +static long madvise_guard_poison(struct vm_area_struct *vma,
> +				 struct vm_area_struct **prev,
> +				 unsigned long start, unsigned long end)
> +{
> +	long err;
> +
> +	*prev = vma;
> +	if (!is_valid_guard_vma(vma, /* allow_locked = */false))
> +		return -EINVAL;
> +
> +	/*
> +	 * If we install poison markers, then the range is no longer
> +	 * empty from a page table perspective and therefore it's
> +	 * appropriate to have an anon_vma.
> +	 *
> +	 * This ensures that on fork, we copy page tables correctly.
> +	 */
> +	err = anon_vma_prepare(vma);
> +	if (err)
> +		return err;
> +
> +	/*
> +	 * Optimistically try to install the guard poison pages first. If any
> +	 * non-guard pages are encountered, give up and zap the range before
> +	 * trying again.
> +	 */

Should the page walker become powerful enough to handle this in one go? :)
But sure, if it's too big a task to teach it to zap ptes with all the tlb
flushing etc. (I assume it's something page walkers don't do today), it
makes sense to do it this way.

Or we could require userspace to zap first (MADV_DONTNEED), but that would
unnecessarily mean extra syscalls for the use case of an allocator debug
mode that wants to turn freed memory into guards to catch use-after-free.
So this seems like a good compromise...

> +	while (true) {
> +		/* Returns < 0 on error, == 0 if success, > 0 if zap needed. */
> +		err = walk_page_range_mm(vma->vm_mm, start, end,
> +					 &guard_poison_walk_ops, NULL);
> +		if (err <= 0)
> +			return err;
> +
> +		/*
> +		 * OK some of the range have non-guard pages mapped, zap
> +		 * them. This leaves existing guard pages in place.
> +		 */
> +		zap_page_range_single(vma, start, end - start, NULL);

... however the potentially endless loop doesn't seem great. Could a
malicious program keep refaulting the range (ignoring any segfaults if it
loses a race) with one thread while failing to make progress here with
another thread? Is that ok because it would only punish itself?

I mean if we have to retry the guard page installation more than once, it
means the program *is* racing faults with installing guard ptes in the
same range, right? So it would be right to segfault it. But I guess when
we detect it here, we have no way to send the signal to the right thread
and it would be too late? So unless we can do the PTE zap+install marker
atomically, maybe there's no better way and this is acceptable as a
malicious program can harm only itself?

Maybe it would be just simpler to install the marker via zap_details
rather than the pagewalk?
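
To illustrate what I mean - a completely untested sketch, where both the
field name and the marker name are just my placeholders: a flag on struct
zap_details could ask the zap path to leave a guard marker behind each pte
it clears, so the zap and the marker installation would happen under the
same pte lock:

struct zap_details {
	...
	bool install_guard;	/* hypothetical: leave guard markers behind */
};

/* and in zap_pte_range(), roughly where a present pte has been zapped: */

		if (unlikely(details && details->install_guard))
			set_pte_at(mm, addr, pte,
				   make_pte_marker(PTE_MARKER_GUARD));

Then madvise_guard_poison() could fill in zap_details and call
zap_page_range_single() once, with no retry loop. Though I guess the zap
path only visits page tables that already exist and won't allocate them
for holes, which is presumably why the page walker was extended for this
in the first place - so take this as a question rather than a request.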
> +
> +		if (fatal_signal_pending(current))
> +			return -EINTR;
> +		cond_resched();
> +	}
> +}
> +
> +static int guard_unpoison_pte_entry(pte_t *pte, unsigned long addr,
> +				    unsigned long next, struct mm_walk *walk)
> +{
> +	pte_t ptent = ptep_get(pte);
> +
> +	if (is_guard_pte_marker(ptent)) {
> +		/* Simply clear the PTE marker. */
> +		pte_clear_not_present_full(walk->mm, addr, pte, false);
> +		update_mmu_cache(walk->vma, addr, pte);
> +	}
> +
> +	return 0;
> +}
> +
> +static const struct mm_walk_ops guard_unpoison_walk_ops = {
> +	.pte_entry = guard_unpoison_pte_entry,
> +	.walk_lock = PGWALK_RDLOCK,
> +};
> +
> +static long madvise_guard_unpoison(struct vm_area_struct *vma,
> +				   struct vm_area_struct **prev,
> +				   unsigned long start, unsigned long end)
> +{
> +	*prev = vma;
> +	/*
> +	 * We're ok with unpoisoning mlock()'d ranges, as this is a
> +	 * non-destructive action.
> +	 */
> +	if (!is_valid_guard_vma(vma, /* allow_locked = */true))
> +		return -EINVAL;
> +
> +	return walk_page_range(vma->vm_mm, start, end,
> +			       &guard_unpoison_walk_ops, NULL);
> +}
> +
>  /*
>   * Apply an madvise behavior to a region of a vma. madvise_update_vma
>   * will handle splitting a vm area into separate areas, each area with its own
> @@ -1098,6 +1260,10 @@ static int madvise_vma_behavior(struct vm_area_struct *vma,
>  		break;
>  	case MADV_COLLAPSE:
>  		return madvise_collapse(vma, prev, start, end);
> +	case MADV_GUARD_POISON:
> +		return madvise_guard_poison(vma, prev, start, end);
> +	case MADV_GUARD_UNPOISON:
> +		return madvise_guard_unpoison(vma, prev, start, end);
>  	}
>  
>  	anon_name = anon_vma_name(vma);
> @@ -1197,6 +1363,8 @@ madvise_behavior_valid(int behavior)
>  	case MADV_DODUMP:
>  	case MADV_WIPEONFORK:
>  	case MADV_KEEPONFORK:
> +	case MADV_GUARD_POISON:
> +	case MADV_GUARD_UNPOISON:
>  #ifdef CONFIG_MEMORY_FAILURE
>  	case MADV_SOFT_OFFLINE:
>  	case MADV_HWPOISON:
> diff --git a/mm/mprotect.c b/mm/mprotect.c
> index 0c5d6d06107d..d0e3ebfadef8 100644
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -236,7 +236,8 @@ static long change_pte_range(struct mmu_gather *tlb,
>  		} else if (is_pte_marker_entry(entry)) {
>  			/*
>  			 * Ignore error swap entries unconditionally,
> -			 * because any access should sigbus anyway.
> +			 * because any access should sigbus/sigsegv
> +			 * anyway.
>  			 */
>  			if (is_poisoned_swp_entry(entry))
>  				continue;
> diff --git a/mm/mseal.c b/mm/mseal.c
> index ece977bd21e1..21bf5534bcf5 100644
> --- a/mm/mseal.c
> +++ b/mm/mseal.c
> @@ -30,6 +30,7 @@ static bool is_madv_discard(int behavior)
>  	case MADV_REMOVE:
>  	case MADV_DONTFORK:
>  	case MADV_WIPEONFORK:
> +	case MADV_GUARD_POISON:
>  		return true;
>  	}
> 
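
On the mseal.c hunk: for completeness, this is roughly how the sealed case
could be exercised from userspace - again an untested sketch (mseal() has
no glibc wrapper so it goes through syscall() and needs recent uapi
headers, the MADV_GUARD_* values come from this series, and I'm assuming
the refusal surfaces as -EPERM like for the other discard operations):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	size_t len = 4 * (size_t)sysconf(_SC_PAGESIZE);
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED || syscall(__NR_mseal, p, len, 0))
		return 1;

	/* Poisoning discards existing mappings, so a sealed VMA should refuse it. */
	if (madvise(p, len, MADV_GUARD_POISON))
		printf("poison on sealed vma: %s\n", strerror(errno));
	else
		printf("poison on sealed vma unexpectedly succeeded\n");

	/* Unpoisoning is non-destructive and is not listed in is_madv_discard(). */
	if (madvise(p, len, MADV_GUARD_UNPOISON))
		printf("unpoison on sealed vma: %s\n", strerror(errno));

	return 0;
}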