From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 28 Oct 2024 22:45:26 +0200
To: "Lorenzo Stoakes", "Andrew Morton"
Cc: "Suren Baghdasaryan", "Liam R. Howlett", "Matthew Wilcox",
 "Vlastimil Babka", "Paul E. McKenney", "Jann Horn", "David Hildenbrand",
 "Muchun Song", "Richard Henderson", "Matt Turner", "Thomas Bogendoerfer",
 "James E. J. Bottomley", "Helge Deller", "Chris Zankel", "Max Filippov",
 "Arnd Bergmann", "Shuah Khan", "Christian Brauner", "Sidhartha Kumar",
 "Jeff Xu", "Christoph Hellwig", "John Hubbard"
Subject: Re: [PATCH v3 3/5] mm: madvise: implement lightweight guard page mechanism
From: "Jarkko Sakkinen" <jarkko@kernel.org>
X-Mailer: aerc 0.18.2
References: <415da1e6c5828d96db3af480d243a7f68ccabf6d.1729699916.git.lorenzo.stoakes@oracle.com>
In-Reply-To: <415da1e6c5828d96db3af480d243a7f68ccabf6d.1729699916.git.lorenzo.stoakes@oracle.com>

On Wed Oct 23, 2024 at 7:24 PM EEST, Lorenzo Stoakes wrote:
> Implement a new lightweight guard page feature, that is, regions of userland
> virtual memory that, when accessed, cause a fatal signal to arise.
>
> Currently users must establish PROT_NONE ranges to achieve this.

A bit off-topic, but another hack with PROT_NONE is to allocate naturally
aligned ranges (a sketch follows after the next quoted paragraph):

1. mmap() of 2*N size with PROT_NONE.
2. mmap(MAP_FIXED) of N size at an N-aligned address within that range.

>
> However this is very costly memory-wise - we need a VMA for each and every
> one of these regions AND they become unmergeable with surrounding VMAs.
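For the record, a quick sketch of that alignment trick (hypothetical
helper, not part of the patch; assumes n is a power of two and a
multiple of the page size):

#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

/*
 * Reserve 2*n bytes with PROT_NONE, then place the real n-byte mapping
 * at the first n-aligned address inside the reservation. The leftover
 * PROT_NONE tails are left in place for brevity; they could be
 * munmap()'d.
 */
static void *alloc_naturally_aligned(size_t n)
{
	char *resv = mmap(NULL, 2 * n, PROT_NONE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	uintptr_t aligned;

	if (resv == MAP_FAILED)
		return MAP_FAILED;

	aligned = ((uintptr_t)resv + (n - 1)) & ~((uintptr_t)n - 1);

	/* MAP_FIXED atomically replaces the PROT_NONE pages it covers. */
	return mmap((void *)aligned, n, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
}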
>
> In addition repeated mmap() calls require repeated kernel context switches
> and contention of the mmap lock to install these ranges, potentially also
> having to unmap memory if installed over existing ranges.
>
> The lightweight guard approach eliminates the VMA cost altogether - rather
> than establishing a PROT_NONE VMA, it operates at the level of page table
> entries - establishing PTE markers such that accesses to them cause a fault
> followed by a SIGSEGV signal being raised.
>
> This is achieved through the PTE marker mechanism, which we have already
> extended to provide PTE_MARKER_GUARD, which we installed via the generic
> page walking logic which we have extended for this purpose.
>
> These guard ranges are established with MADV_GUARD_INSTALL. If the range in
> which they are installed contains any existing mappings, they will be
> zapped, i.e. free the range and unmap memory (thus mimicking the behaviour
> of MADV_DONTNEED in this respect).
>
> Any existing guard entries will be left untouched. There is therefore no
> nesting of guarded pages.
>
> Guarded ranges are NOT cleared by MADV_DONTNEED nor MADV_FREE (in both
> instances the memory range may be reused, at which point a user would expect
> guards to still be in place), but they are cleared via MADV_GUARD_REMOVE,
> process teardown or unmapping of memory ranges.
>
> The guard property can be removed from ranges via MADV_GUARD_REMOVE. The
> ranges over which this is applied, should they contain non-guard entries,
> will be untouched, with only guard entries being cleared.
>
> We permit this operation on anonymous memory only, and only VMAs which are
> non-special, non-huge and not mlock()'d (if we permitted this we'd have to
> drop locked pages, which would be rather counterintuitive).
>
> Racing page faults can cause repeated attempts to install guard pages that
> are interrupted, resulting in a zap, and this process can end up being
> repeated. If this happens more often than would be expected in normal
> operation, we rescind locks and retry the whole thing, which avoids lock
> contention in this scenario.
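As a reader aid, a hypothetical usage sketch of the semantics described
above (the MADV_* values are copied from the patch's uapi headers; error
handling abbreviated, untested):

#include <sys/mman.h>
#include <unistd.h>

#ifndef MADV_GUARD_INSTALL
#define MADV_GUARD_INSTALL 102	/* fatal signal on access to range */
#define MADV_GUARD_REMOVE  103	/* unguard range */
#endif

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	/* Anonymous buffer with one page of red zone at each end. */
	char *buf = mmap(NULL, 3 * page, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;

	/* Install guards; any existing mappings in the range are zapped. */
	if (madvise(buf, page, MADV_GUARD_INSTALL) ||
	    madvise(buf + 2 * page, page, MADV_GUARD_INSTALL))
		return 1;

	buf[page] = 'x';	/* middle page: plain anonymous memory */
	/* buf[0] = 'x';	   would raise SIGSEGV via the guard marker */

	/* Guards survive MADV_DONTNEED/MADV_FREE, but not explicit removal. */
	if (madvise(buf, page, MADV_GUARD_REMOVE))
		return 1;
	buf[0] = 'x';		/* fine again */

	munmap(buf, 3 * page);
	return 0;
}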
>
> Suggested-by: Vlastimil Babka
> Suggested-by: Jann Horn
> Suggested-by: David Hildenbrand
> Signed-off-by: Lorenzo Stoakes
> ---
>  arch/alpha/include/uapi/asm/mman.h     |   3 +
>  arch/mips/include/uapi/asm/mman.h      |   3 +
>  arch/parisc/include/uapi/asm/mman.h    |   3 +
>  arch/xtensa/include/uapi/asm/mman.h    |   3 +
>  include/uapi/asm-generic/mman-common.h |   3 +
>  mm/internal.h                          |   6 +
>  mm/madvise.c                           | 225 +++++++++++++++++++++++++
>  mm/mseal.c                             |   1 +
>  8 files changed, 247 insertions(+)
>
> diff --git a/arch/alpha/include/uapi/asm/mman.h b/arch/alpha/include/uapi/asm/mman.h
> index 763929e814e9..1e700468a685 100644
> --- a/arch/alpha/include/uapi/asm/mman.h
> +++ b/arch/alpha/include/uapi/asm/mman.h
> @@ -78,6 +78,9 @@
>
>  #define MADV_COLLAPSE	25		/* Synchronous hugepage collapse */
>
> +#define MADV_GUARD_INSTALL 102		/* fatal signal on access to range */
> +#define MADV_GUARD_REMOVE 103		/* unguard range */
> +
>  /* compatibility flags */
>  #define MAP_FILE	0
>
> diff --git a/arch/mips/include/uapi/asm/mman.h b/arch/mips/include/uapi/asm/mman.h
> index 9c48d9a21aa0..b700dae28c48 100644
> --- a/arch/mips/include/uapi/asm/mman.h
> +++ b/arch/mips/include/uapi/asm/mman.h
> @@ -105,6 +105,9 @@
>
>  #define MADV_COLLAPSE	25		/* Synchronous hugepage collapse */
>
> +#define MADV_GUARD_INSTALL 102		/* fatal signal on access to range */
> +#define MADV_GUARD_REMOVE 103		/* unguard range */
> +
>  /* compatibility flags */
>  #define MAP_FILE	0
>
> diff --git a/arch/parisc/include/uapi/asm/mman.h b/arch/parisc/include/uapi/asm/mman.h
> index 68c44f99bc93..b6a709506987 100644
> --- a/arch/parisc/include/uapi/asm/mman.h
> +++ b/arch/parisc/include/uapi/asm/mman.h
> @@ -75,6 +75,9 @@
>  #define MADV_HWPOISON 100		/* poison a page for testing */
>  #define MADV_SOFT_OFFLINE 101	/* soft offline page for testing */
>
> +#define MADV_GUARD_INSTALL 102		/* fatal signal on access to range */
> +#define MADV_GUARD_REMOVE 103		/* unguard range */
> +
>  /* compatibility flags */
>  #define MAP_FILE	0
>
> diff --git a/arch/xtensa/include/uapi/asm/mman.h b/arch/xtensa/include/uapi/asm/mman.h
> index 1ff0c858544f..99d4ccee7f6e 100644
> --- a/arch/xtensa/include/uapi/asm/mman.h
> +++ b/arch/xtensa/include/uapi/asm/mman.h
> @@ -113,6 +113,9 @@
>
>  #define MADV_COLLAPSE	25		/* Synchronous hugepage collapse */
>
> +#define MADV_GUARD_INSTALL 102		/* fatal signal on access to range */
> +#define MADV_GUARD_REMOVE 103		/* unguard range */
> +
>  /* compatibility flags */
>  #define MAP_FILE	0
>
> diff --git a/include/uapi/asm-generic/mman-common.h b/include/uapi/asm-generic/mman-common.h
> index 6ce1f1ceb432..1ea2c4c33b86 100644
> --- a/include/uapi/asm-generic/mman-common.h
> +++ b/include/uapi/asm-generic/mman-common.h
> @@ -79,6 +79,9 @@
>
>  #define MADV_COLLAPSE	25		/* Synchronous hugepage collapse */
>
> +#define MADV_GUARD_INSTALL 102		/* fatal signal on access to range */
> +#define MADV_GUARD_REMOVE 103		/* unguard range */
> +
>  /* compatibility flags */
>  #define MAP_FILE	0
>
> diff --git a/mm/internal.h b/mm/internal.h
> index fb1fb0c984e4..fcf08b5e64dc 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -423,6 +423,12 @@ extern unsigned long highest_memmap_pfn;
>   */
>  #define MAX_RECLAIM_RETRIES 16
>
> +/*
> + * Maximum number of attempts we make to install guard pages before we give up
> + * and return -ERESTARTNOINTR to have userspace try again.
> + */
> +#define MAX_MADVISE_GUARD_RETRIES 3
> +
>  /*
>   * in mm/vmscan.c:
>   */
> diff --git a/mm/madvise.c b/mm/madvise.c
> index e871a72a6c32..48eba25e25fe 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -60,6 +60,8 @@ static int madvise_need_mmap_write(int behavior)
>  	case MADV_POPULATE_READ:
>  	case MADV_POPULATE_WRITE:
>  	case MADV_COLLAPSE:
> +	case MADV_GUARD_INSTALL:
> +	case MADV_GUARD_REMOVE:
>  		return 0;
>  	default:
>  		/* be safe, default to 1. list exceptions explicitly */
> @@ -1017,6 +1019,214 @@ static long madvise_remove(struct vm_area_struct *vma,
>  	return error;
>  }
>
> +static bool is_valid_guard_vma(struct vm_area_struct *vma, bool allow_locked)
> +{
> +	vm_flags_t disallowed = VM_SPECIAL | VM_HUGETLB;
> +
> +	/*
> +	 * A user could lock after setting a guard range but that's fine, as
> +	 * they'd not be able to fault in. The issue arises when we try to zap
> +	 * existing locked VMAs. We don't want to do that.
> +	 */
> +	if (!allow_locked)
> +		disallowed |= VM_LOCKED;
> +
> +	if (!vma_is_anonymous(vma))
> +		return false;
> +
> +	if ((vma->vm_flags & (VM_MAYWRITE | disallowed)) != VM_MAYWRITE)
> +		return false;
> +
> +	return true;
> +}
> +
> +static bool is_guard_pte_marker(pte_t ptent)
> +{
> +	return is_pte_marker(ptent) &&
> +	       is_guard_swp_entry(pte_to_swp_entry(ptent));
> +}
> +
> +static int guard_install_pud_entry(pud_t *pud, unsigned long addr,
> +				   unsigned long next, struct mm_walk *walk)
> +{
> +	pud_t pudval = pudp_get(pud);
> +
> +	/* If huge return >0 so we abort the operation + zap. */
> +	return pud_trans_huge(pudval) || pud_devmap(pudval);
> +}
> +
> +static int guard_install_pmd_entry(pmd_t *pmd, unsigned long addr,
> +				   unsigned long next, struct mm_walk *walk)
> +{
> +	pmd_t pmdval = pmdp_get(pmd);
> +
> +	/* If huge return >0 so we abort the operation + zap. */
> +	return pmd_trans_huge(pmdval) || pmd_devmap(pmdval);
> +}
> +
> +static int guard_install_pte_entry(pte_t *pte, unsigned long addr,
> +				   unsigned long next, struct mm_walk *walk)
> +{
> +	pte_t pteval = ptep_get(pte);
> +	unsigned long *nr_pages = (unsigned long *)walk->private;
> +
> +	/* If there is already a guard page marker, we have nothing to do. */
> +	if (is_guard_pte_marker(pteval)) {
> +		(*nr_pages)++;
> +
> +		return 0;
> +	}
> +
> +	/* If populated return >0 so we abort the operation + zap. */
> +	return 1;
> +}
> +
> +static int guard_install_set_pte(unsigned long addr, unsigned long next,
> +				 pte_t *ptep, struct mm_walk *walk)
> +{
> +	unsigned long *nr_pages = (unsigned long *)walk->private;
> +
> +	/* Simply install a PTE marker, this causes segfault on access. */
> +	*ptep = make_pte_marker(PTE_MARKER_GUARD);
> +	(*nr_pages)++;
> +
> +	return 0;
> +}
> +
> +static const struct mm_walk_ops guard_install_walk_ops = {
> +	.pud_entry		= guard_install_pud_entry,
> +	.pmd_entry		= guard_install_pmd_entry,
> +	.pte_entry		= guard_install_pte_entry,
> +	.install_pte		= guard_install_set_pte,
> +	.walk_lock		= PGWALK_RDLOCK,
> +};
> +
> +static long madvise_guard_install(struct vm_area_struct *vma,
> +				  struct vm_area_struct **prev,
> +				  unsigned long start, unsigned long end)
> +{
> +	long err;
> +	int i;
> +
> +	*prev = vma;
> +	if (!is_valid_guard_vma(vma, /* allow_locked = */false))
> +		return -EINVAL;
> +
> +	/*
> +	 * If we install guard markers, then the range is no longer
> +	 * empty from a page table perspective and therefore it's
> +	 * appropriate to have an anon_vma.
> +	 *
> +	 * This ensures that on fork, we copy page tables correctly.
> +	 */
> +	err = anon_vma_prepare(vma);
> +	if (err)
> +		return err;
> +
> +	/*
> +	 * Optimistically try to install the guard marker pages first. If any
> +	 * non-guard pages are encountered, give up and zap the range before
> +	 * trying again.
> +	 *
> +	 * We try a few times before giving up and releasing back to userland to
> +	 * loop around, releasing locks in the process to avoid contention. This
> +	 * would only happen if there were a great many racing page faults.
> +	 *
> +	 * In most cases we should simply install the guard markers immediately
> +	 * with no zap or looping.
> +	 */
> +	for (i = 0; i < MAX_MADVISE_GUARD_RETRIES; i++) {
> +		unsigned long nr_pages = 0;
> +
> +		/* Returns < 0 on error, == 0 if success, > 0 if zap needed. */
> +		err = walk_page_range_mm(vma->vm_mm, start, end,
> +					 &guard_install_walk_ops, &nr_pages);
> +		if (err < 0)
> +			return err;
> +
> +		if (err == 0) {
> +			unsigned long nr_expected_pages = PHYS_PFN(end - start);
> +
> +			VM_WARN_ON(nr_pages != nr_expected_pages);
> +			return 0;
> +		}
> +
> +		/*
> +		 * OK, some of the range has non-guard pages mapped, zap
> +		 * them. This leaves existing guard pages in place.
> +		 */
> +		zap_page_range_single(vma, start, end - start, NULL);
> +	}
> +
> +	/*
> +	 * We were unable to install the guard pages due to being raced by page
> +	 * faults. This should not happen ordinarily. We return to userspace and
> +	 * immediately retry, relieving lock contention.
> +	 */
> +	return -ERESTARTNOINTR;
> +}
> +
> +static int guard_remove_pud_entry(pud_t *pud, unsigned long addr,
> +				  unsigned long next, struct mm_walk *walk)
> +{
> +	pud_t pudval = pudp_get(pud);
> +
> +	/* If huge, cannot have guard pages present, so no-op - skip. */
> +	if (pud_trans_huge(pudval) || pud_devmap(pudval))
> +		walk->action = ACTION_CONTINUE;
> +
> +	return 0;
> +}
> +
> +static int guard_remove_pmd_entry(pmd_t *pmd, unsigned long addr,
> +				  unsigned long next, struct mm_walk *walk)
> +{
> +	pmd_t pmdval = pmdp_get(pmd);
> +
> +	/* If huge, cannot have guard pages present, so no-op - skip. */
> +	if (pmd_trans_huge(pmdval) || pmd_devmap(pmdval))
> +		walk->action = ACTION_CONTINUE;
> +
> +	return 0;
> +}
> +
> +static int guard_remove_pte_entry(pte_t *pte, unsigned long addr,
> +				  unsigned long next, struct mm_walk *walk)
> +{
> +	pte_t ptent = ptep_get(pte);
> +
> +	if (is_guard_pte_marker(ptent)) {
> +		/* Simply clear the PTE marker. */
> +		pte_clear_not_present_full(walk->mm, addr, pte, false);
> +		update_mmu_cache(walk->vma, addr, pte);
> +	}
> +
> +	return 0;
> +}
> +
> +static const struct mm_walk_ops guard_remove_walk_ops = {
> +	.pud_entry		= guard_remove_pud_entry,
> +	.pmd_entry		= guard_remove_pmd_entry,
> +	.pte_entry		= guard_remove_pte_entry,
> +	.walk_lock		= PGWALK_RDLOCK,
> +};
> +
> +static long madvise_guard_remove(struct vm_area_struct *vma,
> +				 struct vm_area_struct **prev,
> +				 unsigned long start, unsigned long end)
> +{
> +	*prev = vma;
> +	/*
> +	 * We're ok with removing guards in mlock()'d ranges, as this is a
> +	 * non-destructive action.
> +	 */
> +	if (!is_valid_guard_vma(vma, /* allow_locked = */true))
> +		return -EINVAL;
> +
> +	return walk_page_range(vma->vm_mm, start, end,
> +			       &guard_remove_walk_ops, NULL);
> +}
> +
>  /*
>   * Apply an madvise behavior to a region of a vma.
>   * madvise_update_vma will handle splitting a vm area into separate
>   * areas, each area with its own behavior.
> @@ -1098,6 +1308,10 @@ static int madvise_vma_behavior(struct vm_area_struct *vma,
>  		break;
>  	case MADV_COLLAPSE:
>  		return madvise_collapse(vma, prev, start, end);
> +	case MADV_GUARD_INSTALL:
> +		return madvise_guard_install(vma, prev, start, end);
> +	case MADV_GUARD_REMOVE:
> +		return madvise_guard_remove(vma, prev, start, end);
>  	}
>
>  	anon_name = anon_vma_name(vma);
> @@ -1197,6 +1411,8 @@ madvise_behavior_valid(int behavior)
>  	case MADV_DODUMP:
>  	case MADV_WIPEONFORK:
>  	case MADV_KEEPONFORK:
> +	case MADV_GUARD_INSTALL:
> +	case MADV_GUARD_REMOVE:
>  #ifdef CONFIG_MEMORY_FAILURE
>  	case MADV_SOFT_OFFLINE:
>  	case MADV_HWPOISON:
> @@ -1490,6 +1706,15 @@ static ssize_t vector_madvise(struct mm_struct *mm, struct iov_iter *iter,
>  	while (iov_iter_count(iter)) {
>  		ret = do_madvise(mm, (unsigned long)iter_iov_addr(iter),
>  				 iter_iov_len(iter), behavior);
> +		/*
> +		 * We cannot return this, as we instead return the number of
> +		 * successful operations. Since all this would achieve in a
> +		 * single madvise() invocation is to re-enter the syscall, and
> +		 * we have already rescinded locks, it should be no problem to
> +		 * simply try again.
> +		 */
> +		if (ret == -ERESTARTNOINTR)
> +			continue;
>  		if (ret < 0)
>  			break;
>  		iov_iter_advance(iter, iter_iov_len(iter));
> diff --git a/mm/mseal.c b/mm/mseal.c
> index ece977bd21e1..81d6e980e8a9 100644
> --- a/mm/mseal.c
> +++ b/mm/mseal.c
> @@ -30,6 +30,7 @@ static bool is_madv_discard(int behavior)
>  	case MADV_REMOVE:
>  	case MADV_DONTFORK:
>  	case MADV_WIPEONFORK:
> +	case MADV_GUARD_INSTALL:
>  		return true;
>  	}
>

Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Tested-by: Jarkko Sakkinen <jarkko@kernel.org>

BR, Jarkko