From: Vlastimil Babka <vbabka@suse.cz>
To: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Suren Baghdasaryan <surenb@google.com>,
	"Liam R . Howlett" <Liam.Howlett@oracle.com>,
	Matthew Wilcox <willy@infradead.org>,
	"Paul E . McKenney" <paulmck@kernel.org>,
	Jann Horn <jannh@google.com>,
	David Hildenbrand <david@redhat.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Muchun Song <muchun.song@linux.dev>,
	Richard Henderson <richard.henderson@linaro.org>,
	Ivan Kokshaysky <ink@jurassic.park.msu.ru>,
	Matt Turner <mattst88@gmail.com>,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	"James E . J . Bottomley" <James.Bottomley@hansenpartnership.com>,
	Helge Deller <deller@gmx.de>, Chris Zankel <chris@zankel.net>,
	Max Filippov <jcmvbkbc@gmail.com>, Arnd Bergmann <arnd@arndb.de>,
	linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org,
	linux-parisc@vger.kernel.org, linux-arch@vger.kernel.org,
	Shuah Khan <shuah@kernel.org>,
	Christian Brauner <brauner@kernel.org>,
	linux-kselftest@vger.kernel.org,
	Sidhartha Kumar <sidhartha.kumar@oracle.com>,
	Jeff Xu <jeffxu@chromium.org>,
	Christoph Hellwig <hch@infradead.org>,
	linux-api@vger.kernel.org, John Hubbard <jhubbard@nvidia.com>
Subject: Re: [PATCH v2 3/5] mm: madvise: implement lightweight guard page mechanism
Date: Mon, 21 Oct 2024 22:45:58 +0200	[thread overview]
Message-ID: <f2448c59-0456-49e8-9676-609629227808@suse.cz> (raw)
In-Reply-To: <393b0932-1c52-4d59-9466-e5e6184a7daf@lucifer.local>

On 10/21/24 22:27, Lorenzo Stoakes wrote:
> On Mon, Oct 21, 2024 at 10:11:29PM +0200, Vlastimil Babka wrote:
>> On 10/20/24 18:20, Lorenzo Stoakes wrote:
>> > Implement a new lightweight guard page feature - that is, regions of userland
>> > virtual memory that, when accessed, cause a fatal signal to arise.
>> >
>> > Currently users must establish PROT_NONE ranges to achieve this.
>> >
>> > However, this is very costly memory-wise - we need a VMA for each and every
>> > one of these regions AND they become unmergeable with surrounding VMAs.
>> >
>> > In addition, repeated mmap() calls require repeated kernel context switches
>> > and contention of the mmap lock to install these ranges, potentially also
>> > having to unmap memory if installed over existing ranges.
>> >
>> > The lightweight guard approach eliminates the VMA cost altogether - rather
>> > than establishing a PROT_NONE VMA, it operates at the level of page table
>> > entries - poisoning PTEs such that accesses to them cause a fault followed
>> > by a SIGSEGV signal being raised.
>> >
>> > This is achieved through the PTE marker mechanism, which a previous commit
>> > in this series extended to permit this, with the markers installed via the
>> > generic page walking logic, itself extended by a prior commit for this
>> > purpose.
>> >
>> > These poison ranges are established with MADV_GUARD_POISON, and if the
>> > range in which they are installed contains any existing mappings, those will
>> > be zapped, i.e. the range freed and the memory unmapped (thus mimicking the
>> > behaviour of MADV_DONTNEED in this respect).
>> >
>> > Any existing poison entries will be left untouched. There is no nesting of
>> > poisoned pages.
>> >
>> > Poisoned ranges are NOT cleared by MADV_DONTNEED, as this would be rather
>> > unexpected behaviour, but are cleared on process teardown or unmapping of
>> > memory ranges.
>> >
>> > Ranges can have the poison property removed by MADV_GUARD_UNPOISON -
>> > 'remedying' the poisoning. The ranges over which this is applied, should
>> > they contain non-poison entries, will be untouched; only poison entries
>> > will be cleared.
>> >
>> > We permit this operation on anonymous memory only, and only on VMAs which are
>> > non-special, non-huge and not mlock()'d (if we permitted mlock()'d VMAs we'd
>> > have to drop locked pages, which would be rather counterintuitive).
>> >
>> > Suggested-by: Vlastimil Babka <vbabka@suse.cz>
>> > Suggested-by: Jann Horn <jannh@google.com>
>> > Suggested-by: David Hildenbrand <david@redhat.com>
>> > Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
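[For illustration, a minimal userspace sketch of the interface described above,
assuming the MADV_GUARD_POISON/MADV_GUARD_UNPOISON advice values proposed by
this series: it places a guard page at the end of an anonymous buffer so an
overrun faults with SIGSEGV rather than silently corrupting adjacent memory.

#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>

/*
 * Sketch only: allocate npages of usable memory followed by one guard
 * page. No extra VMA is created for the guard - only PTE markers are
 * installed - which is the point of the mechanism described above.
 */
static void *alloc_with_trailing_guard(size_t npages, size_t pagesz)
{
	size_t len = (npages + 1) * pagesz;
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return NULL;

	/* Any access to the final page now raises SIGSEGV. */
	if (madvise(buf + npages * pagesz, pagesz, MADV_GUARD_POISON)) {
		munmap(buf, len);
		return NULL;
	}
	return buf;
}

The guard can later be removed with madvise(..., MADV_GUARD_UNPOISON), which
clears only the poison markers and leaves other entries untouched.]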
>>
>> <snip>
>>
>> > +static long madvise_guard_poison(struct vm_area_struct *vma,
>> > +				 struct vm_area_struct **prev,
>> > +				 unsigned long start, unsigned long end)
>> > +{
>> > +	long err;
>> > +
>> > +	*prev = vma;
>> > +	if (!is_valid_guard_vma(vma, /* allow_locked = */false))
>> > +		return -EINVAL;
>> > +
>> > +	/*
>> > +	 * If we install poison markers, then the range is no longer
>> > +	 * empty from a page table perspective and therefore it's
>> > +	 * appropriate to have an anon_vma.
>> > +	 *
>> > +	 * This ensures that on fork, we copy page tables correctly.
>> > +	 */
>> > +	err = anon_vma_prepare(vma);
>> > +	if (err)
>> > +		return err;
>> > +
>> > +	/*
>> > +	 * Optimistically try to install the guard poison pages first. If any
>> > +	 * non-guard pages are encountered, give up and zap the range before
>> > +	 * trying again.
>> > +	 */
>>
>> Should the page walker become powerful enough to handle this in one go? :)
> 
> I can tell you've not read previous threads...

Whoops, you're right, I did read v1 but forgot about the RFC.

But we can assume people who only see the code after it's merged will not have
read it either, so since a potentially endless loop could look suspicious,
expanding the comment to explain why it's fine wouldn't hurt?

> I've addressed this in discussion with Jann - we'd have to do a full fat
> huge comprehensive thing to do an in-place replace.
> 
> It'd either have to be fully duplicative of the multiple copies of the very
> lengthy code needed to do this sort of thing right (some in mm/madvise.c
> itself), or I'd have to go off and do a totally new pre-requisite series
> centralising that in a way that people probably wouldn't accept... I'm not
> sure the benefits outweigh the costs.
> 
>> But sure, if it's too big a task to teach it to zap ptes with all the tlb
>> flushing etc (I assume it's something page walkers don't do today), it makes
>> sense to do it this way.
>> Or we could require userspace to zap first (MADV_DONTNEED), but that would
>> unnecessarily mean extra syscalls for the use case of an allocator debug
>> mode that wants to turn freed memory into guards to catch use-after-free.
>> So this seems like a good compromise...
> 
> This is optimistic, as the comment says - you very often won't need to do
> this, so we do a little extra work in the case where you need to zap,
> vs. the more likely case where you don't.
> 
> In the face of racing faults, which we can't reasonably prevent without
> holding the mmap write lock _and_ the VMA lock, which is an egregious
> requirement, this wouldn't really save us anything anyway.

OK.
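[To make the allocator debug mode use case mentioned above concrete, here is a
rough, hypothetical sketch of a free hook under this approach: a single
madvise() call both zaps any existing mappings in the range and installs the
guard markers, with no separate MADV_DONTNEED required. Function names are
illustrative, and the chunk is assumed to be page-aligned and a multiple of
the page size.

#include <stddef.h>
#include <stdio.h>
#include <sys/mman.h>

/*
 * Hypothetical debug-mode free hook: turn a freed, page-aligned chunk
 * into a guard region so any use-after-free faults with SIGSEGV.
 * One call zaps existing mappings and installs the guard markers.
 */
static void debug_guard_freed(void *chunk, size_t size)
{
	if (madvise(chunk, size, MADV_GUARD_POISON))
		perror("madvise(MADV_GUARD_POISON)");
}

/* Before reusing the chunk, clear the guard markers again. */
static void debug_unguard_reused(void *chunk, size_t size)
{
	if (madvise(chunk, size, MADV_GUARD_UNPOISON))
		perror("madvise(MADV_GUARD_UNPOISON)");
}

The alternative discussed above - requiring an explicit MADV_DONTNEED before
installing guards - would turn each call on the free path into two syscalls.]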

>>
>> > +	while (true) {
>> > +		/* Returns < 0 on error, == 0 if success, > 0 if zap needed. */
>> > +		err = walk_page_range_mm(vma->vm_mm, start, end,
>> > +					 &guard_poison_walk_ops, NULL);
>> > +		if (err <= 0)
>> > +			return err;
>> > +
>> > +		/*
>> > +		 * OK, some of the range has non-guard pages mapped; zap
>> > +		 * them. This leaves existing guard pages in place.
>> > +		 */
>> > +		zap_page_range_single(vma, start, end - start, NULL);
>>
>> ... however the potentially endless loop doesn't seem great. Could a
>> malicious program keep refaulting the range (ignoring any segfaults if it
>> loses a race) with one thread while failing to make progress here with
>> another thread? Is that ok because it would only punish itself?
> 
> Sigh. Again, I don't think you've read the previous series, have you? Or
> even the changelog... I added this because Jann asked for it. Originally we'd
> return -EAGAIN if we got raced. See the discussion over in v1 for details.
> 
> I did it that way specifically to avoid such things, but Jann didn't appear
> to think it was a problem.

If Jann is fine with this then it must be secure enough.

>>
>> I mean if we have to retry the guard page installation more than once, it
>> means the program *is* racing faults with installing guard ptes in the same
>> range, right? So it would be right to segfault it. But I guess when we
>> detect it here, we have no way to send the signal to the right thread and it
>> would be too late? So unless we can do the PTE zap+install marker
>> atomically, maybe there's no better way and this is acceptable as a
>> malicious program can harm only itself?
> 
> Yup, you'd only be hurting yourself. I went over this with Jann, who didn't
> appear to have a problem with this approach from a security perspective - in
> fact he explicitly asked me to do this :)
> 
>>
>> Maybe it would just be simpler to install the marker via zap_details rather
>> than the pagewalk?
> 
> Ah the inevitable 'please completely rework how you do everything' comment
> I was expecting at some point :)

Job security :)

j/k

> Obviously I've considered this (and a number of other approaches); it would
> fundamentally change what zap is - right now, if it can't traverse a page
> table level, that's job done (its job is to remove PTEs, not create them).
> 
> We'd instead have to completely rework the logic to be able to _install_
> page tables and then carefully check we don't break anything and only do it
> in the specific cases we need.
> 
> Or we could add a mode that says 'replace with a poison marker' rather than
> zap, but that has potential TLB concerns, splits it across two operations
> (installation and zapping), and then could you really be sure that there
> isn't a really really badly timed race that would mean you'd have to loop
> again?
> 
> Right now it's simple, elegant, small and we can only make ourselves
> wait. I don't think this is a huge problem.
> 
> I think I'll need an actual security/DoS-based justification to change this.
> 
>>
>> > +
>> > +		if (fatal_signal_pending(current))
>> > +			return -EINTR;
>> > +		cond_resched();
>> > +	}
>> > +}
>> > +
>> > +static int guard_unpoison_pte_entry(pte_t *pte, unsigned long addr,
>> > +				    unsigned long next, struct mm_walk *walk)
>> > +{
>> > +	pte_t ptent = ptep_get(pte);
>> > +
>> > +	if (is_guard_pte_marker(ptent)) {
>> > +		/* Simply clear the PTE marker. */
>> > +		pte_clear_not_present_full(walk->mm, addr, pte, false);
>> > +		update_mmu_cache(walk->vma, addr, pte);
>> > +	}
>> > +
>> > +	return 0;
>> > +}
>> > +
>> > +static const struct mm_walk_ops guard_unpoison_walk_ops = {
>> > +	.pte_entry		= guard_unpoison_pte_entry,
>> > +	.walk_lock		= PGWALK_RDLOCK,
>> > +};
>> > +
>> > +static long madvise_guard_unpoison(struct vm_area_struct *vma,
>> > +				   struct vm_area_struct **prev,
>> > +				   unsigned long start, unsigned long end)
>> > +{
>> > +	*prev = vma;
>> > +	/*
>> > +	 * We're ok with unpoisoning mlock()'d ranges, as this is a
>> > +	 * non-destructive action.
>> > +	 */
>> > +	if (!is_valid_guard_vma(vma, /* allow_locked = */true))
>> > +		return -EINVAL;
>> > +
>> > +	return walk_page_range(vma->vm_mm, start, end,
>> > +			       &guard_unpoison_walk_ops, NULL);
>> > +}
>> > +
>> >  /*
>> >   * Apply an madvise behavior to a region of a vma.  madvise_update_vma
>> >   * will handle splitting a vm area into separate areas, each area with its own
>> > @@ -1098,6 +1260,10 @@ static int madvise_vma_behavior(struct vm_area_struct *vma,
>> >  		break;
>> >  	case MADV_COLLAPSE:
>> >  		return madvise_collapse(vma, prev, start, end);
>> > +	case MADV_GUARD_POISON:
>> > +		return madvise_guard_poison(vma, prev, start, end);
>> > +	case MADV_GUARD_UNPOISON:
>> > +		return madvise_guard_unpoison(vma, prev, start, end);
>> >  	}
>> >
>> >  	anon_name = anon_vma_name(vma);
>> > @@ -1197,6 +1363,8 @@ madvise_behavior_valid(int behavior)
>> >  	case MADV_DODUMP:
>> >  	case MADV_WIPEONFORK:
>> >  	case MADV_KEEPONFORK:
>> > +	case MADV_GUARD_POISON:
>> > +	case MADV_GUARD_UNPOISON:
>> >  #ifdef CONFIG_MEMORY_FAILURE
>> >  	case MADV_SOFT_OFFLINE:
>> >  	case MADV_HWPOISON:
>> > diff --git a/mm/mprotect.c b/mm/mprotect.c
>> > index 0c5d6d06107d..d0e3ebfadef8 100644
>> > --- a/mm/mprotect.c
>> > +++ b/mm/mprotect.c
>> > @@ -236,7 +236,8 @@ static long change_pte_range(struct mmu_gather *tlb,
>> >  			} else if (is_pte_marker_entry(entry)) {
>> >  				/*
>> >  				 * Ignore error swap entries unconditionally,
>> > -				 * because any access should sigbus anyway.
>> > +				 * because any access should sigbus/sigsegv
>> > +				 * anyway.
>> >  				 */
>> >  				if (is_poisoned_swp_entry(entry))
>> >  					continue;
>> > diff --git a/mm/mseal.c b/mm/mseal.c
>> > index ece977bd21e1..21bf5534bcf5 100644
>> > --- a/mm/mseal.c
>> > +++ b/mm/mseal.c
>> > @@ -30,6 +30,7 @@ static bool is_madv_discard(int behavior)
>> >  	case MADV_REMOVE:
>> >  	case MADV_DONTFORK:
>> >  	case MADV_WIPEONFORK:
>> > +	case MADV_GUARD_POISON:
>> >  		return true;
>> >  	}
>> >
>>



