From: Mike Galbraith <efault@gmx.de>
To: Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Vitaly Wool <vitaly.wool@konsulko.com>
Cc: Oleksandr Natalenko <oleksandr@natalenko.name>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
Andrew Morton <akpm@linux-foundation.org>,
Steven Rostedt <rostedt@goodmis.org>,
Thomas Gleixner <tglx@linutronix.de>,
linux-rt-users@vger.kernel.org
Subject: Re: scheduling while atomic in z3fold
Date: Thu, 03 Dec 2020 09:18:21 +0100
Message-ID: <64ab382309c41ca5c7a601fc3efbb6d2a6e68602.camel@gmx.de>
In-Reply-To: <abe48cb9ab522659a05d7e41ce07317da2a85163.camel@gmx.de>
On Thu, 2020-12-03 at 03:16 +0100, Mike Galbraith wrote:
> On Wed, 2020-12-02 at 23:08 +0100, Sebastian Andrzej Siewior wrote:
> Looks like...
>
> d8f117abb380 z3fold: fix use-after-free when freeing handles
>
> ...wasn't completely effective...
The top two hunks seem to have rendered the thing RT tolerant.
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 18feaa0bc537..851d9f4f1644 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -537,7 +537,7 @@ static void __release_z3fold_page(struct z3fold_header *zhdr, bool locked)
 	spin_unlock(&pool->lock);
 
 	/* If there are no foreign handles, free the handles array */
-	read_lock(&zhdr->slots->lock);
+	write_lock(&zhdr->slots->lock);
 	for (i = 0; i <= BUDDY_MASK; i++) {
 		if (zhdr->slots->slot[i]) {
 			is_free = false;
@@ -546,7 +546,7 @@ static void __release_z3fold_page(struct z3fold_header *zhdr, bool locked)
 	}
 	if (!is_free)
 		set_bit(HANDLES_ORPHANED, &zhdr->slots->pool);
-	read_unlock(&zhdr->slots->lock);
+	write_unlock(&zhdr->slots->lock);
 
 	if (is_free)
 		kmem_cache_free(pool->c_handle, zhdr->slots);
@@ -642,14 +642,16 @@ static inline void add_to_unbuddied(struct z3fold_pool *pool,
 {
 	if (zhdr->first_chunks == 0 || zhdr->last_chunks == 0 ||
 			zhdr->middle_chunks == 0) {
-		struct list_head *unbuddied = get_cpu_ptr(pool->unbuddied);
-
+		struct list_head *unbuddied;
 		int freechunks = num_free_chunks(zhdr);
+
+		migrate_disable();
+		unbuddied = this_cpu_ptr(pool->unbuddied);
 		spin_lock(&pool->lock);
 		list_add(&zhdr->buddy, &unbuddied[freechunks]);
 		spin_unlock(&pool->lock);
 		zhdr->cpu = smp_processor_id();
-		put_cpu_ptr(pool->unbuddied);
+		migrate_enable();
 	}
 }
 
@@ -886,8 +888,9 @@ static inline struct z3fold_header *__z3fold_alloc(struct z3fold_pool *pool,
 	int chunks = size_to_chunks(size), i;
 
 lookup:
+	migrate_disable();
 	/* First, try to find an unbuddied z3fold page. */
-	unbuddied = get_cpu_ptr(pool->unbuddied);
+	unbuddied = this_cpu_ptr(pool->unbuddied);
 	for_each_unbuddied_list(i, chunks) {
 		struct list_head *l = &unbuddied[i];
 
@@ -905,7 +908,7 @@ static inline struct z3fold_header *__z3fold_alloc(struct z3fold_pool *pool,
 		    !z3fold_page_trylock(zhdr)) {
 			spin_unlock(&pool->lock);
 			zhdr = NULL;
-			put_cpu_ptr(pool->unbuddied);
+			migrate_enable();
 			if (can_sleep)
 				cond_resched();
 			goto lookup;
@@ -919,7 +922,7 @@ static inline struct z3fold_header *__z3fold_alloc(struct z3fold_pool *pool,
 		    test_bit(PAGE_CLAIMED, &page->private)) {
 			z3fold_page_unlock(zhdr);
 			zhdr = NULL;
-			put_cpu_ptr(pool->unbuddied);
+			migrate_enable();
 			if (can_sleep)
 				cond_resched();
 			goto lookup;
@@ -934,7 +937,7 @@ static inline struct z3fold_header *__z3fold_alloc(struct z3fold_pool *pool,
 		kref_get(&zhdr->refcount);
 		break;
 	}
-	put_cpu_ptr(pool->unbuddied);
+	migrate_enable();
 
 	if (!zhdr) {
 		int cpu;
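
The bulk of the patch swaps the preemption-disabling per-CPU accessors for
the migration-disabling ones: get_cpu_ptr() disables preemption, so taking
pool->lock (a sleeping lock on PREEMPT_RT) inside that window is what
produces the "scheduling while atomic" splat, whereas migrate_disable()
only pins the task to its current CPU and leaves it preemptible. A minimal
sketch of that pattern, with demo_pool and demo_add() made up for
illustration rather than taken from z3fold:

#include <linux/list.h>
#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/spinlock.h>

/* Hypothetical pool with per-CPU lists, for illustration only. */
struct demo_pool {
	struct list_head __percpu *lists;
	spinlock_t lock;
};

static void demo_add(struct demo_pool *pool, struct list_head *item)
{
	struct list_head *list;

	/*
	 * Old form:
	 *
	 *	list = get_cpu_ptr(pool->lists);   <- disables preemption
	 *	spin_lock(&pool->lock);            <- sleeps on PREEMPT_RT: splat
	 *	...
	 *	put_cpu_ptr(pool->lists);
	 *
	 * RT-tolerant form: only migration is disabled, so the per-CPU
	 * pointer stays stable while the task remains preemptible and
	 * may block on the rtmutex-backed spinlock.
	 */
	migrate_disable();
	list = this_cpu_ptr(pool->lists);
	spin_lock(&pool->lock);
	list_add(item, list);
	spin_unlock(&pool->lock);
	migrate_enable();
}

The real z3fold paths additionally drop the lock, cond_resched() and retry,
as in the hunks above; the sketch only shows the accessor swap.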