Message-ID: <64ab382309c41ca5c7a601fc3efbb6d2a6e68602.camel@gmx.de>
Subject: Re: scheduling while atomic in z3fold
From: Mike Galbraith
To: Sebastian Andrzej Siewior, Vitaly Wool
Cc: Oleksandr Natalenko, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 Andrew Morton, Steven Rostedt, Thomas Gleixner, linux-rt-users@vger.kernel.org
Date: Thu, 03 Dec 2020 09:18:21 +0100
References: <20201129112922.db53kmtpu76xxukj@spock.localdomain>
 <90c4857c53b657147bfb71a281ece9839b0373c2.camel@gmx.de>
 <20201130132014.mlvxeyiub3fpwyw7@linutronix.de>
 <856b5cc2a3d4eb673743b52956bf1e60dcdf87a1.camel@gmx.de>
 <20201130145229.mhbkrfuvyctniaxi@linutronix.de>
 <05121515e73891ceb9e5caf64b6111fc8ff43fab.camel@gmx.de>
 <20201130160327.ov32m4rapk4h432a@linutronix.de>
 <20201202220826.5chy56mbgvrwmg3d@linutronix.de>

On Thu, 2020-12-03 at 03:16 +0100, Mike Galbraith wrote:
> On Wed, 2020-12-02 at 23:08 +0100, Sebastian Andrzej Siewior wrote:
> Looks like...
>
> d8f117abb380 z3fold: fix use-after-free when freeing handles
>
> ...wasn't completely effective...

The top two hunks seem to have rendered the thing RT tolerant.

diff --git a/mm/z3fold.c b/mm/z3fold.c
index 18feaa0bc537..851d9f4f1644 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -537,7 +537,7 @@ static void __release_z3fold_page(struct z3fold_header *zhdr, bool locked)
 	spin_unlock(&pool->lock);
 
 	/* If there are no foreign handles, free the handles array */
-	read_lock(&zhdr->slots->lock);
+	write_lock(&zhdr->slots->lock);
 	for (i = 0; i <= BUDDY_MASK; i++) {
 		if (zhdr->slots->slot[i]) {
 			is_free = false;
@@ -546,7 +546,7 @@ static void __release_z3fold_page(struct z3fold_header *zhdr, bool locked)
 	}
 	if (!is_free)
 		set_bit(HANDLES_ORPHANED, &zhdr->slots->pool);
-	read_unlock(&zhdr->slots->lock);
+	write_unlock(&zhdr->slots->lock);
 
 	if (is_free)
 		kmem_cache_free(pool->c_handle, zhdr->slots);
@@ -642,14 +642,16 @@ static inline void add_to_unbuddied(struct z3fold_pool *pool,
 				struct z3fold_header *zhdr)
 {
 	if (zhdr->first_chunks == 0 || zhdr->last_chunks == 0 ||
 			zhdr->middle_chunks == 0) {
-		struct list_head *unbuddied = get_cpu_ptr(pool->unbuddied);
-
+		struct list_head *unbuddied;
 		int freechunks = num_free_chunks(zhdr);
+
+		migrate_disable();
+		unbuddied = this_cpu_ptr(pool->unbuddied);
 		spin_lock(&pool->lock);
 		list_add(&zhdr->buddy, &unbuddied[freechunks]);
 		spin_unlock(&pool->lock);
 		zhdr->cpu = smp_processor_id();
-		put_cpu_ptr(pool->unbuddied);
+		migrate_enable();
 	}
 }
@@ -886,8 +888,9 @@ static inline struct z3fold_header *__z3fold_alloc(struct z3fold_pool *pool,
 	int chunks = size_to_chunks(size), i;
 
 lookup:
+	migrate_disable();
 	/* First, try to find an unbuddied z3fold page. */
-	unbuddied = get_cpu_ptr(pool->unbuddied);
+	unbuddied = this_cpu_ptr(pool->unbuddied);
 	for_each_unbuddied_list(i, chunks) {
 		struct list_head *l = &unbuddied[i];
 
@@ -905,7 +908,7 @@ static inline struct z3fold_header *__z3fold_alloc(struct z3fold_pool *pool,
 		    !z3fold_page_trylock(zhdr)) {
 			spin_unlock(&pool->lock);
 			zhdr = NULL;
-			put_cpu_ptr(pool->unbuddied);
+			migrate_enable();
 			if (can_sleep)
 				cond_resched();
 			goto lookup;
@@ -919,7 +922,7 @@ static inline struct z3fold_header *__z3fold_alloc(struct z3fold_pool *pool,
 		    test_bit(PAGE_CLAIMED, &page->private)) {
 			z3fold_page_unlock(zhdr);
 			zhdr = NULL;
-			put_cpu_ptr(pool->unbuddied);
+			migrate_enable();
 			if (can_sleep)
 				cond_resched();
 			goto lookup;
@@ -934,7 +937,7 @@ static inline struct z3fold_header *__z3fold_alloc(struct z3fold_pool *pool,
 		kref_get(&zhdr->refcount);
 		break;
 	}
-	put_cpu_ptr(pool->unbuddied);
+	migrate_enable();
 
 	if (!zhdr) {
 		int cpu;
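
For anyone following along, the per-CPU change boils down to this: get_cpu_ptr() implies preempt_disable(), and since spinlock_t is a sleeping lock on PREEMPT_RT, taking pool->lock inside that section is what produces the scheduling-while-atomic splat in the subject, whereas migrate_disable() only pins the task to its current CPU and leaves it preemptible. A rough sketch of that pattern follows; the names (example_pool, example_add) are hypothetical and this is not the actual z3fold code:

/*
 * Illustrative sketch only: the migrate_disable()/this_cpu_ptr() pattern
 * the hunks above switch to, with made-up types and names.
 */
#include <linux/list.h>
#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/spinlock.h>

struct example_pool {
	struct list_head __percpu *unbuddied;	/* per-CPU list heads */
	spinlock_t lock;			/* sleeping lock on PREEMPT_RT */
};

static void example_add(struct example_pool *pool, struct list_head *entry)
{
	struct list_head *unbuddied;

	/* Pin to this CPU but stay preemptible, unlike get_cpu_ptr(). */
	migrate_disable();
	unbuddied = this_cpu_ptr(pool->unbuddied);

	/* Taking a possibly-sleeping spinlock is legal in this section. */
	spin_lock(&pool->lock);
	list_add(entry, unbuddied);
	spin_unlock(&pool->lock);

	migrate_enable();
}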