From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 3 Oct 2022 18:00:35 +0100
From: Matthew Wilcox <willy@infradead.org>
To: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: Hugh Dickins, Vlastimil Babka, David Laight, Joel Fernandes,
 Andrew Morton, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 rcu@vger.kernel.org
Subject: Re: amusing SLUB compaction bug when CC_OPTIMIZE_FOR_SIZE
References: <35502bdd-1a78-dea1-6ac3-6ff1bcc073fa@suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
On Sun, Oct 02, 2022 at 02:48:02PM +0900, Hyeonggon Yoo wrote:
> Just one more thing, rcu_leak_callback too. RCU seems to use it
> internally to catch double call_rcu().
>
> And some suggestions:
> - what about adding a runtime WARN() in the slab init code to catch
>   unexpected arch/toolchain issues?
> - instead of 4, we could use a macro definition, like
>   (PAGE_MAPPING_FLAGS + 1)?

I think the real problem here is that isolate_movable_page() is
insufficiently paranoid.  Looking at the gyrations that GUP and the
page cache go through to convince themselves that the page they got
really is the page they wanted, there are a few pieces missing here
(eg checking that you actually got a refcount on _this_ page, and not
on some random other page you were temporarily part of a compound
page with).

This patch does four things:

- Turns one of the comments into English.  There are some others
  which I'm still scratching my head over.
- Uses a folio to help distinguish which operations are being done
  to the head vs the specific page (this is somewhat an abuse of the
  folio concept, but it's acceptable).
- Adds the aforementioned check that we're actually operating on the
  page that we think we want to be.
- Adds a check that the folio isn't secretly a slab.

We could put the slab check in PageMapping and call it after taking
the folio lock, but that seems pointless.  It's the acquisition of
the refcount which stabilises the slab flag, not holding the lock.
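The "speculative get, then recheck" dance can be sketched outside the
kernel with C11 atomics.  This is only a toy model of the pattern, not
kernel code: the names `head`, `page_t`, `try_get` and `pin_page` are
hypothetical stand-ins for the folio, folio_try_get() and the
page_folio(page) != folio recheck in the patch below.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Toy model: each page points at its current head, and the head
 * carries a refcount.  A concurrent thread may retarget p->head. */
struct head {
	atomic_int refcount;
};

struct page_t {
	_Atomic(struct head *) head;
};

/* Take a reference only if the object is still live (refcount > 0),
 * the moral equivalent of folio_try_get()/get_page_unless_zero(). */
static bool try_get(struct head *h)
{
	int ref = atomic_load(&h->refcount);
	while (ref > 0) {
		if (atomic_compare_exchange_weak(&h->refcount, &ref, ref + 1))
			return true;
	}
	return false;
}

static void put(struct head *h)
{
	atomic_fetch_sub(&h->refcount, 1);
}

/* Snapshot the head, take a speculative reference, then confirm the
 * page still belongs to the same head -- otherwise we pinned some
 * unrelated object and must drop that reference and fail. */
static bool pin_page(struct page_t *p, struct head **out)
{
	struct head *h = atomic_load(&p->head);

	if (!try_get(h))
		return false;
	if (atomic_load(&p->head) != h) {
		put(h);		/* lost the race; undo the stray get */
		return false;
	}
	*out = h;
	return true;
}

int main(void)
{
	struct head h = { .refcount = 1 };
	struct page_t p = { .head = &h };
	struct head *got;

	if (pin_page(&p, &got))
		printf("pinned, refcount=%d\n", atomic_load(&got->refcount));
	return 0;
}

The key point mirrors the patch: the reference is taken on whatever
head the page pointed at when we first looked, so only after the get
succeeds is it meaningful to verify the page still belongs to that
head.  It is that successful get, not any lock, which stabilises what
the object is.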
diff --git a/mm/migrate.c b/mm/migrate.c
index 6a1597c92261..a65598308c83 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -59,6 +59,7 @@
 int isolate_movable_page(struct page *page, isolate_mode_t mode)
 {
+	struct folio *folio = page_folio(page);
 	const struct movable_operations *mops;
 
 	/*
@@ -70,16 +71,23 @@ int isolate_movable_page(struct page *page, isolate_mode_t mode)
 	 * the put_page() at the end of this block will take care of
 	 * release this page, thus avoiding a nasty leakage.
 	 */
-	if (unlikely(!get_page_unless_zero(page)))
+	if (unlikely(!folio_try_get(folio)))
 		goto out;
 
+	/* Recheck the page is still part of the folio we just got */
+	if (unlikely(page_folio(page) != folio))
+		goto out_put;
+
 	/*
-	 * Check PageMovable before holding a PG_lock because page's owner
-	 * assumes anybody doesn't touch PG_lock of newly allocated page
-	 * so unconditionally grabbing the lock ruins page's owner side.
+	 * Check movable flag before taking the folio lock because
+	 * we use non-atomic bitops on newly allocated page flags so
+	 * unconditionally grabbing the lock ruins page's owner side.
 	 */
-	if (unlikely(!__PageMovable(page)))
-		goto out_putpage;
+	if (unlikely(!__folio_test_movable(folio)))
+		goto out_put;
+	if (unlikely(folio_test_slab(folio)))
+		goto out_put;
+
 	/*
 	 * As movable pages are not isolated from LRU lists, concurrent
 	 * compaction threads can race against page migration functions
@@ -91,8 +99,8 @@ int isolate_movable_page(struct page *page, isolate_mode_t mode)
 	 * lets be sure we have the page lock
 	 * before proceeding with the movable page isolation steps.
 	 */
-	if (unlikely(!trylock_page(page)))
-		goto out_putpage;
+	if (unlikely(!folio_trylock(folio)))
+		goto out_put;
 
 	if (!PageMovable(page) || PageIsolated(page))
 		goto out_no_isolated;
@@ -106,14 +114,14 @@ int isolate_movable_page(struct page *page, isolate_mode_t mode)
 	/* Driver shouldn't use PG_isolated bit of page->flags */
 	WARN_ON_ONCE(PageIsolated(page));
 	SetPageIsolated(page);
-	unlock_page(page);
+	folio_unlock(folio);
 
 	return 0;
 
 out_no_isolated:
-	unlock_page(page);
-out_putpage:
-	put_page(page);
+	folio_unlock(folio);
+out_put:
+	folio_put(folio);
 out:
 	return -EBUSY;
 }