From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 10 Nov 2022 22:48:52 +0900
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg,
	Joel Fernandes, Roman Gushchin, Matthew Wilcox, paulmck@kernel.org,
	rcu@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	patches@lists.linux.dev, kernel test robot
Subject: Re: [PATCH v2 2/3] mm/migrate: make isolate_movable_page() skip slab pages
References: <20221107170554.7869-1-vbabka@suse.cz>
 <20221107170554.7869-3-vbabka@suse.cz>
In-Reply-To: <20221107170554.7869-3-vbabka@suse.cz>

On Mon, Nov 07, 2022 at 06:05:53PM +0100, Vlastimil Babka wrote:
> In the next commit we want to rearrange struct slab fields to allow a larger
> rcu_head. Afterwards, the page->mapping field will overlap with SLUB's "struct
> list_head slab_list", where the value of prev pointer can become LIST_POISON2,
> which is 0x122 + POISON_POINTER_DELTA. Unfortunately the bit 1 being set can
> confuse PageMovable() to be a false positive and cause a GPF as reported by lkp
> [1].
>
> To fix this, make isolate_movable_page() skip pages with the PageSlab flag set.
> This is a bit tricky as we need to add memory barriers to SLAB and SLUB's page
> allocation and freeing, and their counterparts to isolate_movable_page().
>
> Based on my RFC from [2]. Added a comment update from Matthew's variant in [3]
> and, as done there, moved the PageSlab checks to happen before trying to take
> the page lock.
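
Just to spell out for myself why the poisoned prev pointer trips the check,
here is a reduced sketch, not the kernel code verbatim, with the constants as
I read them from page-flags.h and the commit message above:

	/* __PageMovable() only inspects the low bits of page->mapping */
	#define PAGE_MAPPING_ANON	0x1UL
	#define PAGE_MAPPING_MOVABLE	0x2UL
	#define PAGE_MAPPING_FLAGS	(PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE)

	static inline int looks_movable(unsigned long mapping_word)
	{
		return (mapping_word & PAGE_MAPPING_FLAGS) == PAGE_MAPPING_MOVABLE;
	}

	/*
	 * After the struct slab rearrangement, the word overlapping
	 * page->mapping can hold LIST_POISON2 = 0x122 + POISON_POINTER_DELTA.
	 * The delta does not change the low bits on common configs, and
	 * 0x122 & 0x3 == 0x2 == PAGE_MAPPING_MOVABLE, so looks_movable()
	 * returns 1 for what is really a slab page.
	 */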
>
> [1] https://lore.kernel.org/all/208c1757-5edd-fd42-67d4-1940cc43b50f@intel.com/
> [2] https://lore.kernel.org/all/aec59f53-0e53-1736-5932-25407125d4d4@suse.cz/
> [3] https://lore.kernel.org/all/YzsVM8eToHUeTP75@casper.infradead.org/
>
> Reported-by: kernel test robot
> Signed-off-by: Vlastimil Babka
> ---
>  mm/migrate.c | 15 ++++++++++++---
>  mm/slab.c    |  6 +++++-
>  mm/slub.c    |  6 +++++-
>  3 files changed, 22 insertions(+), 5 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 1379e1912772..959c99cff814 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -74,13 +74,22 @@ int isolate_movable_page(struct page *page, isolate_mode_t mode)
>  	if (unlikely(!get_page_unless_zero(page)))
>  		goto out;
>
> +	if (unlikely(PageSlab(page)))
> +		goto out_putpage;
> +	/* Pairs with smp_wmb() in slab freeing, e.g. SLUB's __free_slab() */
> +	smp_rmb();
>  	/*
> -	 * Check PageMovable before holding a PG_lock because page's owner
> -	 * assumes anybody doesn't touch PG_lock of newly allocated page
> -	 * so unconditionally grabbing the lock ruins page's owner side.
> +	 * Check movable flag before taking the page lock because
> +	 * we use non-atomic bitops on newly allocated page flags so
> +	 * unconditionally grabbing the lock ruins page's owner side.
>  	 */
>  	if (unlikely(!__PageMovable(page)))
>  		goto out_putpage;
> +	/* Pairs with smp_wmb() in slab allocation, e.g. SLUB's alloc_slab_page() */
> +	smp_rmb();
> +	if (unlikely(PageSlab(page)))
> +		goto out_putpage;
> +
>  	/*
>  	 * As movable pages are not isolated from LRU lists, concurrent
>  	 * compaction threads can race against page migration functions
> diff --git a/mm/slab.c b/mm/slab.c
> index 59c8e28f7b6a..219beb48588e 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -1370,6 +1370,8 @@ static struct slab *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
>
>  	account_slab(slab, cachep->gfporder, cachep, flags);
>  	__folio_set_slab(folio);
> +	/* Make the flag visible before any changes to folio->mapping */
> +	smp_wmb();
>  	/* Record if ALLOC_NO_WATERMARKS was set when allocating the slab */
>  	if (sk_memalloc_socks() && page_is_pfmemalloc(folio_page(folio, 0)))
>  		slab_set_pfmemalloc(slab);
> @@ -1387,9 +1389,11 @@ static void kmem_freepages(struct kmem_cache *cachep, struct slab *slab)
>
>  	BUG_ON(!folio_test_slab(folio));
>  	__slab_clear_pfmemalloc(slab);
> -	__folio_clear_slab(folio);
>  	page_mapcount_reset(folio_page(folio, 0));
>  	folio->mapping = NULL;
> +	/* Make the mapping reset visible before clearing the flag */
> +	smp_wmb();
> +	__folio_clear_slab(folio);
>
>  	if (current->reclaim_state)
>  		current->reclaim_state->reclaimed_slab += 1 << order;
> diff --git a/mm/slub.c b/mm/slub.c
> index 99ba865afc4a..5e6519d5169c 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1800,6 +1800,8 @@ static inline struct slab *alloc_slab_page(gfp_t flags, int node,
>
>  	slab = folio_slab(folio);
>  	__folio_set_slab(folio);
> +	/* Make the flag visible before any changes to folio->mapping */
> +	smp_wmb();
>  	if (page_is_pfmemalloc(folio_page(folio, 0)))
>  		slab_set_pfmemalloc(slab);
>
> @@ -2000,8 +2002,10 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab)
>  	int pages = 1 << order;
>
>  	__slab_clear_pfmemalloc(slab);
> -	__folio_clear_slab(folio);
>  	folio->mapping = NULL;
> +	/* Make the mapping reset visible before clearing the flag */
> +	smp_wmb();
> +	__folio_clear_slab(folio);
>  	if (current->reclaim_state)
>  		current->reclaim_state->reclaimed_slab += pages;
>  	unaccount_slab(slab, order, s);
> --
> 2.38.0

This looks correct to me.
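
The way I convinced myself the barrier pairing works, sketched with the
reads and writes labeled (a schematic, not the code verbatim):

	/* slab free side, e.g. __free_slab() / kmem_freepages() */
	folio->mapping = NULL;			/* W1 */
	smp_wmb();
	__folio_clear_slab(folio);		/* W2: clears PG_slab */

	/* isolate_movable_page() */
	if (unlikely(PageSlab(page)))		/* R1: reads PG_slab */
		goto out_putpage;
	smp_rmb();
	if (unlikely(!__PageMovable(page)))	/* R2: reads ->mapping */
		goto out_putpage;

	/*
	 * If R1 observes W2 (PG_slab already cleared by the free), the
	 * smp_wmb()/smp_rmb() pair guarantees R2 observes W1, i.e. the
	 * NULLed ->mapping, so a freed slab cannot look movable.  The
	 * allocation side is symmetric: __folio_set_slab() + smp_wmb()
	 * before any ->mapping overwrite pairs with the second smp_rmb()
	 * and PageSlab() recheck, so a new slab whose ->mapping garbage
	 * slips past R2 is still caught by the recheck.
	 */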
Acked-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Just noting to myself to avoid confusion in the future:

- When one sees PageSlab() == false, a subsequent __PageMovable() == true
  cannot be a false positive caused by a slab page, because the free path
  makes the reset of ->mapping visible before it clears PG_slab.

- When one sees __PageMovable() == true for a slab page, the following
  PageSlab() check must return true, because the allocation path makes
  setting PG_slab visible before it writes to the ->mapping field.

I hope this gets nicely reshaped after Matthew's frozen refcount series.

--
Thanks,
Hyeonggon