Date: Wed, 3 Jul 2024 19:35:36 -0700
From: Andrew Morton
To: Hugh Dickins
Cc: Baolin Wang, Nhat Pham, Yang Shi, Zi Yan, Barry Song, Kefeng Wang,
 David Hildenbrand, Matthew Wilcox, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, Kefeng Wang
Subject: Re: [PATCH hotfix] mm: fix crashes from deferred split racing folio migration
Message-Id: <20240703193536.78bce768a9330da3a361ca8a@linux-foundation.org>
In-Reply-To: <29c83d1a-11ca-b6c9-f92e-6ccb322af510@google.com>
References: <29c83d1a-11ca-b6c9-f92e-6ccb322af510@google.com>

On Tue, 2 Jul 2024 00:40:55 -0700 (PDT) Hugh Dickins wrote:

> Even on 6.10-rc6, I've been seeing elusive "Bad page state"s (often on
> flags when freeing, yet the flags shown are not bad: PG_locked had been
> set and cleared??), and VM_BUG_ON_PAGE(page_ref_count(page) == 0)s from
> deferred_split_scan()'s folio_put(), and a variety of other BUG and WARN
> symptoms implying double free by deferred split and large folio migration.
>
> 6.7 commit 9bcef5973e31 ("mm: memcg: fix split queue list crash when large
> folio migration") was right to fix the memcg-dependent locking broken in
> 85ce2c517ade ("memcontrol: only transfer the memcg data for migration"),
> but missed a subtlety of deferred_split_scan(): it moves folios to its own
> local list to work on them without split_queue_lock, during which time
> folio->_deferred_list is not empty, but even the "right" lock does nothing
> to secure the folio and the list it is on.
>
> Fortunately, deferred_split_scan() is careful to use folio_try_get(): so
> folio_migrate_mapping() can avoid the race by folio_undo_large_rmappable()
> while the old folio's reference count is temporarily frozen to 0 - adding
> such a freeze in the !mapping case too (originally, folio lock and
> unmapping and no swap cache left an anon folio unreachable, so no freezing
> was needed there: but the deferred split queue offers a way to reach it).

There's a conflict when applying Kefeng's "mm: refactor
folio_undo_large_rmappable()"
(https://lkml.kernel.org/r/20240521130315.46072-1-wangkefeng.wang@huawei.com)
on top of this hotfix.

--- mm/memcontrol.c~mm-refactor-folio_undo_large_rmappable
+++ mm/memcontrol.c
@@ -7832,8 +7832,7 @@ void mem_cgroup_migrate(struct folio *ol
 	 * In addition, the old folio is about to be freed after migration, so
 	 * removing from the split queue a bit earlier seems reasonable.
 	 */
-	if (folio_test_large(old) && folio_test_large_rmappable(old))
-		folio_undo_large_rmappable(old);
+	folio_undo_large_rmappable(old);
 	old->memcg_data = 0;
 }

I'm resolving this by simply dropping the above hunk. So Kefeng's patch is
now as below. Please check.

--- a/mm/huge_memory.c~mm-refactor-folio_undo_large_rmappable
+++ a/mm/huge_memory.c
@@ -3258,22 +3258,11 @@ out:
 	return ret;
 }

-void folio_undo_large_rmappable(struct folio *folio)
+void __folio_undo_large_rmappable(struct folio *folio)
 {
 	struct deferred_split *ds_queue;
 	unsigned long flags;

-	if (folio_order(folio) <= 1)
-		return;
-
-	/*
-	 * At this point, there is no one trying to add the folio to
-	 * deferred_list. If folio is not in deferred_list, it's safe
-	 * to check without acquiring the split_queue_lock.
-	 */
-	if (data_race(list_empty(&folio->_deferred_list)))
-		return;
-
 	ds_queue = get_deferred_split_queue(folio);
 	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
 	if (!list_empty(&folio->_deferred_list)) {
--- a/mm/internal.h~mm-refactor-folio_undo_large_rmappable
+++ a/mm/internal.h
@@ -622,7 +622,22 @@ static inline void folio_set_order(struc
 #endif
 }

-void folio_undo_large_rmappable(struct folio *folio);
+void __folio_undo_large_rmappable(struct folio *folio);
+static inline void folio_undo_large_rmappable(struct folio *folio)
+{
+	if (folio_order(folio) <= 1 || !folio_test_large_rmappable(folio))
+		return;
+
+	/*
+	 * At this point, there is no one trying to add the folio to
+	 * deferred_list. If folio is not in deferred_list, it's safe
+	 * to check without acquiring the split_queue_lock.
+	 */
+	if (data_race(list_empty(&folio->_deferred_list)))
+		return;
+
+	__folio_undo_large_rmappable(folio);
+}

 static inline struct folio *page_rmappable_folio(struct page *page)
 {
--- a/mm/page_alloc.c~mm-refactor-folio_undo_large_rmappable
+++ a/mm/page_alloc.c
@@ -2661,8 +2661,7 @@ void free_unref_folios(struct folio_batc
 		unsigned long pfn = folio_pfn(folio);
 		unsigned int order = folio_order(folio);

-		if (order > 0 && folio_test_large_rmappable(folio))
-			folio_undo_large_rmappable(folio);
+		folio_undo_large_rmappable(folio);
 		if (!free_pages_prepare(&folio->page, order))
 			continue;
 		/*
--- a/mm/swap.c~mm-refactor-folio_undo_large_rmappable
+++ a/mm/swap.c
@@ -123,8 +123,7 @@ void __folio_put(struct folio *folio)
 	}

 	page_cache_release(folio);
-	if (folio_test_large(folio) && folio_test_large_rmappable(folio))
-		folio_undo_large_rmappable(folio);
+	folio_undo_large_rmappable(folio);
 	mem_cgroup_uncharge(folio);
 	free_unref_page(&folio->page, folio_order(folio));
 }
@@ -1021,10 +1020,7 @@ void folios_put_refs(struct folio_batch
 			free_huge_folio(folio);
 			continue;
 		}
-		if (folio_test_large(folio) &&
-		    folio_test_large_rmappable(folio))
-			folio_undo_large_rmappable(folio);
-
+		folio_undo_large_rmappable(folio);
 		__page_cache_release(folio, &lruvec, &flags);

 		if (j != i)
--- a/mm/vmscan.c~mm-refactor-folio_undo_large_rmappable
+++ a/mm/vmscan.c
@@ -1439,9 +1439,7 @@ free_it:
 		 */
 		nr_reclaimed += nr_pages;

-		if (folio_test_large(folio) &&
-		    folio_test_large_rmappable(folio))
-			folio_undo_large_rmappable(folio);
+		folio_undo_large_rmappable(folio);
 		if (folio_batch_add(&free_folios, folio) == 0) {
 			mem_cgroup_uncharge_folios(&free_folios);
 			try_to_unmap_flush();
@@ -1848,9 +1846,7 @@ static unsigned int move_folios_to_lru(s
 		if (unlikely(folio_put_testzero(folio))) {
 			__folio_clear_lru_flags(folio);

-			if (folio_test_large(folio) &&
-			    folio_test_large_rmappable(folio))
-				folio_undo_large_rmappable(folio);
+			folio_undo_large_rmappable(folio);
 			if (folio_batch_add(&free_folios, folio) == 0) {
 				spin_unlock_irq(&lruvec->lru_lock);
 				mem_cgroup_uncharge_folios(&free_folios);
_
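
The mechanism the quoted changelog relies on is a refcount freeze: while the
old folio's count is held at 0, deferred_split_scan()'s folio_try_get() must
fail, so the folio can be taken off the deferred split queue without racing
the scanner's local-list handling. Below is only a minimal sketch of that
pattern, not the hotfix's actual diff; the helper name and the expected_refs
parameter are illustrative, and the real change lives in
folio_migrate_mapping().

	#include <linux/page-flags.h>	/* folio_test_large(), folio_test_large_rmappable() */
	#include <linux/page_ref.h>	/* folio_ref_freeze(), folio_ref_unfreeze() */
	#include "internal.h"		/* folio_undo_large_rmappable() (mm-internal) */

	/*
	 * Hedged sketch only -- illustrative name, not a function from the hotfix.
	 * Freeze the old folio's refcount to 0, strip it from the deferred split
	 * queue, then unfreeze.
	 */
	static bool migrate_strip_deferred_split(struct folio *old, int expected_refs)
	{
		if (!folio_test_large(old) || !folio_test_large_rmappable(old))
			return true;	/* cannot be on a deferred split queue */

		/* Fails if someone else holds an extra reference; caller would retry. */
		if (!folio_ref_freeze(old, expected_refs))
			return false;

		/*
		 * With the refcount at 0, folio_try_get() in deferred_split_scan()
		 * cannot take a new reference, so unqueueing here cannot race with
		 * the scanner working on its unlocked local list.
		 */
		folio_undo_large_rmappable(old);

		folio_ref_unfreeze(old, expected_refs);
		return true;
	}

With Kefeng's refactor above applied, the open-coded large/rmappable test in
such a caller only decides whether the freeze is worth attempting; the inline
folio_undo_large_rmappable() wrapper repeats the cheap checks itself before
taking split_queue_lock.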