linux-mm.kvack.org archive mirror
From: Hugh Dickins <hughd@google.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Hugh Dickins <hughd@google.com>,
	Baolin Wang <baolin.wang@linux.alibaba.com>,
	Nhat Pham <nphamcs@gmail.com>, Yang Shi <shy828301@gmail.com>,
	Zi Yan <ziy@nvidia.com>, Barry Song <baohua@kernel.org>,
	Kefeng Wang <wangkefeng.wang@huawei.com>,
	David Hildenbrand <david@redhat.com>,
	Matthew Wilcox <willy@infradead.org>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH hotfix] mm: fix crashes from deferred split racing folio migration
Date: Wed, 3 Jul 2024 20:21:22 -0700 (PDT)	[thread overview]
Message-ID: <825653a7-a4d4-89f2-278f-4b18f8f8da5d@google.com> (raw)
In-Reply-To: <20240703193536.78bce768a9330da3a361ca8a@linux-foundation.org>

On Wed, 3 Jul 2024, Andrew Morton wrote:
> On Tue, 2 Jul 2024 00:40:55 -0700 (PDT) Hugh Dickins <hughd@google.com> wrote:
> 
> > Even on 6.10-rc6, I've been seeing elusive "Bad page state"s (often on
> > flags when freeing, yet the flags shown are not bad: PG_locked had been
> > set and cleared??), and VM_BUG_ON_PAGE(page_ref_count(page) == 0)s from
> > deferred_split_scan()'s folio_put(), and a variety of other BUG and WARN
> > symptoms implying double free by deferred split and large folio migration.
> > 
> > 6.7 commit 9bcef5973e31 ("mm: memcg: fix split queue list crash when large
> > folio migration") was right to fix the memcg-dependent locking broken in
> > 85ce2c517ade ("memcontrol: only transfer the memcg data for migration"),
> > but missed a subtlety of deferred_split_scan(): it moves folios to its own
> > local list to work on them without split_queue_lock, during which time
> > folio->_deferred_list is not empty, but even the "right" lock does nothing
> > to secure the folio and the list it is on.
> > 
> > Fortunately, deferred_split_scan() is careful to use folio_try_get(): so
> > folio_migrate_mapping() can avoid the race by calling folio_undo_large_rmappable()
> > while the old folio's reference count is temporarily frozen to 0 - adding
> > such a freeze in the !mapping case too (originally, folio lock and
> > unmapping and no swap cache left an anon folio unreachable, so no freezing
> > was needed there: but the deferred split queue offers a way to reach it).
> 
> There's a conflict when applying Kefeng's "mm: refactor
> folio_undo_large_rmappable()"
> (https://lkml.kernel.org/r/20240521130315.46072-1-wangkefeng.wang@huawei.com)
> on top of this hotfix.

Yes, anticipated in my "below the --- line" comments:
sorry for giving you this nuisance.

And perhaps a conflict with another of Kefeng's patches, which deletes a
hunk in mm/migrate.c just above where I add one: that is indeed how it
should end up, with the hunk deleted by Kefeng and the hunk added by me.

> 
> --- mm/memcontrol.c~mm-refactor-folio_undo_large_rmappable
> +++ mm/memcontrol.c
> @@ -7832,8 +7832,7 @@ void mem_cgroup_migrate(struct folio *ol
>  	 * In addition, the old folio is about to be freed after migration, so
>  	 * removing from the split queue a bit earlier seems reasonable.
>  	 */
> -	if (folio_test_large(old) && folio_test_large_rmappable(old))
> -		folio_undo_large_rmappable(old);
> +	folio_undo_large_rmappable(old);
>  	old->memcg_data = 0;
>  }
> 
> I'm resolving this by simply dropping the above hunk.  So Kefeng's
> patch is now as below.  Please check.

Checked, and that is correct, thank you Andrew.  Correct, but not quite
complete: I'm sure that if Kefeng had written his patch after mine, he
would have made the equivalent change in mm/migrate.c:

--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -443,8 +443,7 @@ int folio_migrate_mapping(struct address_space *mapping,
 	}
 
 	/* Take off deferred split queue while frozen and memcg set */
-	if (folio_test_large(folio) && folio_test_large_rmappable(folio))
-		folio_undo_large_rmappable(folio);
+	folio_undo_large_rmappable(folio);
 
 	/*
 	 * Now we know that no one else is looking at the folio:

But there's no harm done if you push out a tree without that additional
mod: we can add it as a fixup afterwards; it's no more than a cleanup.

(I'm on the lookout for an mm.git update, hope to give it a try when it
appears.)

Thanks,
Hugh


Thread overview: 16+ messages
2024-07-02  7:40 Hugh Dickins
2024-07-02  9:25 ` Baolin Wang
2024-07-02 16:15   ` Hugh Dickins
2024-07-03  1:51     ` Baolin Wang
2024-07-03  2:13     ` Andrew Morton
2024-07-03 14:30 ` Zi Yan
2024-07-03 16:21   ` David Hildenbrand
2024-07-03 16:22     ` Zi Yan
2024-07-04  2:35 ` Andrew Morton
2024-07-04  3:21   ` Hugh Dickins [this message]
2024-07-04  3:28     ` Andrew Morton
2024-07-04  6:12     ` Kefeng Wang
2024-07-06 21:29       ` Hugh Dickins
2024-07-07  2:11         ` Andrew Morton
2024-07-07  3:07           ` Kefeng Wang
2024-07-07  8:28         ` David Hildenbrand
