From: Yang Shi <shy828301@gmail.com>
To: Zi Yan <ziy@nvidia.com>
Cc: Hugh Dickins <hughd@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	 Usama Arif <usamaarif642@gmail.com>,
	Wei Yang <richard.weiyang@gmail.com>,
	 "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	Matthew Wilcox <willy@infradead.org>,
	 David Hildenbrand <david@redhat.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	 Baolin Wang <baolin.wang@linux.alibaba.com>,
	Barry Song <baohua@kernel.org>,
	 Kefeng Wang <wangkefeng.wang@huawei.com>,
	Ryan Roberts <ryan.roberts@arm.com>,
	 Nhat Pham <nphamcs@gmail.com>, Chris Li <chrisl@kernel.org>,
	linux-kernel@vger.kernel.org,  linux-mm@kvack.org
Subject: Re: [PATCH hotfix 1/2] mm/thp: fix deferred split queue not partially_mapped
Date: Fri, 25 Oct 2024 11:36:15 -0700	[thread overview]
Message-ID: <CAHbLzkqEhbw89HMh7h-r6M1xM5vw7bUZDNO7KEgPyaguO32d-Q@mail.gmail.com> (raw)
In-Reply-To: <E5A75697-55C7-4335-8D86-EE5CB6A99C4F@nvidia.com>

On Fri, Oct 25, 2024 at 8:32 AM Zi Yan <ziy@nvidia.com> wrote:
>
> On 25 Oct 2024, at 1:41, Hugh Dickins wrote:
>
> > On Thu, 24 Oct 2024, Zi Yan wrote:
> >> On 24 Oct 2024, at 0:10, Hugh Dickins wrote:
> >>
> >>> The new unlocked list_del_init() in deferred_split_scan() is buggy.
> >>> I gave bad advice: it looks plausible since that's a local on-stack
> >>> list, but the fact is that it can race with a third party freeing or
> >>> migrating the preceding folio (properly unqueueing it with refcount 0
> >>> while holding split_queue_lock), thereby corrupting the list linkage.
> >>>
> >>> The obvious answer would be to take split_queue_lock there: but it has
> >>> a long history of contention, so I'm reluctant to add to that. Instead,
> >>> make sure that there is always one safe (raised refcount) folio before,
> >>> by delaying its folio_put().  (And of course I was wrong to suggest
> >>> updating split_queue_len without the lock: leave that until the splice.)
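
For reference, the delayed-put pattern described above looks roughly
like this, as I read the patch (simplified: the underused/stats handling
is elided, and names may differ from the final version):

	struct folio *folio, *next, *prev = NULL;
	int split = 0, removed = 0;

	/* each folio on the local "list" arrived with a raised refcount */
	list_for_each_entry_safe(folio, next, &list, _deferred_list) {
		bool did_split = false;

		if (folio_trylock(folio)) {
			if (!split_folio(folio)) {
				did_split = true;
				split++;
			}
			folio_unlock(folio);
		}
		/* split_folio() unqueues the folio itself on success */
		if (!did_split && !folio_test_partially_mapped(folio)) {
			list_del_init(&folio->_deferred_list);
			removed++;
		}
		/*
		 * Delay the folio_put(): while the preceding folio keeps
		 * its raised refcount, no third party can free or migrate
		 * it and unqueue it (under split_queue_lock) while the
		 * unlocked list_del_init() above touches the linkage.
		 */
		if (prev)
			folio_put(prev);
		prev = folio;
	}

	/* ... splice survivors back and do split_queue_len -= removed
	   under the lock; only then drop the last delayed reference ... */
	if (prev)
		folio_put(prev);
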
> >>
> >> I feel like this is not the right approach, since it breaks the existing
> >> rule for changing folio->_deferred_list, namely taking
> >> ds_queue->split_queue_lock for serialization. The contention might not be
> >> as high as you think, since if a folio is split, the split_queue_lock
> >> needs to be taken during the split anyway. So the worst case is the same
> >> as if all folios were split. Do you see significant perf degradation from
> >> taking the lock when doing list_del_init()?
> >>
> >> I am afraid if we take this route, we might hit hard-to-debug bugs
> >> in the future when someone touches the code.
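
Concretely, I take the locked version being suggested to be just the
following, reusing the surrounding function's did_split, ds_queue and
flags (my sketch, not a posted patch):

		if (!did_split && !folio_test_partially_mapped(folio)) {
			spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
			list_del_init(&folio->_deferred_list);
			ds_queue->split_queue_len--;
			spin_unlock_irqrestore(&ds_queue->split_queue_lock,
					       flags);
		}
		folio_put(folio);	/* no need to delay the put here */

That trades one lock/unlock per unqueued folio for the delayed-put trick.
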
> >
> > You have a good point: I am adding another element of trickiness
> > to that already-tricky local-but-not-quite list - which has tripped
> > us up a few times in the past.
> >
> > I do still feel that this solution is right in the spirit of that list;
> > but I've certainly not done any performance measurement to justify it,
> > nor would I ever trust my skill to do so.  I just tried to solve the
> > corruptions in what I thought was the best way.
> >
> > (To be honest, I found this solution to the corruptions first, and thought
> > the bug went back to the original implementation: that its put_page() at the
> > end of the loop was premature all along.  It was only when writing the
> > commit message two days ago that I came to realize that even put_page()
> > or folio_put() would safely be using the lock to unqueue: that it is only
> > this new list_del_init() which is the exception that introduces the bug.)
> >
> > Looking at vmstats, I'm coming to believe that the performance advantage
> > of this way is likely to be in the noise: mTHPs alone, and the
> > !partially_mapped case on top, are greatly increasing the split_deferred
> > stats, and may give rise to renewed complaints of lock contention, with
> > or without this optimization.
> >
> > While I still prefer to stick with what's posted and most tested, I am
> > giving the locked version a run overnight.  Thanks a lot for the reviews
> > and acks everyone: at present Zi Yan is in the minority preferring a
> > locked version, but please feel free to change your vote if you wish.
>
> Thanks a lot for taking the time to check the locked version. Looking
> forward to the result. BTW, I am not going to block this patch, since it
> fixes the bug.
>
> The tricky part in deferred_split_scan() is always the use of
> folio->_deferred_list without taking split_queue_lock. I am thinking about
> using a folio_batch to store the out-of-split_queue folios, so that
> _deferred_list is not touched while these folios are being split. Basically,
>
> 1. loop through split_queue and move folios to a folio_batch until the
>    folio_batch is full;
> 2. loop through the folio_batch to try to split each folio;
> 3. move the remaining folios back to split_queue.
>
> With this approach, split_queue_lock might be taken more often if there are
> more than 31 folios (the folio_batch max size) on split_queue, and it will
> be held longer in step 3, since the remaining folios need to be added
> back to split_queue one by one instead of with a single list splice.
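
If I understand the idea, the scan would be restructured roughly as
below (untested sketch: the step-3 requeue criteria are simplified, and
the nr_to_scan accounting is elided):

	struct folio_batch fbatch;
	struct folio *folio, *next;
	unsigned long flags;
	unsigned int i;

	folio_batch_init(&fbatch);

	/* 1. move up to 31 folios from split_queue into the batch */
	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
	list_for_each_entry_safe(folio, next, &ds_queue->split_queue,
				 _deferred_list) {
		list_del_init(&folio->_deferred_list);
		ds_queue->split_queue_len--;
		if (!folio_try_get(folio))
			continue;	/* lost race with folio_put() */
		if (!folio_batch_add(&fbatch, folio))
			break;		/* batch is full */
	}
	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);

	/* 2. try to split each folio; _deferred_list stays untouched,
	      so list_empty(&folio->_deferred_list) is true throughout */
	for (i = 0; i < folio_batch_count(&fbatch); i++) {
		folio = fbatch.folios[i];
		if (folio_trylock(folio)) {
			split_folio(folio);
			folio_unlock(folio);
		}
	}

	/* 3. requeue the unsplit folios, one by one, under the lock;
	      the list_empty() check guards against a racing requeue */
	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
	for (i = 0; i < folio_batch_count(&fbatch); i++) {
		folio = fbatch.folios[i];
		if (folio_test_large(folio) &&
		    list_empty(&folio->_deferred_list)) {
			list_add_tail(&folio->_deferred_list,
				      &ds_queue->split_queue);
			ds_queue->split_queue_len++;
		}
	}
	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);

	folio_batch_release(&fbatch);	/* drop the refs taken in step 1 */
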

IMHO, the folio_batch approach is worth trying. The deferred list lock
is only held when deleting a folio from the deferred list and updating
the list length. Re-acquiring the lock every 31 folios doesn't seem too
bad. Of course, some benchmarking is needed.

The other subtle thing is that folio->_deferred_list is reused when the
folio is moved to the local on-stack list, so a !list_empty() check on
folio->_deferred_list still succeeds even though the folio is actually
on the local on-stack list rather than on split_queue. Some code may
depend, or inadvertently depend, on this behavior. Using a folio_batch
may break such assumptions, but depending on this subtle behavior is
definitely not reliable IMHO.
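
To make that concrete, the unqueue pattern elsewhere looks roughly like
this (simplified from memory of the freeing/migration paths):

	if (!list_empty(&folio->_deferred_list)) {
		struct deferred_split *ds_queue =
				get_deferred_split_queue(folio);
		unsigned long flags;

		spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
		if (!list_empty(&folio->_deferred_list)) {
			ds_queue->split_queue_len--;
			list_del_init(&folio->_deferred_list);
		}
		spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
	}

The outer !list_empty() check is also true while the folio sits on
deferred_split_scan()'s local list, so this path ends up editing that
local list under the lock; with a folio_batch, _deferred_list would be
empty at that point and the whole block would be skipped.
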

>
> Let me know your thoughts. I can look into this if this approach sounds
> promising. Thanks.
>
>
> Best Regards,
> Yan, Zi


Thread overview: 20+ messages
2024-10-24  4:10 Hugh Dickins
2024-10-24  4:13 ` [PATCH hotfix 2/2] mm/thp: fix deferred split unqueue naming and locking Hugh Dickins
2024-10-24 20:00   ` Yang Shi
2024-10-25  1:21     ` Yang Shi
2024-10-25  6:57     ` Hugh Dickins
2024-10-25 16:34       ` Yang Shi
2024-10-27  5:35         ` Hugh Dickins
2024-10-24 20:52   ` David Hildenbrand
2024-10-25  1:25     ` Yang Shi
2024-10-27  7:07     ` Hugh Dickins
2024-10-24 10:20 ` [PATCH hotfix 1/2] mm/thp: fix deferred split queue not partially_mapped Usama Arif
2024-10-24 20:39 ` David Hildenbrand
2024-10-24 22:37 ` Zi Yan
2024-10-25  5:41   ` Hugh Dickins
2024-10-25 15:32     ` Zi Yan
2024-10-25 18:36       ` Yang Shi [this message]
2024-10-27  5:08         ` Hugh Dickins
2024-10-28 18:36           ` Yang Shi
2024-10-27  4:43       ` Hugh Dickins
2024-10-25  1:56 ` Baolin Wang
