linux-mm.kvack.org archive mirror
From: Zi Yan <ziy@nvidia.com>
To: Ryan Roberts <ryan.roberts@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>,
	linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
	Yang Shi <shy828301@gmail.com>, Huang Ying <ying.huang@intel.com>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] mm/migrate: put dest folio on deferred split list if source was there.
Date: Tue, 12 Mar 2024 10:26:36 -0400
Message-ID: <373F606D-4A90-4514-8C31-775557B494BB@nvidia.com>
In-Reply-To: <e3e14098-eade-483e-a459-e43200b87941@arm.com>

On 12 Mar 2024, at 4:05, Ryan Roberts wrote:

> On 12/03/2024 03:45, Matthew Wilcox wrote:
>> On Mon, Mar 11, 2024 at 03:58:48PM -0400, Zi Yan wrote:
>>> @@ -1168,6 +1172,17 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
>>>  		folio_lock(src);
>>>  	}
>>>  	locked = true;
>>> +	if (folio_test_large_rmappable(src) &&
>
> I think you also need to check that the order is > 1, now that we support
> order-1 pagecache folios? _deferred_list only exists if order > 1.
>
>>> +		!list_empty(&src->_deferred_list)) {
>>> +		struct deferred_split *ds_queue = get_deferred_split_queue(src);
>>> +
>>> +		spin_lock(&ds_queue->split_queue_lock);
>>> +		ds_queue->split_queue_len--;
>>> +		list_del_init(&src->_deferred_list);
>>> +		spin_unlock(&ds_queue->split_queue_lock);
>>> +		old_page_state |= PAGE_WAS_ON_DEFERRED_LIST;
>>> +	}
>>
>> I have a few problems with this ...
>>
>> Trivial: your whitespace is utterly broken.  You can't use a single tab
>> both to indicate a control flow change and to wrap an over-long line.
>>
>> Slightly more important: You're checking list_empty outside the lock
>> (which is fine in order to avoid unnecessarily acquiring the lock),
>> but you need to re-check it inside the lock in case of a race.  And you
>> didn't mark it as data_race(), so KCSAN will whinge.
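
Both points could be folded into the hunk above. Here is a minimal,
untested sketch (it assumes _deferred_list is only valid for folios of
order > 1, per Ryan's note, and re-checks emptiness under the lock):

	if (folio_order(src) > 1 && folio_test_large_rmappable(src) &&
	    data_race(!list_empty(&src->_deferred_list))) {
		struct deferred_split *ds_queue = get_deferred_split_queue(src);

		spin_lock(&ds_queue->split_queue_lock);
		/*
		 * Re-check under the lock in case a racing split or
		 * free already emptied the list.
		 */
		if (!list_empty(&src->_deferred_list)) {
			ds_queue->split_queue_len--;
			list_del_init(&src->_deferred_list);
			old_page_state |= PAGE_WAS_ON_DEFERRED_LIST;
		}
		spin_unlock(&ds_queue->split_queue_lock);
	}
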
>
> I've seen data_race() used inconsistently around list_empty() without the
> lock held (see deferred_split_folio()). What are the rules? Given that we
> are not doing a memory access here, I don't really understand why it is needed?
> list_empty() uses READ_ONCE() which I thought would be sufficient? (I've just
> added a similar lockless check in my swap-out series - will add data_race() if
> needed, but previously concluded it's not).
>
>>
>> Much more important: You're doing this with a positive refcount, which
>> breaks the (undocumented) logic in deferred_split_scan() that a folio
>> with a positive refcount will not be removed from the list.
>>
>> Maximally important: We shouldn't be doing any of this!  This folio is
>> on the deferred split list.  We shouldn't be migrating it as a single
>> entity; we should be splitting it now that we're in a context where we
>> can do the right thing and split it.  Documentation/mm/transhuge.rst
>> is clear that we don't split it straight away due to locking context.
>> Splitting it on migration is clearly the right thing to do.
>>
>> If splitting fails, we should just fail the migration; splitting fails
>> due to excess references, and if the source folio has excess references,
>> then migration would fail too.
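
As a minimal sketch of that approach (untested; it assumes split_folio()
can be called here, where src is locked and we hold a reference, and
"out" stands in for the existing unmap error path):

	if (folio_order(src) > 1 && folio_test_large_rmappable(src) &&
	    data_race(!list_empty(&src->_deferred_list))) {
		/*
		 * Split now that we are in a context that allows it;
		 * a successful split also takes src off the deferred
		 * split list.
		 */
		rc = split_folio(src);
		if (rc)
			/* excess references: migration would fail anyway */
			goto out;
	}
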
>
> This comment makes me wonder what we do in split_huge_page_to_list_to_order() if
> the target order is greater than 1 and the input folio is on the deferred split
> list. Looks like we currently just remove it from the deferred list. Is there a
> case for putting any output folios that are still partially mapped back on the
> deferred list, or splitting them to a lower order such that all output folios
> are fully mapped, and all unmapped portions are freed?

I probably would let the caller of split_huge_page_to_list_to_order() decide
whether the output folios should be put back on the deferred list. The caller
should determine the right order to split to. Letting
split_huge_page_to_list_to_order() change new_order would confuse the caller
and complicate the handling of the output folios.
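
To illustrate the caller-side policy, after a successful
split_huge_page_to_list_to_order(folio, list, new_order) the caller
could walk the output folios and re-add whichever ones it still
considers partially mapped. An untested sketch, where
partially_mapped() is a made-up predicate standing in for the caller's
policy and deferred_split_folio() is the existing helper:

	struct folio *new;

	list_for_each_entry(new, list, lru) {
		/* made-up predicate: the caller decides what goes back */
		if (folio_order(new) > 1 && partially_mapped(new))
			deferred_split_folio(new);
	}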



--
Best Regards,
Yan, Zi

Thread overview: 15+ messages
2024-03-11 19:58 Zi Yan
2024-03-12  3:45 ` Matthew Wilcox
2024-03-12  8:05   ` Ryan Roberts
2024-03-12 14:26     ` Zi Yan [this message]
2024-03-12 14:13   ` Zi Yan
2024-03-12 14:19     ` Matthew Wilcox
2024-03-12 15:51       ` Zi Yan
2024-03-12 16:38         ` Matthew Wilcox
2024-03-12 18:32           ` Zi Yan
2024-03-12 18:46             ` Matthew Wilcox
2024-03-12 19:45               ` Zi Yan
2024-03-13  2:07               ` Yin, Fengwei
2024-03-13  2:33                 ` Yin, Fengwei
2024-03-12  7:27 ` Baolin Wang
2024-03-12 13:49   ` Zi Yan
