From: Zi Yan <ziy@nvidia.com>
To: Hugh Dickins <hughd@google.com>
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>,
Ryan Roberts <ryan.roberts@arm.com>,
David Hildenbrand <david@redhat.com>,
Yang Shi <yang@os.amperecomputing.com>,
Miaohe Lin <linmiaohe@huawei.com>,
Kefeng Wang <wangkefeng.wang@huawei.com>,
Yu Zhao <yuzhao@google.com>, John Hubbard <jhubbard@nvidia.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org,
Kairui Song <kasong@tencent.com>,
Liu Shixin <liushixin2@huawei.com>
Subject: Re: [PATCH v9 2/8] mm/huge_memory: add two new (not yet used) functions for folio_split()
Date: Wed, 05 Mar 2025 16:08:04 -0500
Message-ID: <43642DB0-17E5-4B3E-9095-665806FE38C5@nvidia.com>
In-Reply-To: <238c28cb-ce1c-40f5-ec9e-82c5312f0947@google.com>
On 5 Mar 2025, at 15:50, Hugh Dickins wrote:
> On Wed, 5 Mar 2025, Zi Yan wrote:
>> On 4 Mar 2025, at 6:49, Hugh Dickins wrote:
>>>
>>> I think (might be wrong, I'm in a rush) my mods are all to this
>>> "add two new (not yet used) functions for folio_split()" patch:
>>> please merge them in if you agree.
>>>
>>> 1. From source inspection, it looks like a folio_set_order() was missed.
>>
>> Actually no. folio_set_order(folio, new_order) is called multiple times
>> in the for loop above. It is duplicated but not missing.
>
> I was about to disagree with you, when at last I saw that, yes,
> it is doing that on "folio" at the time of setting up "new_folio".
>
> That is confusing: in all other respects, that loop is reading folio
> to set up new_folio. Do you have a reason for doing it there?
No. I agree your fix is better. I was just pointing out that
folio_set_order() should not trigger a bug.
>
> The transient "nested folio" situation is anomalous either way.
> I'd certainly prefer it to be done at the point where you
> ClearPageCompound when !new_order; but if you think there's an issue
> with racing isolate_migratepages_block() or something like that, which
> your current placement handles better, then please add a line of comment
> both where you do it and where I expected to find it - thanks.
Sure. I will use your patch unless I find some racing issue.
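
For reference, a simplified sketch of the two placements being discussed
(hand-written pseudocode, not the actual diff; the loop body is abbreviated):

	/* v9 placement (simplified): the original folio's order is set
	 * inside the loop that initializes each new_folio, so it runs
	 * once per new folio instead of once in total. */
	for (index = new_nr_pages; index < nr_pages; index += new_nr_pages) {
		struct folio *new_folio = page_folio(folio_page(folio, index));

		/* ... copy flags, ->mapping, ->index from folio ... */
		if (new_order)
			folio_set_order(folio, new_order);
	}

	/* Hugh's placement (simplified): set it once, at the point
	 * where the !new_order case does ClearPageCompound. */
	if (new_order)
		folio_set_order(folio, new_order);
	else
		ClearPageCompound(&folio->page);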
>
> (Historically, there was quite a lot of difficulty in getting the order
> of events in __split_huge_page_tail() to be safe: I wonder whether we
> shall see a crop of new weird bugs from these changes. I note that your
> loops advance forwards, whereas the old ones went backwards: but I don't
> have anything to say you're wrong. I think it's mainly a matter of how
> the first tail or two gets handled: which might be why you want to
> folio_set_order(folio, new_order) at the earliest opportunity.)
I am worried about that too. In addition, in __split_huge_page_tail(),
the page refcount is restored right after each new tail folio is set up,
whereas I need to delay that until all new after-split folios are done,
since the non-uniform split is iterative and only the after-split folios
NOT containing the split_at page are released. Like the original folio,
these folios stay locked and frozen after __split_folio_to_order().
Maybe new weird bugs could come from there being more such locked,
frozen folios than before?
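
To make that difference concrete, a rough sketch (pseudocode, not the
actual code; the helper extra_pins_for() is made up for illustration):

	/* Old __split_huge_page() (simplified): walks the tails
	 * backwards, and each tail's refcount is unfrozen as soon as
	 * that tail is set up. */
	for (i = nr - 1; i >= 1; i--) {
		__split_huge_page_tail(folio, i, lruvec, list, new_order);
		page_ref_unfreeze(folio_page(folio, i),
				  1 + extra_pins_for(folio_page(folio, i)));
	}

	/* Non-uniform split (simplified): walks forwards and splits
	 * iteratively towards new_order; every after-split folio stays
	 * locked and frozen, like the original folio, until the whole
	 * iteration is done. Only then are the folios NOT containing
	 * split_at unfrozen, unlocked, and released. */
	for (order = old_order - 1; order >= new_order; order--) {
		__split_folio_to_order(folio, order);
		/* keep splitting the piece that contains split_at */
	}
	/* after the loop: unfreeze and unlock the after-split folios */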
>>
>>>
>>> 2. Why is swapcache only checked when folio_test_anon? I can see that
>>> you've just copied that over from the old __split_huge_page(), but
>>> it seems wrong to me here and there - I guess a relic from before
>>> shmem could swap out a huge page.
>>
>> Yes, it is a relic, but it remains correct until I change another relic
>> in __folio_split() (split_huge_page_to_list_to_order() in mainline):
>> if (!mapping) { ret = -EBUSY; goto out; }, which excludes the shmem in
>> swap cache case. I will probably leave it as is in my next folio_split()
>> version to avoid adding more potential bugs, and come back to it later
>> in another patch.
>
> I agree. The "Truncated ?" check. Good. But I do prefer that you use
> that part of my patch, referring to mapping and swap_cache instead of anon,
> rather than rely on that accident of what's done at the higher level.
Definitely.
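
For the record, the difference in sketch form (simplified; the xa_lock
and offset handling are omitted):

	struct address_space *swap_cache = NULL;

	/* Relic (simplified): only anonymous folios are checked, so a
	 * shmem folio sitting in the swap cache is missed. */
	if (folio_test_anon(folio) && folio_test_swapcache(folio))
		swap_cache = swap_address_space(folio->swap);

	/* Hugh's version (simplified): key off the swap cache state
	 * itself, independent of folio_test_anon(). */
	if (folio_test_swapcache(folio))
		swap_cache = swap_address_space(folio->swap);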
Best Regards,
Yan, Zi