From: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
To: Zi Yan <ziy@nvidia.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
linux-mm@kvack.org, Ryan Roberts <ryan.roberts@arm.com>,
Hugh Dickins <hughd@google.com>,
David Hildenbrand <david@redhat.com>,
Yang Shi <yang@os.amperecomputing.com>,
Miaohe Lin <linmiaohe@huawei.com>,
Kefeng Wang <wangkefeng.wang@huawei.com>,
Yu Zhao <yuzhao@google.com>, John Hubbard <jhubbard@nvidia.com>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 1/6] mm/huge_memory: add two new (not yet used) functions for folio_split()
Date: Thu, 7 Nov 2024 16:01:51 +0200
Message-ID: <yu5u6srhyixvnx66qvin3rk5p3ve4yxu7v6qj4ymma3fnbk4fg@yneglqtwpvyc>
In-Reply-To: <C9096636-C91B-42C0-A236-F3B7D9876489@nvidia.com>
On Wed, Nov 06, 2024 at 05:06:32PM -0500, Zi Yan wrote:
> >> + } else {
> >> + if (PageHead(head))
> >> + ClearPageCompound(head);
> >
> > Huh? You only have to test for PageHead() because this is inside the loop.
> > It has to be done after the loop is done.
>
> You are right, will remove this and add the code below after the loop.
>
> if (!new_order && PageHead(&folio->page))
> ClearPageCompound(&folio->page);
PageHead(&folio->page) is always true, isn't it?
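If so, the PageHead() test is redundant and the post-loop check could be
reduced to something like this (untested sketch, assuming it runs right
after the loop while the original folio is still compound):

	if (!new_order)
		ClearPageCompound(&folio->page);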
> >> + if (folio_test_anon(folio) && folio_test_swapcache(folio)) {
> >> + if (!uniform_split)
> >> + return -EINVAL;
> >
> > Why this limitation?
>
> I am not closely following the status of mTHP support in swap. If it
> is supported, this can be removed. Right now, split_huge_page_to_list_to_order()
> only allows splitting a swapcache folio to order 0 [1].
>
> [1] https://elixir.bootlin.com/linux/v6.12-rc6/source/mm/huge_memory.c#L3397
It would be nice to clarify this or at least add a comment.
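Something along these lines, perhaps (just a sketch; it assumes mTHP
swapcache splitting is still unsupported, which is the reason only the
uniform split down to order 0 is allowed):

	if (folio_test_anon(folio) && folio_test_swapcache(folio)) {
		/*
		 * mTHP swap is not supported yet, so a swapcache folio
		 * can only be split uniformly, down to order 0.
		 */
		if (!uniform_split)
			return -EINVAL;
		...
	}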
--
Kiryl Shutsemau / Kirill A. Shutemov
Thread overview: 14+ messages
2024-11-01 15:03 [PATCH v2 0/6] Buddy allocator like folio split Zi Yan
2024-11-01 15:03 ` [PATCH v2 1/6] mm/huge_memory: add two new (not yet used) functions for folio_split() Zi Yan
2024-11-06 10:44 ` Kirill A. Shutemov
2024-11-06 22:06 ` Zi Yan
2024-11-07 14:01 ` Kirill A. Shutemov [this message]
2024-11-07 14:42 ` Zi Yan
2024-11-07 15:01 ` Zi Yan
2024-11-01 15:03 ` [PATCH v2 2/6] mm/huge_memory: move folio split common code to __folio_split() Zi Yan
2024-11-01 15:03 ` [PATCH v2 3/6] mm/huge_memory: add buddy allocator like folio_split() Zi Yan
2024-11-01 15:03 ` [PATCH v2 4/6] mm/huge_memory: remove the old, unused __split_huge_page() Zi Yan
2024-11-01 15:03 ` [PATCH v2 5/6] mm/huge_memory: add folio_split() to debugfs testing interface Zi Yan
2024-11-01 15:03 ` [PATCH v2 6/6] mm/truncate: use folio_split() for truncate operation Zi Yan
2024-11-02 15:39 ` kernel test robot
2024-11-02 17:22 ` kernel test robot