From: Yang Shi <yang@os.amperecomputing.com>
To: Zi Yan <ziy@nvidia.com>,
linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
David Hildenbrand <david@redhat.com>
Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>,
Ryan Roberts <ryan.roberts@arm.com>,
Hugh Dickins <hughd@google.com>,
Miaohe Lin <linmiaohe@huawei.com>,
Kefeng Wang <wangkefeng.wang@huawei.com>,
Yu Zhao <yuzhao@google.com>, John Hubbard <jhubbard@nvidia.com>,
linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 2/3] mm/huge_memory: allow split shmem large folio to any lower order
Date: Wed, 22 Jan 2025 09:58:11 -0800
Message-ID: <c50051c1-76c4-4db4-bfee-c0e52389a824@os.amperecomputing.com>
In-Reply-To: <20250122161928.1240637-2-ziy@nvidia.com>
On 1/22/25 8:19 AM, Zi Yan wrote:
> Commit 4d684b5f92ba ("mm: shmem: add large folio support for tmpfs") has
> added large folio support to shmem. Remove the restriction in
> split_huge_page*().
Reviewed-by: Yang Shi <yang@os.amperecomputing.com>
>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
> mm/huge_memory.c | 8 +-------
> 1 file changed, 1 insertion(+), 7 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 3d3ebdc002d5..deb4e72daeb9 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3299,7 +3299,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
>  			/* Some pages can be beyond EOF: drop them from page cache */
>  			if (tail->index >= end) {
>  				if (shmem_mapping(folio->mapping))
> -					nr_dropped++;
> +					nr_dropped += new_nr;
>  				else if (folio_test_clear_dirty(tail))
>  					folio_account_cleaned(tail,
>  						inode_to_wb(folio->mapping->host));
> @@ -3465,12 +3465,6 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
>  			return -EINVAL;
>  		}
>  	} else if (new_order) {
> -		/* Split shmem folio to non-zero order not supported */
> -		if (shmem_mapping(folio->mapping)) {
> -			VM_WARN_ONCE(1,
> -				"Cannot split shmem folio to non-0 order");
> -			return -EINVAL;
> -		}
>  		/*
>  		 * No split if the file system does not support large folio.
>  		 * Note that we might still have THPs in such mappings due to
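A side note for other readers, not a request for changes: with the shmem
check gone, a caller can pass a non-zero new_order for a shmem-backed folio
and split only as far as it needs to, and the nr_dropped += new_nr change in
the first hunk matches that, since each tail dropped beyond EOF now stands
for new_nr base pages rather than one. A minimal, hypothetical sketch of
such a caller is below; the helper name is made up for illustration, and as
usual for this path the folio is assumed to be locked and held with a
reference by the caller.

#include <linux/mm.h>		/* struct folio */
#include <linux/huge_mm.h>	/* split_huge_page_to_list_to_order() */

/*
 * Hypothetical example only, not part of this patch: split a locked,
 * referenced shmem large folio down to new_order (e.g. order 2) instead
 * of all the way to order 0.  Passing a NULL list lets the split code
 * place the resulting folios back on the LRU as usual.  Returns 0 on
 * success or a negative errno.
 */
static int shmem_split_folio_to_order(struct folio *folio,
				      unsigned int new_order)
{
	return split_huge_page_to_list_to_order(&folio->page, NULL,
						new_order);
}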