From: Zi Yan <ziy@nvidia.com>
To: linux-mm@kvack.org,
"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Ryan Roberts <ryan.roberts@arm.com>,
Hugh Dickins <hughd@google.com>,
David Hildenbrand <david@redhat.com>,
Yang Shi <yang@os.amperecomputing.com>,
Miaohe Lin <linmiaohe@huawei.com>,
Kefeng Wang <wangkefeng.wang@huawei.com>,
Yu Zhao <yuzhao@google.com>, John Hubbard <jhubbard@nvidia.com>,
linux-kernel@vger.kernel.org, Zi Yan <ziy@nvidia.com>
Subject: [PATCH RESEND v3 6/9] mm/truncate: use folio_split() for truncate operation.
Date: Wed, 4 Dec 2024 19:18:36 -0500
Message-ID: <20241205001839.2582020-7-ziy@nvidia.com>
In-Reply-To: <20241205001839.2582020-1-ziy@nvidia.com>
Instead of splitting a large folio uniformly during truncation, use a
buddy-allocator-like split at the start of the truncation range to
minimize the number of resulting folios.
For example, to truncate an order-4 folio
[0, 1, 2, 3, 4, 5, ..., 15]
in the range [3, 10] (inclusive), folio_split() splits the folio into
[0,1], [2], [3], [4..7] and [8..15]. [3] and [4..7] can then be dropped,
and [8..15] is kept with zeros in [8..10].
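
As an illustration of the decomposition above, here is a minimal
user-space C sketch (purely illustrative, not the kernel
implementation) that prints the pieces a buddy-allocator-like split
produces for a given folio order and in-folio split index:

#include <stdio.h>

/*
 * Illustrative only: print the pieces a buddy-allocator-like split of
 * a (1 << order)-page folio produces when isolating the page at
 * `split'. Each step halves the current range; the half not
 * containing `split' becomes a final piece and the other half is
 * split further, down to order 0.
 */
static void buddy_split(unsigned int order, unsigned int split)
{
	unsigned int base = 0;

	while (order > 0) {
		unsigned int half = 1u << --order;

		if (split < base + half) {
			/* target in low half: high half is a final piece */
			printf("[%u..%u] (order %u)\n",
			       base + half, base + 2 * half - 1, order);
		} else {
			/* target in high half: low half is a final piece */
			printf("[%u..%u] (order %u)\n",
			       base, base + half - 1, order);
			base += half;
		}
	}
	printf("[%u] (order 0) <- split page\n", split);
}

int main(void)
{
	buddy_split(4, 3);	/* the order-4, split-at-3 example above */
	return 0;
}

For order 4 and split index 3 this prints [8..15], [4..7], [0..1],
[2..2] and finally the split page [3], matching the decomposition
above.
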
It is possible to do a further folio_split() at 10, so that more of the
resulting folios can be dropped: splitting [8..15] at 10 would give
[8,9], [10], [11] and [12..15], of which [8,9] and [10] could also be
dropped. But that is left as a possible future optimization if needed.
Another possible optimization is to make folio_split() split a folio
based on a given range, like [3..10] above. But that complicates
folio_split(), so it will be investigated when necessary.
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
include/linux/huge_mm.h | 18 ++++++++++++++++++
mm/truncate.c | 5 ++++-
2 files changed, 22 insertions(+), 1 deletion(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index b94c2e8ee918..29accb5d93b8 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -339,6 +339,18 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
unsigned int new_order);
int min_order_for_split(struct folio *folio);
int split_folio_to_list(struct folio *folio, struct list_head *list);
+int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
+ struct list_head *list);
+static inline int split_folio_at(struct folio *folio, struct page *page,
+ struct list_head *list)
+{
+ int ret = min_order_for_split(folio);
+
+ if (ret < 0)
+ return ret;
+
+ return folio_split(folio, ret, page, list);
+}
static inline int split_huge_page(struct page *page)
{
struct folio *folio = page_folio(page);
@@ -531,6 +543,12 @@ static inline int split_folio_to_list(struct folio *folio, struct list_head *lis
return 0;
}
+static inline int split_folio_at(struct folio *folio, struct page *page,
+ struct list_head *list)
+{
+ return 0;
+}
+
static inline void deferred_split_folio(struct folio *folio, bool partially_mapped) {}
#define split_huge_pmd(__vma, __pmd, __address) \
do { } while (0)
diff --git a/mm/truncate.c b/mm/truncate.c
index 7c304d2f0052..9f33d6821748 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -178,6 +178,7 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
{
loff_t pos = folio_pos(folio);
unsigned int offset, length;
+ long in_folio_offset;
if (pos < start)
offset = start - pos;
@@ -207,7 +208,9 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
folio_invalidate(folio, offset, length);
if (!folio_test_large(folio))
return true;
- if (split_folio(folio) == 0)
+
+ in_folio_offset = PAGE_ALIGN_DOWN(offset) / PAGE_SIZE;
+ if (split_folio_at(folio, folio_page(folio, in_folio_offset), NULL) == 0)
return true;
if (folio_test_dirty(folio))
return false;
--
2.45.2
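
A note on the truncate.c hunk above: `offset' is the byte offset within
the folio at which truncation starts, so PAGE_ALIGN_DOWN(offset) /
PAGE_SIZE is the in-folio index of the first page the truncation
touches, which is then handed to split_folio_at() as the split point.
A user-space sketch of that arithmetic (PAGE_SIZE hardcoded to 4096
here purely for illustration; it is arch-dependent in the kernel):

#include <stdio.h>

#define PAGE_SIZE		4096UL	/* illustrative value only */
#define PAGE_ALIGN_DOWN(x)	((x) & ~(PAGE_SIZE - 1))

int main(void)
{
	/* truncation starts 3 pages plus 123 bytes into the folio */
	unsigned long offset = 3 * PAGE_SIZE + 123;
	long in_folio_offset = PAGE_ALIGN_DOWN(offset) / PAGE_SIZE;

	/* prints 3: the in-folio index of the split page */
	printf("%ld\n", in_folio_offset);
	return 0;
}

This matches the [3, 10] example: the split lands on page 3 of the
folio.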