linux-mm.kvack.org archive mirror
From: David Hildenbrand <david@redhat.com>
To: Zi Yan <ziy@nvidia.com>,
	linux-mm@kvack.org,
	"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
	"Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Ryan Roberts <ryan.roberts@arm.com>,
	Hugh Dickins <hughd@google.com>,
	Yang Shi <yang@os.amperecomputing.com>,
	Miaohe Lin <linmiaohe@huawei.com>,
	Kefeng Wang <wangkefeng.wang@huawei.com>,
	Yu Zhao <yuzhao@google.com>, John Hubbard <jhubbard@nvidia.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH RESEND v3 6/9] mm/truncate: use folio_split() for truncate operation.
Date: Tue, 10 Dec 2024 21:12:07 +0100
Message-ID: <ee92b309-db6d-416c-97ab-25abf8b12957@redhat.com>
In-Reply-To: <20241205001839.2582020-7-ziy@nvidia.com>

On 05.12.24 01:18, Zi Yan wrote:
> Instead of splitting the large folio uniformly during truncation, use
> a buddy-allocator-like split at the start of the truncation range to
> minimize the number of resulting folios.
> 
> For example, to truncate an order-4 folio
> [0, 1, 2, 3, 4, 5, ..., 15]
> between [3, 10] (inclusive), folio_split() splits the folio into
> [0..1], [2], [3], [4..7], and [8..15]; [3] and [4..7] can be dropped,
> and [8..15] is kept with zeros in [8..10].
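
As an illustration of the split pattern described above (a userspace
sketch only, not the kernel's folio_split() implementation): the piece
that contains the split index is halved repeatedly, and the buddy that
does not contain the index is kept intact at each step.

/*
 * buddy_split_sim.c - userspace model of a buddy-allocator-like split
 * (illustrative only, not kernel code).
 */
#include <stdio.h>

static void buddy_split(unsigned int order, unsigned long at)
{
	unsigned long start = 0;

	while (order > 0) {
		unsigned long half = 1UL << (order - 1);
		unsigned long mid = start + half;

		if (at < mid) {
			/* keep the upper buddy, keep splitting the lower one */
			printf("keep [%lu..%lu] (order %u)\n",
			       mid, mid + half - 1, order - 1);
		} else {
			/* keep the lower buddy, keep splitting the upper one */
			printf("keep [%lu..%lu] (order %u)\n",
			       start, mid - 1, order - 1);
			start = mid;
		}
		order--;
	}
	/* the split index itself ends up as an order-0 folio */
	printf("keep [%lu..%lu] (order 0)\n", at, at);
}

int main(void)
{
	buddy_split(4, 3);
	return 0;
}

For order 4 and split index 3 this prints [8..15], [4..7], [0..1], [2]
and [3], matching the folios listed in the description.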

But isn't that making things worse than they are today? Imagine
fallocate() on a shmem file where we won't be freeing memory?
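
For reference, the kind of operation being referred to is a hole punch
on a shmem file. A minimal sketch (sizes and offsets are made up here
to mirror the [3, 10] example above):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	int fd = memfd_create("thp-example", MFD_CLOEXEC);

	if (fd < 0 || ftruncate(fd, 16 * page) < 0) {
		perror("setup");
		return 1;
	}

	/* Fault the pages in so shmem has something to free. */
	char *map = mmap(NULL, 16 * page, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(map, 0xaa, 16 * page);

	/*
	 * Punch pages 3..10: with a large folio backing the file, only
	 * the pieces of the split that fall entirely inside the hole
	 * can actually be freed.
	 */
	if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
		      3 * page, 8 * page) < 0)
		perror("fallocate");

	munmap(map, 16 * page);
	close(fd);
	return 0;
}

With the buddy-like split, [3] and [4..7] can be freed, but [8..10]
stays allocated inside the kept [8..15] folio, which is the concern
raised above.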

-- 
Cheers,

David / dhildenb




Thread overview: 13+ messages
2024-12-05  0:18 [PATCH RESEND v3 0/9] Buddy allocator like folio split Zi Yan
2024-12-05  0:18 ` [PATCH RESEND v3 1/9] mm/huge_memory: add two new (not yet used) functions for folio_split() Zi Yan
2024-12-05  0:18 ` [PATCH RESEND v3 2/9] mm/huge_memory: move folio split common code to __folio_split() Zi Yan
2024-12-05  0:18 ` [PATCH RESEND v3 3/9] mm/huge_memory: add buddy allocator like folio_split() Zi Yan
2024-12-05  0:18 ` [PATCH RESEND v3 4/9] mm/huge_memory: remove the old, unused __split_huge_page() Zi Yan
2024-12-05  0:18 ` [PATCH RESEND v3 5/9] mm/huge_memory: add folio_split() to debugfs testing interface Zi Yan
2024-12-05  0:18 ` [PATCH RESEND v3 6/9] mm/truncate: use folio_split() for truncate operation Zi Yan
2024-12-10 20:12   ` David Hildenbrand [this message]
2024-12-10 20:41     ` Zi Yan
2024-12-10 20:50       ` Zi Yan
2024-12-05  0:18 ` [PATCH RESEND v3 7/9] selftests/mm: use selftests framework to print test result Zi Yan
2024-12-05  0:18 ` [PATCH RESEND v3 8/9] selftests/mm: add tests for splitting pmd THPs to all lower orders Zi Yan
2024-12-05  0:18 ` [PATCH RESEND v3 9/9] selftests/mm: add tests for folio_split(), buddy allocator like split Zi Yan
