From: David Hildenbrand <david@redhat.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: Yin Fengwei <fengwei.yin@intel.com>,
Andrew Morton <akpm@linux-foundation.org>,
linux-mm@kvack.org, mike.kravetz@oracle.com,
sidhartha.kumar@oracle.com, naoya.horiguchi@nec.com,
jane.chu@oracle.com
Subject: Re: [PATCH v4 0/5] batched remove rmap in try_to_unmap_one()
Date: Tue, 14 Mar 2023 10:50:03 +0100
Message-ID: <53366288-0b7c-8048-9687-ed09d186da0d@redhat.com>
In-Reply-To: <ZBBC42VDz47TQbjB@casper.infradead.org>

On 14.03.23 10:48, Matthew Wilcox wrote:
> On Tue, Mar 14, 2023 at 10:16:09AM +0100, David Hildenbrand wrote:
>> On 14.03.23 04:09, Yin Fengwei wrote:
>>> In the long term, with wider adoption of large folios in the kernel (e.g.,
>>> large folios for anonymous memory), MADV_PAGEOUT needs to be updated to
>>> handle a large folio as a whole instead of always splitting it.
>>
>> Just curious what the last sentence implies. Large folios are supposed to be
>> a transparent optimization. So why should we page out all surrounding
>> subpages simply because a single subpage was requested to be paged out? That
>> might harm the performance of some workloads ... more than the actual split.
>>
>> So it's not immediately obvious to me why "avoid splitting" is the correct
>> answer to the problem at hand.
>
> Even if your madvise() call says to page out all pages covered by a
> folio, the current code will split it. That's what needs to be fixed.
Agreed, if that becomes possible in the future (swap handling ...).
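For illustration, a minimal userspace sketch of the case Matthew describes
(untested; the 2 MiB size, the MADV_HUGEPAGE hint, and the assumption that
the mapping ends up backed by a single large folio are mine, and
MADV_PAGEOUT needs a reasonably recent kernel/libc):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 2UL << 20;	/* 2 MiB, assumed to become one large folio */
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;

	madvise(buf, len, MADV_HUGEPAGE);	/* ask for a large-folio backing; best effort */
	memset(buf, 1, len);			/* populate every subpage */

	/*
	 * The advised range covers the whole mapping, so every subpage of
	 * the (assumed) large folio is included -- exactly the case where
	 * splitting the folio should not be necessary.
	 */
	if (madvise(buf, len, MADV_PAGEOUT))
		perror("madvise(MADV_PAGEOUT)");

	munmap(buf, len);
	return 0;
}

If the call covered only part of the folio instead, the argument below
applies and splitting is the reasonable response.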
>
> At least for anonymous pages, using large folios is an attempt to treat
> all pages in a particular range the same way. If the user says to only
> page out some of them, that's a big clue that these pages are different
> from the other pages, and so we should split a folio where the madvise
> call does not cover every page in the folio.
Agreed.
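The rule itself can be stated as a simple predicate. A standalone sketch
(plain C; the function name and parameters are mine, not existing kernel
helpers -- in the kernel the decision would sit in the MADV_PAGEOUT walk in
mm/madvise.c):

#include <stdbool.h>

/*
 * Sketch of the policy: keep a large folio whole only if the advised
 * range [start, end) covers every page the folio maps; otherwise split.
 * folio_start/folio_end describe the virtual range mapped by the folio.
 */
static bool should_split_folio(unsigned long folio_start,
			       unsigned long folio_end,
			       unsigned long start, unsigned long end)
{
	return !(start <= folio_start && end >= folio_end);
}

So a 2 MiB folio that is advised in full stays whole, while advising, say,
only its first half would make should_split_folio() return true.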
--
Thanks,
David / dhildenb
Thread overview: 17+ messages
2023-03-13 12:45 [PATCH v4 0/5] batched remove rmap in try_to_unmap_one() Yin Fengwei
2023-03-13 12:45 ` [PATCH v4 1/5] rmap: move hugetlb try_to_unmap to dedicated function Yin Fengwei
2023-03-13 12:45 ` [PATCH v4 2/5] rmap: move page unmap operation to dedicated function Yin Fengwei
2023-03-13 12:45 ` [PATCH v4 3/5] rmap: cleanup exit path of try_to_unmap_one_page() Yin Fengwei
2023-03-13 12:45 ` [PATCH v4 4/5] rmap: add folio_remove_rmap_range() Yin Fengwei
2023-03-13 12:45 ` [PATCH v4 5/5] try_to_unmap_one: batched remove rmap, update folio refcount Yin Fengwei
2023-03-13 18:49 ` [PATCH v4 0/5] batched remove rmap in try_to_unmap_one() Andrew Morton
2023-03-14 3:09 ` Yin Fengwei
2023-03-14 9:16 ` David Hildenbrand
2023-03-14 9:48 ` Matthew Wilcox
2023-03-14 9:50 ` David Hildenbrand [this message]
2023-03-14 14:50 ` Yin, Fengwei
2023-03-14 15:01 ` Matthew Wilcox
2023-03-15 2:17 ` Yin Fengwei
2023-03-20 13:47 ` Yin, Fengwei
2023-03-21 14:17 ` David Hildenbrand
2023-03-22 1:31 ` Yin Fengwei