From: "Yin, Fengwei" <fengwei.yin@intel.com>
To: Ryan Roberts <ryan.roberts@arm.com>, Zi Yan <ziy@nvidia.com>,
	"Matthew Wilcox" <willy@infradead.org>,
	David Hildenbrand <david@redhat.com>, Yu Zhao <yuzhao@google.com>
Cc: Linux-MM <linux-mm@kvack.org>
Subject: Re: Prerequisites for Large Anon Folios
Date: Thu, 31 Aug 2023 15:38:11 +0800
Message-ID: <2944c22c-8fcc-46aa-935e-91881d48fb4b@intel.com>
In-Reply-To: <3037447e-c53a-41c6-b87d-de6365982515@arm.com>



On 8/31/2023 3:18 PM, Ryan Roberts wrote:
> On 31/08/2023 01:08, Yin, Fengwei wrote:
>>
>> On 8/30/2023 6:44 PM, Ryan Roberts wrote:
>>> Hi All,
>>>
>>>
>>> I want to get serious about getting large anon folios merged. To do that, there
>>> are a number of outstanding prerequisites. I'm hoping the respective owners may
>>> be able to provide an update on progress?
>>>
>>> I appreciate everyone is busy and likely juggling multiple things, so I understand
>>> if no progress has been made or is likely to be made - it would still be good to
>>> know that, though, so I can attempt to make alternative plans.
>>>
>>> See questions/comments below.
>>>
>>> Thanks!
>>>
>>>
> ...
>>>
>>>>
>>>> - item:
>>>>     mlock
>>>>
>>>>   priority:
>>>>     prerequisite
>>>>
>>>>   description: >-
>>>>     Large, pte-mapped folios are ignored when mlock is requested. Code comment
>>>>     for mlock_vma_folio() says "...filter out pte mappings of THPs, which cannot
>>>>     be consistently counted: a pte mapping of the THP head cannot be
>>>>     distinguished by the page alone."
>>>>
>>>>   location:
>>>>     - mlock_pte_range()
>>>>     - mlock_vma_folio()
>>>>
>>>>   links:
>>>>     - https://lore.kernel.org/linux-mm/20230712060144.3006358-1-fengwei.yin@intel.com/
>>>>
>>>>   assignee:
>>>>     Yin, Fengwei <fengwei.yin@intel.com>
>>>>
>>>>
>>>
>>> The series is on the list at [2]. Does this series cover everything?
>> Yes, I suppose so. I have already collected comments from you, and I am waiting
>> for review comments from Yu, who is on vacation now. Then I will work on v3.
> 
> Great - thanks for the fast reply!
> 
>>
>>>
>>> [2] https://lore.kernel.org/linux-mm/20230809061105.3369958-1-fengwei.yin@intel.com/
>>>
>>>
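To make the accounting rule concrete, here is a minimal, self-contained C model of
the idea (hypothetical names and a flattened PTE representation, not the code from
the series at [2]): mlock only acts on a large folio once every page of the folio is
seen mapped inside the VM_LOCKED range, so a stray pte mapping of the THP head is no
longer ambiguous.

#include <stdbool.h>
#include <stddef.h>

/*
 * Simplified model only -- not the kernel code from [2].  A present PTE is
 * modelled as the pfn it maps, and a folio as a contiguous pfn range
 * [start_pfn, start_pfn + nr_pages).
 */
struct model_folio {
	unsigned long start_pfn;
	unsigned long nr_pages;		/* folio_nr_pages() in the kernel */
};

/*
 * Decide whether mlock may account the whole folio for this VMA range:
 * only when every page of the folio is mapped by a PTE in the range.
 * A partially mapped large folio is skipped, which sidesteps the
 * "cannot be consistently counted" problem in the old comment.
 */
static bool folio_fully_mapped_in_range(const unsigned long *pte_pfns,
					size_t nr_ptes,
					const struct model_folio *folio)
{
	size_t mapped = 0;

	for (size_t i = 0; i < nr_ptes; i++) {
		if (pte_pfns[i] >= folio->start_pfn &&
		    pte_pfns[i] < folio->start_pfn + folio->nr_pages)
			mapped++;
	}
	return mapped == folio->nr_pages;
}

The series at [2] applies this kind of check inside mlock_pte_range() itself rather
than as a separate pass; the sketch only models the rule.
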
>>>>
>>>> - item:
>>>>     madvise
>>>>
>>>>   priority:
>>>>     prerequisite
>>>>
>>>>   description: >-
>>>>     MADV_COLD, MADV_PAGEOUT, MADV_FREE: For large folios, code assumes exclusive
>>>>     only if mapcount==1, else skips remainder of operation. For large,
>>>>     pte-mapped folios, exclusive folios can have a mapcount of up to nr_pages and
>>>>     still be exclusive. Even better: don't split the folio if it fits entirely
>>>>     within the range. Likely depends on "shared vs exclusive mappings".
>>>>
>>>>   links:
>>>>     - https://lore.kernel.org/linux-mm/20230713150558.200545-1-fengwei.yin@intel.com/
>>>>
>>>>   location:
>>>>     - madvise_cold_or_pageout_pte_range()
>>>>     - madvise_free_pte_range()
>>>>
>>>>   assignee:
>>>>     Yin, Fengwei <fengwei.yin@intel.com>
>>>
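To illustrate the problem in that description: treating mapcount != 1 as "shared" is
only correct for order-0 folios; a large folio pte-mapped by one process alone can
legitimately have a mapcount of nr_pages. A rough, self-contained sketch of the two
checks (plain C, simplified names, not the kernel implementation):

#include <stdbool.h>

/* Plain C sketch with simplified names -- not the kernel implementation. */

/*
 * Old order-0 logic: any mapcount above 1 is treated as "shared" and the
 * folio is skipped.  A large folio mapped by N PTEs of a single process
 * has a total mapcount of N, so this misfires for exclusive large folios.
 */
static bool skip_as_shared_old(unsigned int total_mapcount)
{
	return total_mapcount != 1;
}

/*
 * An estimate in the spirit of the folio_estimated_sharers() approach
 * discussed below: look only at the mapcount of the folio's first page.
 * Cheap, but still an estimate -- a precise answer needs the
 * "shared vs exclusive mappings" tracking.
 */
static bool skip_as_shared_estimate(unsigned int first_page_mapcount)
{
	return first_page_mapcount != 1;
}
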
>>> As I understand it: initial solution based on folio_estimated_sharers() has gone
>>> into v6.5. There is a dependency on David's precise shared vs exclusive work for an
>>> improved solution. And I think you mentioned you are planning to do a change
>>> that avoids splitting a large folio if it is entirely covered by the range?
>> The changes based on folio_estimated_sharers() are in. Once David's solution is
>> ready, I will switch to the new solution.
>>
>> As for avoiding splitting large folios, that was in the patchset I posted (before
>> the folio_estimated_sharers() part was split out).
> 
> The RFC version? Do you plan to post an updated version, or are you waiting for
> David's shared vs exclusive series before moving forwards?

For folio_estimated_sharers(): once David's solution is ready, I will send a patch
to switch to the new solution.

For avoiding splitting large folios, I don't think that blocks the anonymous large
folio merging, as it is an optimization rather than a bug fix. The idea was
demonstrated in the first patchset (folio_estimated_sharers() was separated out from
that patchset because it is a bug fix), and it is waiting for comments from Minchan.
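For reference, the "don't split" part boils down to a range check: a large folio only
needs splitting when the madvise range covers it partially; otherwise the whole folio
can be processed in one go. A minimal sketch with made-up names (not the patch itself):

#include <stdbool.h>

/*
 * Minimal sketch, made-up names -- not the patch itself.  folio_addr is
 * the user address where the folio is mapped and folio_size its size in
 * bytes; [start, end) is the range passed to madvise().
 */
static bool need_split_for_range(unsigned long folio_addr,
				 unsigned long folio_size,
				 unsigned long start, unsigned long end)
{
	/* Split (or skip) only when the folio is partially covered. */
	return folio_addr < start || folio_addr + folio_size > end;
}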


Regards
Yin, Fengwei

> 
>>
>> Regards
>> Yin, Fengwei
> 

