From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Yang Shi <shy828301@gmail.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	Linux MM <linux-mm@kvack.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: [RFC] Kill THP deferred split queue?
Date: Fri, 10 Jul 2020 17:18:27 +0300
Message-ID: <20200710141827.netxb2rimpge4qkd@box>
In-Reply-To: <CAHbLzkq5rSHUSbHegM5mURytS7nEDyHHbxOYn8DaBwYB0qGocw@mail.gmail.com>

On Tue, Jul 07, 2020 at 11:00:16AM -0700, Yang Shi wrote:
> Hi folks,
> 
> The THP deferred split queue is used to store PTE-mapped THPs (i.e.
> partially unmapped THPs), which then get split by the deferred split
> shrinker when memory pressure kicks in.
> 
> Now page reclaim can handle such cases nicely without calling the
> shrinker. Since the THPs on the deferred split queue are not PMD
> mapped, they are split unconditionally and the unmapped sub-pages get
> freed. Please see the code snippet below:
> 
>                 if (PageTransHuge(page)) {
>                         /* cannot split THP, skip it */
>                         if (!can_split_huge_page(page, NULL))
>                                 goto activate_locked;
>                         /*
>                          * Split pages without a PMD map right
>                          * away. Chances are some or all of the
>                          * tail pages can be freed without IO.
>                          */
>                         if (!compound_mapcount(page) &&
>                             split_huge_page_to_list(page,
>                                                     page_list))
>                                 goto activate_locked;
>                 }
> 
> Then the unmapped pages are moved to free_list by move_pages_to_lru(),
> called by shrink_inactive_list(), while the mapped sub-pages are kept
> on the LRU. So it does exactly the same thing as the deferred split
> shrinker, and with exactly the same timing.
> 
> The only benefit of the shrinker is that the THPs can also be split
> and freed via "echo 2 > /proc/sys/vm/drop_caches", but I'm not sure
> how many people rely on this.
> 
> The benefit of killing the deferred split queue is code simplification.
> 
> Any comment is welcome.
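
For the drop_caches path mentioned above, a small hypothetical helper (not
from this thread; needs root) can show the shrinker-driven split by watching
the thp_split_page and thp_deferred_split_page counters in /proc/vmstat:

/*
 * Hypothetical helper, assuming the thp_split_page and
 * thp_deferred_split_page counters in /proc/vmstat: write "2" to
 * drop_caches to run the shrinkers, then report how many THPs got split.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static unsigned long vmstat_counter(const char *name)
{
	char line[128];
	size_t len = strlen(name);
	unsigned long val = 0;
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, name, len) && line[len] == ' ') {
			val = strtoul(line + len + 1, NULL, 10);
			break;
		}
	}
	fclose(f);
	return val;
}

int main(void)
{
	unsigned long split_before = vmstat_counter("thp_split_page");
	unsigned long deferred = vmstat_counter("thp_deferred_split_page");
	FILE *f = fopen("/proc/sys/vm/drop_caches", "w");

	if (!f) {
		perror("/proc/sys/vm/drop_caches");
		return 1;
	}
	fputs("2\n", f);	/* "2": reclaim slab objects, i.e. run the shrinkers */
	fclose(f);

	printf("thp_deferred_split_page (queued so far): %lu\n", deferred);
	printf("thp_split_page grew by:                  %lu\n",
	       vmstat_counter("thp_split_page") - split_before);
	return 0;
}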

The point of handling this in a shrinker is that these pages have to be
dropped before anything potentially useful gets reclaimed. If the compound
page has any active PTEs, you are unlikely to reach it during normal
reclaim.
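
To make the scenario concrete, a minimal userspace sketch (not code from the
kernel or from this thread; assumes THP is enabled and 2MiB huge pages):

/*
 * Fault in one 2MiB THP, then unmap half of it.  The still-mapped half
 * keeps the compound page alive and referenced, so normal reclaim is
 * unlikely to reach it; the unmapped 1MiB only goes back to the free
 * lists once the page is actually split (thp_deferred_split_page in
 * /proc/vmstat increments when the page is queued, thp_split_page when
 * it is split).
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define THP_SIZE (2UL << 20)

int main(void)
{
	/* Over-allocate so a 2MiB-aligned range fits inside the mapping. */
	char *raw = mmap(NULL, 2 * THP_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (raw == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	char *thp = (char *)(((uintptr_t)raw + THP_SIZE - 1) &
			     ~(THP_SIZE - 1));

	madvise(thp, THP_SIZE, MADV_HUGEPAGE);
	memset(thp, 1, THP_SIZE);		/* fault in the huge page */

	/* Partial unmap: the THP lands on the deferred split queue. */
	munmap(thp + THP_SIZE / 2, THP_SIZE / 2);

	printf("pid %d: first half still mapped, second half unmapped\n",
	       getpid());
	pause();	/* keep the mapping (and its active PTEs) alive */
	return 0;
}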

-- 
 Kirill A. Shutemov



Thread overview: 3+ messages
2020-07-07 18:00 Yang Shi
2020-07-10 14:18 ` Kirill A. Shutemov [this message]
2020-07-10 17:32   ` Yang Shi
