From: Michal Hocko <mhocko@kernel.org>
To: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Vlastimil Babka <vbabka@suse.cz>,
	kirill.shutemov@linux.intel.com,
	Yang Shi <yang.shi@linux.alibaba.com>,
	hannes@cmpxchg.org, rientjes@google.com,
	akpm@linux-foundation.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [v2 PATCH -mm] mm: account deferred split THPs into MemAvailable
Date: Mon, 26 Aug 2019 09:40:35 +0200
Message-ID: <20190826074035.GD7538@dhcp22.suse.cz>
In-Reply-To: <20190822152934.w6ztolutdix6kbvc@box>

On Thu 22-08-19 18:29:34, Kirill A. Shutemov wrote:
> On Thu, Aug 22, 2019 at 02:56:56PM +0200, Vlastimil Babka wrote:
> > On 8/22/19 10:04 AM, Michal Hocko wrote:
> > > On Thu 22-08-19 01:55:25, Yang Shi wrote:
> > >> Available memory is one of the most important metrics for memory
> > >> pressure.
> > > 
> > > I would disagree with this statement. It is a rough estimate that tells
> > > how much memory you can allocate before going into a more expensive
> > > reclaim (mostly swapping). Allocating that amount might still result in
> > > direct reclaim induced stalls. I do realize that this is a simple
> > > metric that is attractive to use and works in many cases though.
> > > 
> > >> Currently, deferred split THPs are not accounted as available memory,
> > >> but they are actually reclaimable, like reclaimable slabs.
> > >> 
> > >> And they seem very common with common workloads when THP is enabled.
> > >> A simple run of the MariaDB test from mmtest with THP set to "always"
> > >> shows it can generate over fifteen thousand deferred split THPs
> > >> (accumulating around 30G in a one hour run, 75% of my VM's 40G of
> > >> memory). It looks worth accounting for in MemAvailable.
> > > 
> > > OK, this makes sense. But your above numbers are really worrying.
> > > Accumulating such a large number of pages that are likely not going to
> > > be used is really bad. They essentially block any higher order
> > > allocations and also push the system towards more memory pressure.
> > > 
> > > IIUC deferred splitting is mostly a workaround for nasty locking issues
> > > during splitting, right? This is not really an optimization to cache
> > > THPs for reuse or something like that. What is the reason this is not
> > > done from a worker context? At least THPs which would be freed
> > > completely sound like good candidates for kworker tear down, no?
> > 
> > Agreed that it's a good question. For Kirill :) Maybe with the kworker
> > approach we also wouldn't need the cgroup awareness?
> 
> I don't remember a particular locking issue, but I cannot say there's
> none :P
> 
> It's an artifact of decoupling PMD split from compound page split: the
> same page can be mapped multiple times with a combination of PMDs and
> PTEs. Splitting one PMD doesn't need to trigger a split of all the PMDs
> and the underlying compound page.
> 
> Another consideration is the fact that a page split can fail, and we
> need to have a fallback for this case.
> 
> Also, in most cases a THP split would just be a waste of time if we did
> it on the spot. If you don't have memory pressure, it's better to wait
> until process termination: fewer pages on the LRU is still beneficial.

This might be true, but the reality shows that a lot of THPs that are
essentially freeable on the spot might be sitting around waiting for
memory pressure. So I am not really convinced that "fewer pages on the
LRU" is really a plausible justification. Can we free at least those
THPs which are completely unmapped, without any pte mappings?
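
To illustrate, something along these lines is what I have in mind - a
completely untested sketch; thp_queue_free() and struct thp_free_work
are made up for the example, and the hook into the unmap path as well
as most of the error handling are left out:

#include <linux/huge_mm.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

/*
 * Hand a fully unmapped THP to a worker that is allowed to sleep,
 * instead of leaving it on the deferred split queue until memory
 * pressure kicks in.
 */
struct thp_free_work {
	struct work_struct work;
	struct page *page;
};

static void thp_free_workfn(struct work_struct *work)
{
	struct thp_free_work *tfw = container_of(work,
					struct thp_free_work, work);
	struct page *page = tfw->page;

	if (!page_mapped(page) && trylock_page(page)) {
		/*
		 * On success the tail pages are freed as their last
		 * references are dropped; on failure the page simply
		 * stays on the deferred split queue as it does today.
		 */
		split_huge_page(page);
		unlock_page(page);
	}
	put_page(page);		/* reference taken when the work was queued */
	kfree(tfw);
}

static void thp_queue_free(struct page *page)
{
	struct thp_free_work *tfw = kmalloc(sizeof(*tfw), GFP_ATOMIC);

	if (!tfw)
		return;		/* just leave it on the deferred split queue */
	get_page(page);
	tfw->page = page;
	INIT_WORK(&tfw->work, thp_free_workfn);
	schedule_work(&tfw->work);
}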

> The main source of partly mapped THPs is the exit path. When the PMD
> mapping of a THP got split across multiple VMAs (for instance due to
> mprotect()), on the exit path we unmap the PTEs belonging to one VMA
> just before unmapping the rest of the page. It would be a total waste
> of time to split the page in this scenario.
> 
> The whole deferred split thing still looks like a reasonable compromise
> to me.

Even when it leads to all the other problems mentioned in this and the
memcg deferred reclaim series?
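
For the record, the mprotect() scenario above is easy to reproduce from
userspace - an illustrative sketch only, assuming THP is enabled in
"always" or "madvise" mode and the allocation actually gets backed by
a THP:

#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define PMD_SZ	(2UL << 20)

int main(void)
{
	/* PMD-aligned anonymous area, eligible for a THP */
	char *p = aligned_alloc(PMD_SZ, PMD_SZ);

	if (!p)
		return 1;
	madvise(p, PMD_SZ, MADV_HUGEPAGE);
	memset(p, 1, PMD_SZ);	/* fault the range in, ideally as one THP */

	/*
	 * Split the VMA (and the PMD mapping) in the middle; the
	 * compound page itself stays intact.
	 */
	mprotect(p + PMD_SZ / 2, PMD_SZ / 2, PROT_READ);

	/*
	 * On exit the two VMAs are torn down one by one: after the
	 * first unmap the THP is only partially mapped and gets queued
	 * for deferred split, although the second unmap is about to
	 * free it completely anyway.
	 */
	return 0;
}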

> We could have some kind of watermark and try to keep the number of
> deferred split THPs under it. But it comes with its own set of
> problems: what if all these pages are pinned for a really long time and
> effectively not available for split?

Again, why can't we simply push the freeing when there are no other
mappings? This should be a pretty common case, right? I am still not
sure that waiting for memory reclaim is a general win. Do you have any
examples of workloads that measurably benefit from this lazy approach
without any other downsides? In other words, how exactly do we measure
the cost/benefit model of this heuristic?
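
At least the queueing side is easy to observe from userspace via the
documented thp_deferred_split_page and thp_split_page* counters in
/proc/vmstat - e.g. with a trivial reader like this (illustrative
only):

#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[128];
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f)
		return 1;
	/*
	 * thp_deferred_split_page counts pages queued for deferred
	 * split; thp_split_page{,_failed} count actual split attempts.
	 */
	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, "thp_deferred_split_page", 23) ||
		    !strncmp(line, "thp_split_page", 14))
			fputs(line, stdout);
	fclose(f);
	return 0;
}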

-- 
Michal Hocko
SUSE Labs

