From: David Rientjes <rientjes@google.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Mel Gorman <mgorman@suse.de>,
Andrea Arcangeli <aarcange@redhat.com>,
Vlastimil Babka <vbabka@suse.cz>, Zi Yan <zi.yan@cs.rutgers.edu>,
Stefan Priebe - Profihost AG <s.priebe@profihost.ag>,
"Kirill A. Shutemov" <kirill@shutemov.name>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/2] Revert "mm, thp: restore node-local hugepage allocations"
Date: Thu, 6 Jun 2019 15:12:40 -0700 (PDT) [thread overview]
Message-ID: <alpine.DEB.2.21.1906061451001.121338@chino.kir.corp.google.com> (raw)
In-Reply-To: <20190605093257.GC15685@dhcp22.suse.cz>
On Wed, 5 Jun 2019, Michal Hocko wrote:
> > That's fine, but we also must be mindful of users who have used
> > MADV_HUGEPAGE over the past four years based on its hard-coded behavior
> > that would now regress as a result.
>
> Absolutely, I am all for helping those usecases. First of all we need to
> understand what those usecases are though. So far we have only seen very
> vague claims about artificial worst case examples when a remote access
> dominates the overall cost but that doesn't seem to be the case in real
> life in my experience (e.g. numa balancing will correct things or the
> over aggressive node reclaim tends to cause problems elsewhere etc.).
>
The usecase is remapping a binary's text segment to transparent hugepages
by doing mmap() -> madvise(MADV_HUGEPAGE) -> mremap(), and this can happen
on a locally fragmented node. It happens at startup, when we aren't
concerned about allocation latency: we want to compact. We are concerned
with access latency thereafter, for as long as the process is running.
MADV_HUGEPAGE has worked great for this, and we have a large userspace
stack built upon it because this has been the long-standing behavior. This
gets back to the point of MADV_HUGEPAGE being overloaded for four
different purposes. I argue that processes that fit within a single node
are in the majority.
> > Thus far, I haven't seen anybody engage in discussion on how to address
> > the issue other than proposed reverts that readily acknowledge they cause
> > other users to regress. If all nodes are fragmented, the swap storms that
> > are currently reported for the local node would be made worse by the
> > revert -- if remote hugepages cannot be faulted quickly then it only
> > compounds the problem.
>
> Andrea has outlined the strategy to go IIRC. There has also been a
> general agreement that we shouldn't be overeager to fall back to remote
> nodes if the base page size allocation could be satisfied from a local node.
Sorry, I haven't seen patches for this; I can certainly test them if
there's a link. If we have the ability to tune how eager the page
allocator is to fall back, with the option to say "never" as part of that
eagerness, it may work.
The idea that I had was snipped from this, however, and it would be nice
to get some feedback on it: I've suggested that direct reclaim for the
purpose of hugepage allocation on the local node is never worthwhile
unless and until memory compaction can both capture the freed page for
its own use (rather than relying on the freeing scanner to find it) and
determine that migrating a number of pages would eventually free a
pageblock.
I'm hoping that we can all agree to that, because otherwise it leads us
down a bad road if reclaim is doing pointless work (the freeing scanner
can't find the freed page, or it gets allocated again before the scanner
can find it) or compaction can't make progress as a result of it (even
though we can migrate, it still won't free a pageblock).
In the interim, I think we should suppress direct reclaim entirely for
thp allocations, regardless of whether enabled=always or MADV_HUGEPAGE is
used, because it cannot be proven that the reclaim work is beneficial and
I believe it results in the swap storms that are being reported.
Any disagreements so far?
Furthermore, if we can agree to that: memory compaction, when allocating a
transparent hugepage, fails for different reasons, one of which is that we
fail watermark checks because we lack migration targets. This is normally
what leads to direct reclaim. Compaction is *supposed* to return
COMPACT_SKIPPED for this, but that return value is overloaded as well: it
also happens when we fail extfrag_threshold checks and when the gfp flags
don't allow compaction. The former case is what matters for thp.
So my proposed change would be:
 - give the page allocator a consistent indicator that compaction failed
   because we are low on memory (make COMPACT_SKIPPED really mean this);
 - if we get this indicator in the page allocator and we are allocating a
   thp, fail: reclaim is unlikely to help here and is much more likely to
   be disruptive;
   - we could retry compaction if we haven't scanned all memory and were
     contended;
 - if the hugepage allocation fails, have thp check watermarks for order-0
   pages without any padding;
 - if the watermarks succeed, fail the thp allocation: we can't allocate
   because of fragmentation and it's better to return node-local memory;
 - if the watermarks fail, a follow-up allocation of the pte will likely
   also fail, so thp retries the allocation with a cleared __GFP_THISNODE.
This doesn't sound very invasive and I'll code it up if it will be tested.