From: Michal Hocko <mhocko@suse.com>
To: Suren Baghdasaryan <surenb@google.com>
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org,
peterz@infradead.org, guro@fb.com, shakeelb@google.com,
minchan@kernel.org, timmurray@google.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, kernel-team@android.com
Subject: Re: [PATCH 1/1] mm: count time in drain_all_pages during direct reclaim as memory pressure
Date: Mon, 21 Feb 2022 09:55:12 +0100
Message-ID: <YhNTcM9XtqA1zUUi@dhcp22.suse.cz>
In-Reply-To: <20220219174940.2570901-1-surenb@google.com>
On Sat 19-02-22 09:49:40, Suren Baghdasaryan wrote:
> When page allocation in direct reclaim path fails, the system will
> make one attempt to shrink per-cpu page lists and free pages from
> high alloc reserves. Draining per-cpu pages into the buddy allocator
> can be a very slow operation because it's done using workqueues and the
> task in direct reclaim waits for all of them to finish before
> proceeding. Currently this time is not accounted as psi memory stall.
>
> While testing mobile devices under extreme memory pressure, when
> allocations were failing during direct reclaim, we noticed that the
> psi events expected under such conditions were not being triggered.
> Profiling these cases showed that the reason for the missing psi
> events was that a big chunk of the time spent in direct reclaim is
> not accounted as memory stall, so psi never reached the levels at
> which an event is generated. Further investigation revealed that
> the bulk of that unaccounted time was spent inside the
> drain_all_pages call.
It would be cool to have some numbers here.
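For reference, the psi events in question are the ones a userspace
monitor arms by writing a trigger to /proc/pressure/memory and polling
for POLLPRI, as described in Documentation/accounting/psi.rst. A
minimal sketch, with made-up thresholds:

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* made-up thresholds: 150ms of "some" memory stall per 1s window */
	const char trig[] = "some 150000 1000000";
	struct pollfd fds;

	fds.fd = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);
	if (fds.fd < 0)
		return 1;
	if (write(fds.fd, trig, strlen(trig) + 1) < 0)
		return 1;
	fds.events = POLLPRI;

	/* each POLLPRI wakeup is one psi event */
	while (poll(&fds, 1, -1) > 0) {
		if (fds.revents & POLLPRI)
			printf("memory pressure event\n");
	}
	return 0;
}

If the stall time never crosses the trigger threshold because it is not
accounted, no event fires, which matches the symptom described above.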
> Annotate drain_all_pages and unreserve_highatomic_pageblock during
> page allocation failure in the direct reclaim path so that delays
> caused by these calls are accounted as memory stall.
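For reference, the annotation described above boils down to extending
the psi_memstall_enter()/psi_memstall_leave() window in
__alloc_pages_direct_reclaim() so that the drain/unreserve retry is
covered as well. A rough sketch, simplified from mm/page_alloc.c
(details may differ from the actual patch):

static inline struct page *
__alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
		unsigned int alloc_flags, const struct alloc_context *ac,
		unsigned long *did_some_progress)
{
	struct page *page = NULL;
	unsigned long pflags;
	bool drained = false;

	/* account the whole function, not just __perform_reclaim() */
	psi_memstall_enter(&pflags);
	*did_some_progress = __perform_reclaim(gfp_mask, order, ac);
	if (unlikely(!(*did_some_progress)))
		goto out;

retry:
	page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);

	/*
	 * If an allocation failed after direct reclaim, it could be
	 * because pages are pinned on the per-cpu lists or in high
	 * alloc reserves. Shrink them and try again.
	 */
	if (!page && !drained) {
		unreserve_highatomic_pageblock(ac, false);
		drain_all_pages(NULL);
		drained = true;
		goto retry;
	}
out:
	/* the slow drain above is now visible to psi */
	psi_memstall_leave(&pflags);

	return page;
}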
If the draining is too slow and dependent on the current CPU/WQ
contention, then we should address that. The original intention was
that having a dedicated WQ with WQ_MEM_RECLAIM would help to isolate
the operation from the rest of the WQ activity. Maybe we need to
fine-tune mm_percpu_wq. If that doesn't help, then we should revise
the WQ model and use something else. Memory reclaim shouldn't really
get stuck behind other unrelated work.
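For reference, mm_percpu_wq already requests a rescuer via
WQ_MEM_RECLAIM; a conceptual sketch of the setup (see
init_mm_internals() in mm/vmstat.c for the authoritative version; the
WQ_HIGHPRI variant is only a speculative tuning direction, not a
tested proposal):

struct workqueue_struct *mm_percpu_wq;

/* roughly what init_mm_internals() does today */
mm_percpu_wq = alloc_workqueue("mm_percpu_wq", WQ_MEM_RECLAIM, 0);

/*
 * WQ_MEM_RECLAIM guarantees a rescuer thread, so queued drain work can
 * make forward progress even when no new kworker can be forked. It
 * does not, however, prioritize the drain over other work items
 * already running on a busy CPU. A speculative fine-tune could be:
 */
mm_percpu_wq = alloc_workqueue("mm_percpu_wq",
			       WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);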
--
Michal Hocko
SUSE Labs