From: Shakeel Butt <shakeelb@google.com>
To: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Mel Gorman <mgorman@techsingularity.net>,
Tejun Heo <tj@kernel.org>, Johannes Weiner <hannes@cmpxchg.org>,
Michal Hocko <mhocko@kernel.org>,
Steven Rostedt <rostedt@goodmis.org>,
Linux MM <linux-mm@kvack.org>,
LKML <linux-kernel@vger.kernel.org>,
Cgroups <cgroups@vger.kernel.org>
Subject: Re: [PATCH v2 3/4] mm/vmscan: Don't change pgdat state on base of a single LRU list state.
Date: Thu, 5 Apr 2018 18:04:50 -0700 [thread overview]
Message-ID: <CALvZod6bRRdq4gWbSxWXaT8OSEsp+O5YwrjfLdzMx3gQVZei-Q@mail.gmail.com> (raw)
In-Reply-To: <20180323152029.11084-4-aryabinin@virtuozzo.com>
On Fri, Mar 23, 2018 at 8:20 AM, Andrey Ryabinin
<aryabinin@virtuozzo.com> wrote:
> We have a separate LRU list for each memory cgroup. Memory reclaim
> iterates over cgroups and calls shrink_inactive_list() on every inactive
> LRU list. Based on the state of a single LRU, shrink_inactive_list() may
> flag the whole node as dirty, congested or under writeback. This is
> obviously wrong and hurtful. It's especially hurtful when the system has
> even one small congested cgroup: then *all* direct reclaims waste time
> sleeping in wait_iff_congested(). And the more memcgs the system has,
> the longer the memory allocation stall is, because wait_iff_congested()
> is called on each lru-list scan.
>
> Instead, sum the reclaim stats across all LRUs visited on the node and
> flag the node as dirty, congested or under writeback based on that sum.
> Also call congestion_wait() and wait_iff_congested() once per pgdat
> scan, instead of once per lru-list scan.
>
> This only fixes the problem for the global reclaim case. Per-cgroup
> reclaim may alter global pgdat flags too, which is wrong. But that is a
> separate issue and will be addressed in the next patch.
>
> This change will not have any effect on systems with all of the
> workload concentrated in a single cgroup.
>
> Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Seems reasonable.
Reviewed-by: Shakeel Butt <shakeelb@google.com>
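
To put a number on the win (purely illustrative, not measured):
wait_iff_congested() can sleep up to HZ/10, i.e. ~100ms, per call.
Before this patch that call sat in the per-lru-list path, so one small
congested cgroup could stall a single direct reclaim pass once per
memcg. A minimal sketch, assuming a node with 200 memcgs (NMEMCG and
STALL_MS are made-up values for the example):

#include <stdio.h>

#define NMEMCG   200   /* assumed number of memcgs on the node */
#define STALL_MS 100   /* wait_iff_congested() sleeps up to HZ/10 */

int main(void)
{
        /* Before: wait_iff_congested() ran once per lru-list scan. */
        printf("per-lru worst case:  %d ms\n", NMEMCG * STALL_MS);
        /* After: it runs once per pgdat scan. */
        printf("per-node worst case: %d ms\n", STALL_MS);
        return 0;
}

So the stall stops scaling with the number of memcgs, exactly as the
changelog argues.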
> ---
> mm/vmscan.c | 124 +++++++++++++++++++++++++++++++++++-------------------------
> 1 file changed, 73 insertions(+), 51 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 403f59edd53e..2134b3ac8fa0 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -116,6 +116,15 @@ struct scan_control {
>
> /* Number of pages freed so far during a call to shrink_zones() */
> unsigned long nr_reclaimed;
> +
> + struct {
> + unsigned int dirty;
> + unsigned int unqueued_dirty;
> + unsigned int congested;
> + unsigned int writeback;
> + unsigned int immediate;
> + unsigned int file_taken;
> + } nr;
> };
>
> #ifdef ARCH_HAS_PREFETCH
> @@ -1754,23 +1763,6 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
> mem_cgroup_uncharge_list(&page_list);
> free_unref_page_list(&page_list);
>
> - /*
> - * If reclaim is isolating dirty pages under writeback, it implies
> - * that the long-lived page allocation rate is exceeding the page
> - * laundering rate. Either the global limits are not being effective
> - * at throttling processes due to the page distribution throughout
> - * zones or there is heavy usage of a slow backing device. The
> - * only option is to throttle from reclaim context which is not ideal
> - * as there is no guarantee the dirtying process is throttled in the
> - * same way balance_dirty_pages() manages.
> - *
> - * Once a node is flagged PGDAT_WRITEBACK, kswapd will count the number
> - * of pages under pages flagged for immediate reclaim and stall if any
> - * are encountered in the nr_immediate check below.
> - */
> - if (stat.nr_writeback && stat.nr_writeback == nr_taken)
> - set_bit(PGDAT_WRITEBACK, &pgdat->flags);
> -
> /*
> * If dirty pages are scanned that are not queued for IO, it
> * implies that flushers are not doing their job. This can
> @@ -1785,40 +1777,13 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
> if (stat.nr_unqueued_dirty == nr_taken)
> wakeup_flusher_threads(WB_REASON_VMSCAN);
>
> - /*
> - * Legacy memcg will stall in page writeback so avoid forcibly
> - * stalling here.
> - */
> - if (sane_reclaim(sc)) {
> - /*
> - * Tag a node as congested if all the dirty pages scanned were
> - * backed by a congested BDI and wait_iff_congested will stall.
> - */
> - if (stat.nr_dirty && stat.nr_dirty == stat.nr_congested)
> - set_bit(PGDAT_CONGESTED, &pgdat->flags);
> -
> - /* Allow kswapd to start writing pages during reclaim. */
> - if (stat.nr_unqueued_dirty == nr_taken)
> - set_bit(PGDAT_DIRTY, &pgdat->flags);
> -
> - /*
> - * If kswapd scans pages marked marked for immediate
> - * reclaim and under writeback (nr_immediate), it implies
> - * that pages are cycling through the LRU faster than
> - * they are written so also forcibly stall.
> - */
> - if (stat.nr_immediate)
> - congestion_wait(BLK_RW_ASYNC, HZ/10);
> - }
> -
> - /*
> - * Stall direct reclaim for IO completions if underlying BDIs and node
> - * is congested. Allow kswapd to continue until it starts encountering
> - * unqueued dirty pages or cycling through the LRU too quickly.
> - */
> - if (!sc->hibernation_mode && !current_is_kswapd() &&
> - current_may_throttle())
> - wait_iff_congested(pgdat, BLK_RW_ASYNC, HZ/10);
> + sc->nr.dirty += stat.nr_dirty;
> + sc->nr.congested += stat.nr_congested;
> + sc->nr.unqueued_dirty += stat.nr_unqueued_dirty;
> + sc->nr.writeback += stat.nr_writeback;
> + sc->nr.immediate += stat.nr_immediate;
> + if (file)
> + sc->nr.file_taken += nr_taken;
>
> trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
> nr_scanned, nr_reclaimed,
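
The accumulate-here/decide-in-caller split reads cleanly. For anyone
skimming, the pattern reduces to something like this (a sketch with
invented names, not the patch's code):

struct reclaim_totals {
        unsigned int dirty, congested, writeback, file_taken;
};

/* Each LRU scan only adds to the totals; shrink_node() inspects them
 * once after it has walked every memcg on the node. */
static void account_lru_scan(struct reclaim_totals *total,
                             unsigned int dirty, unsigned int congested,
                             unsigned int writeback,
                             unsigned int taken, int file)
{
        total->dirty += dirty;
        total->congested += congested;
        total->writeback += writeback;
        if (file)       /* only file pages feed the writeback checks */
                total->file_taken += taken;
}

Accumulating file_taken separately makes sense, since the node-level
writeback/dirty heuristics are only meaningful against file pages.
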
> @@ -2522,6 +2487,8 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
> unsigned long node_lru_pages = 0;
> struct mem_cgroup *memcg;
>
> + memset(&sc->nr, 0, sizeof(sc->nr));
> +
> nr_reclaimed = sc->nr_reclaimed;
> nr_scanned = sc->nr_scanned;
>
> @@ -2587,6 +2554,61 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
> if (sc->nr_reclaimed - nr_reclaimed)
> reclaimable = true;
>
> + /*
> + * If reclaim is isolating dirty pages under writeback, it
> + * implies that the long-lived page allocation rate is exceeding
> + * the page laundering rate. Either the global limits are not
> + * being effective at throttling processes due to the page
> + * distribution throughout zones or there is heavy usage of a
> + * slow backing device. The only option is to throttle from
> + * reclaim context which is not ideal as there is no guarantee
> + * the dirtying process is throttled in the same way
> + * balance_dirty_pages() manages.
> + *
> + * Once a node is flagged PGDAT_WRITEBACK, kswapd will count the
> + * number of pages under writeback flagged for immediate reclaim
> + * and stall if any are encountered in the nr_immediate check below.
> + */
> + if (sc->nr.writeback && sc->nr.writeback == sc->nr.file_taken)
> + set_bit(PGDAT_WRITEBACK, &pgdat->flags);
> +
> + /*
> + * Legacy memcg will stall in page writeback so avoid forcibly
> + * stalling here.
> + */
> + if (sane_reclaim(sc)) {
> + /*
> + * Tag a node as congested if all the dirty pages
> + * scanned were backed by a congested BDI and
> + * wait_iff_congested will stall.
> + */
> + if (sc->nr.dirty && sc->nr.dirty == sc->nr.congested)
> + set_bit(PGDAT_CONGESTED, &pgdat->flags);
> +
> + /* Allow kswapd to start writing pages during reclaim. */
> + if (sc->nr.unqueued_dirty == sc->nr.file_taken)
> + set_bit(PGDAT_DIRTY, &pgdat->flags);
> +
> + /*
> + * If kswapd scans pages marked for immediate reclaim
> + * and under writeback (nr_immediate), it implies that
> + * pages are cycling through the LRU faster than they
> + * are written, so also forcibly stall.
> + */
> + if (sc->nr.immediate)
> + congestion_wait(BLK_RW_ASYNC, HZ/10);
> + }
> +
> + /*
> + * Stall direct reclaim for IO completions if underlying BDIs
> + * and node is congested. Allow kswapd to continue until it
> + * starts encountering unqueued dirty pages or cycling through
> + * the LRU too quickly.
> + */
> + if (!sc->hibernation_mode && !current_is_kswapd() &&
> + current_may_throttle())
> + wait_iff_congested(pgdat, BLK_RW_ASYNC, HZ/10);
> +
> } while (should_continue_reclaim(pgdat, sc->nr_reclaimed - nr_reclaimed,
> sc->nr_scanned - nr_scanned, sc));
>
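
One subtlety worth spelling out: sc->nr is zeroed at the top of
shrink_node() and these checks run after the memcg loop, so the flags
now reflect the sums over every LRU scanned in this pass. Reduced to
its shape (illustrative helper, not the patch's code):

#include <stdbool.h>

/* The node is tagged congested only when *all* dirty pages scanned
 * across every memcg were backed by a congested BDI. */
static bool node_congested(unsigned int total_dirty,
                           unsigned int total_congested)
{
        return total_dirty && total_dirty == total_congested;
}

A single congested memcg can no longer trip the node-wide flag on its
own, which is the whole point of the patch.
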
> --
> 2.16.1
>