From: Tim Chen <tim.c.chen@linux.intel.com>
To: Yu Zhao <yuzhao@google.com>,
Xing Zhengjun <zhengjun.xing@linux.intel.com>
Cc: akpm@linux-foundation.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, ying.huang@intel.com,
Shakeel Butt <shakeelb@google.com>,
Michal Hocko <mhocko@suse.com>,
wfg@mail.ustc.edu.cn
Subject: Re: [RFC] mm/vmscan.c: avoid possible long latency caused by too_many_isolated()
Date: Thu, 22 Apr 2021 13:17:37 -0700
Message-ID: <a085478d-5118-cdff-c611-1649fce7a650@linux.intel.com>
In-Reply-To: <YIGuvh70JbE1Cx4U@google.com>
On 4/22/21 10:13 AM, Yu Zhao wrote:
> @@ -3302,6 +3252,7 @@ static bool throttle_direct_reclaim(gfp_t gfp_mask, struct zonelist *zonelist,
> unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
> gfp_t gfp_mask, nodemask_t *nodemask)
> {
> + int nr_cpus;
> unsigned long nr_reclaimed;
> struct scan_control sc = {
> .nr_to_reclaim = SWAP_CLUSTER_MAX,
> @@ -3334,8 +3285,17 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
> set_task_reclaim_state(current, &sc.reclaim_state);
> trace_mm_vmscan_direct_reclaim_begin(order, sc.gfp_mask);
>
> + nr_cpus = current_is_kswapd() ? 0 : num_online_cpus();
> + while (nr_cpus && !atomic_add_unless(&pgdat->nr_reclaimers, 1, nr_cpus)) {
> + if (schedule_timeout_killable(HZ / 10))
100 msec seems like a long time to wait. The original code in shrink_inactive_list
chose a 100 msec sleep because the sleep happens only once in the while loop, and
100 msec was used to check for stalling. In this case the loop can go on for a while,
and the number of reclaimers can drop below the limit sooner than 100 msec. It seems
like the condition should be checked more often.
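Something along these lines would let waiters re-check as soon as a reclaimer
exits (just a rough sketch, not the posted patch; it assumes a wait_queue_head_t
reclaim_wait added to struct pglist_data for illustration, and the 10 msec
fallback timeout is arbitrary):

	nr_cpus = current_is_kswapd() ? 0 : num_online_cpus();
	while (nr_cpus && !atomic_add_unless(&pgdat->nr_reclaimers, 1, nr_cpus)) {
		/*
		 * Bail out on a fatal signal; otherwise re-check every
		 * 10 msec, or as soon as an exiting reclaimer wakes us.
		 */
		if (wait_event_killable_timeout(pgdat->reclaim_wait,
				atomic_read(&pgdat->nr_reclaimers) < nr_cpus,
				HZ / 100) < 0)
			return SWAP_CLUSTER_MAX;
	}

	nr_reclaimed = do_try_to_free_pages(zonelist, &sc);

	if (nr_cpus) {
		atomic_dec(&pgdat->nr_reclaimers);
		wake_up(&pgdat->reclaim_wait);
	}

That keeps the fatal-signal bailout but no longer forces every waiter to sit
out the full 100 msec once a slot frees up.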
Tim
> + return SWAP_CLUSTER_MAX;
> + }
> +
> nr_reclaimed = do_try_to_free_pages(zonelist, &sc);
>
> + if (nr_cpus)
> + atomic_dec(&pgdat->nr_reclaimers);
> +
> trace_mm_vmscan_direct_reclaim_end(nr_reclaimed);
> set_task_reclaim_state(current, NULL);
>