Thank you for your reply. shrink_lruvec() is nested inside a deep loop. In one pass the reclaimer may already have reclaimed part of the requested memory, but until sc->nr_to_reclaim is adjusted in the outer loop, calling shrink_lruvec() again still works toward the original sc->nr_to_reclaim. This eventually leads to overreclaim. My problem case is easy to construct: allocate a large amount of anonymous memory (e.g. 20G) in a memcg, then trigger swapping by writing to memory.reclaim; with a certain probability this overreclaims.
From: Andrew Morton <akpm@linux-foundation.org>
Sent: July 8, 2023, 3:09
To: 杨逸飞
Cc: linux-mm@kvack.org; linux-kernel@vger.kernel.org; Johannes Weiner
Subject: Re: [PATCH] mm:vmscan: fix inaccurate reclaim during proactive reclaim
(cc hannes)
On Fri, 7 Jul 2023 18:32:26 +0800 Efly Young <yangyifei03@kuaishou.com> wrote:
> With commit f53af4285d77 ("mm: vmscan: fix extreme overreclaim
> and swap floods"), proactive reclaim still seems inaccurate.
>
> Our problematic case also involves mostly anon pages. Requesting
> 1G by writing to memory.reclaim will reclaim 1.7G, or other values
> well over 1G, by swapping.
>
> This tries to fix the inaccurate reclaim problem.
It would be helpful to have some additional explanation of why you
believe the current code is incorrect.
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -6208,7 +6208,7 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
> unsigned long nr_to_scan;
> enum lru_list lru;
> unsigned long nr_reclaimed = 0;
> - unsigned long nr_to_reclaim = sc->nr_to_reclaim;
> + unsigned long nr_to_reclaim = (sc->nr_to_reclaim - sc->nr_reclaimed);
> bool proportional_reclaim;
> struct blk_plug plug;
>