From: Kairui Song <ryncsn@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Jingxiang Zeng <linuszeng@tencent.com>,
Jingxiang Zeng <jingxiangzeng.cas@gmail.com>,
linux-mm@kvack.org, Yu Zhao <yuzhao@google.com>,
Wei Xu <weixugc@google.com>,
"T . J . Mercier" <tjmercier@google.com>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm/vmscan: wake up flushers conditionally to avoid cgroup OOM
Date: Mon, 2 Sep 2024 04:39:24 +0800
Message-ID: <CAMgjq7AnaNr354zzu-Z-SB6xZtD1+a2zUwFtZ_Qg7pMj0m7y7A@mail.gmail.com>
In-Reply-To: <20240830173813.c53769f62bf72116266f42ca@linux-foundation.org>
On Sat, Aug 31, 2024 at 8:38 AM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Thu, 29 Aug 2024 18:25:43 +0800 Jingxiang Zeng <jingxiangzeng.cas@gmail.com> wrote:
>
> > From: Zeng Jingxiang <linuszeng@tencent.com>
> >
> > Commit 14aa8b2d5c2e ("mm/mglru: don't sync disk for each aging cycle")
> > removed the opportunity to wake up flushers during the MGLRU page
> > reclamation process, which can increase the likelihood of triggering
> > OOM when many dirty pages are encountered during MGLRU reclamation.
> >
> > This leads to a premature OOM if there are too many dirty pages in the cgroup:
> > Killed
> >
> > ...
> >
> > The flusher wake-up was removed to reduce SSD wear, but if all the
> > folios at the tail of an LRU are dirty, not waking the flusher can
> > easily lead to thrashing. So wake it up when a memory cgroup is
> > about to OOM due to dirty caches.
>
> Thanks, I'll queue this for testing and review. Could people please
> consider whether we should backport this into -stable kernels?
>
Hi Andrew, thanks for picking this up.
> > MGLRU still suffers from the OOM issue on the latest mm tree, so the
> > test was done with another fix merged [1].
> >
> > Link: https://lore.kernel.org/linux-mm/CAOUHufYi9h0kz5uW3LHHS3ZrVwEq-kKp8S6N-MZUmErNAXoXmw@mail.gmail.com/ [1]
>
> This one is already queued for -stable.
I didn't see this in -unstable or -stable, though. Is there any other
repo or branch I missed? Jingxiang is referring to this fix from Yu:
diff --git a/mm/vmscan.c b/mm/vmscan.c
index cfa839284b92..778bf5b7ef97 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4320,7 +4320,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 	}
 
 	/* ineligible */
-	if (zone > sc->reclaim_idx || skip_cma(folio, sc)) {
+	if (!folio_test_lru(folio) || zone > sc->reclaim_idx || skip_cma(folio, sc)) {
 		gen = folio_inc_gen(lruvec, folio, false);
 		list_move_tail(&folio->lru, &lrugen->folios[gen][type][zone]);
 		return true;
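
(For context on the patch Andrew queued: as the changelog above describes,
the idea is to wake the flusher threads only when reclaim is visibly stuck
on dirty file folios, rather than on every aging cycle. Below is a minimal
sketch of that condition, assuming it lives in the MGLRU shrink path in
mm/vmscan.c; the exact check and placement in the queued patch may differ:

	/*
	 * Sketch, not the queued patch verbatim: if every file folio taken
	 * off the LRU tail in this pass was dirty and not yet queued for
	 * writeback, kick the flushers so the dirty tail can be cleaned
	 * instead of driving the memcg into a premature OOM.
	 */
	if (sc->nr.unqueued_dirty && sc->nr.unqueued_dirty == sc->nr.file_taken)
		wakeup_flusher_threads(WB_REASON_VMSCAN);

Both the sc->nr counters and wakeup_flusher_threads(WB_REASON_VMSCAN)
already exist in mm/vmscan.c; only their use in this condition is new.)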