From: Wei Xu <weixugc@google.com>
Date: Mon, 7 Oct 2024 20:26:03 -0700
Subject: Re: [RESEND][PATCH v4] mm/vmscan: wake up flushers conditionally to avoid cgroup OOM
To: Jingxiang Zeng
Cc: linux-mm@kvack.org, akpm@linux-foundation.org, kasong@tencent.com, linuszeng@tencent.com, linux-kernel@vger.kernel.org, tjmercier@google.com, yuzhao@google.com, chrisl@kernel.org
In-Reply-To: <20241008015635.2782751-1-jingxiangzeng.cas@gmail.com>

On Mon, Oct 7, 2024 at 6:57 PM Jingxiang Zeng wrote:
>
> From: Jingxiang Zeng
>
> Commit 14aa8b2d5c2e ("mm/mglru: don't sync disk for each aging cycle")
> removed the opportunity to wake up flushers during the MGLRU page
> reclamation process, which can lead to an increased likelihood of
> triggering OOM when many dirty pages are encountered during MGLRU
> reclamation.
>
> This leads to premature OOM if there are too many dirty pages in cgroup:
> Killed
>
> dd invoked oom-killer: gfp_mask=0x101cca(GFP_HIGHUSER_MOVABLE|__GFP_WRITE),
> order=0, oom_score_adj=0
>
> Call Trace:
>  dump_stack_lvl+0x5f/0x80
>  dump_stack+0x14/0x20
>  dump_header+0x46/0x1b0
>  oom_kill_process+0x104/0x220
>  out_of_memory+0x112/0x5a0
>  mem_cgroup_out_of_memory+0x13b/0x150
>  try_charge_memcg+0x44f/0x5c0
>  charge_memcg+0x34/0x50
>  __mem_cgroup_charge+0x31/0x90
>  filemap_add_folio+0x4b/0xf0
>  __filemap_get_folio+0x1a4/0x5b0
>  ? srso_return_thunk+0x5/0x5f
>  ? __block_commit_write+0x82/0xb0
>  ext4_da_write_begin+0xe5/0x270
>  generic_perform_write+0x134/0x2b0
>  ext4_buffered_write_iter+0x57/0xd0
>  ext4_file_write_iter+0x76/0x7d0
>  ? selinux_file_permission+0x119/0x150
>  ? srso_return_thunk+0x5/0x5f
>  ? srso_return_thunk+0x5/0x5f
>  vfs_write+0x30c/0x440
>  ksys_write+0x65/0xe0
>  __x64_sys_write+0x1e/0x30
>  x64_sys_call+0x11c2/0x1d50
>  do_syscall_64+0x47/0x110
>  entry_SYSCALL_64_after_hwframe+0x76/0x7e
>
> memory: usage 308224kB, limit 308224kB, failcnt 2589
> swap: usage 0kB, limit 9007199254740988kB, failcnt 0
>
> ...
>  file_dirty 303247360
>  file_writeback 0
> ...
>
> oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=test,
> mems_allowed=0,oom_memcg=/test,task_memcg=/test,task=dd,pid=4404,uid=0
> Memory cgroup out of memory: Killed process 4404 (dd) total-vm:10512kB,
> anon-rss:1152kB, file-rss:1824kB, shmem-rss:0kB, UID:0 pgtables:76kB
> oom_score_adj:0
>
> The flusher wake up was removed to decrease SSD wearing, but if we are
> seeing all dirty folios at the tail of an LRU, not waking up the flusher
> could lead to thrashing easily. So wake it up when a mem cgroup is about
> to OOM due to dirty caches.
>
> ---
> Changes from v3:
> - Avoid taking lock and reduce overhead on folio isolation by
>   checking the right flags and rework wake up condition, fixing the
>   performance regression reported by Chris Li.
>   [Chris Li, Kairui Song]
> - Move the wake up check to try_to_shrink_lruvec to cover the kswapd
>   case as well, and update comments. [Kairui Song]
> - Link to v3: https://lore.kernel.org/all/20240924121358.30685-1-jingxiangzeng.cas@gmail.com/
> Changes from v2:
> - Acquire the lock before calling the folio_check_dirty_writeback
>   function. [Wei Xu, Jingxiang Zeng]
> - Link to v2: https://lore.kernel.org/all/20240913084506.3606292-1-jingxiangzeng.cas@gmail.com/
> Changes from v1:
> - Add code to count the number of unqueued_dirty in the sort_folio
>   function. [Wei Xu, Jingxiang Zeng]
> - Link to v1: https://lore.kernel.org/all/20240829102543.189453-1-jingxiangzeng.cas@gmail.com/
> ---
>
> Fixes: 14aa8b2d5c2e ("mm/mglru: don't sync disk for each aging cycle")
> Signed-off-by: Zeng Jingxiang <linuszeng@tencent.com>
> Signed-off-by: Kairui Song <kasong@tencent.com>
> Cc: T.J. Mercier <tjmercier@google.com>
> Cc: Wei Xu <weixugc@google.com>
> Cc: Yu Zhao <yuzhao@google.com>
> ---
>  mm/vmscan.c | 19 ++++++++++++++++---
>  1 file changed, 16 insertions(+), 3 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index dc7a285b256b..2a5c2fe81467 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -4291,6 +4291,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
>                        int tier_idx)
>  {
>         bool success;
> +       bool dirty, writeback;
>         int gen = folio_lru_gen(folio);
>         int type = folio_is_file_lru(folio);
>         int zone = folio_zonenum(folio);
> @@ -4336,9 +4337,14 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
>                 return true;
>         }
>
> +       dirty = folio_test_dirty(folio);
> +       writeback = folio_test_writeback(folio);
> +       if (type == LRU_GEN_FILE && dirty && !writeback)
> +               sc->nr.unqueued_dirty += delta;
> +

This sounds good.  BTW, when shrink_folio_list() in evict_folios()
returns, we should add stat.nr_unqueued_dirty to sc->nr.unqueued_dirty
there as well.
>         /* waiting for writeback */
> -       if (folio_test_locked(folio) || folio_test_writeback(folio) ||
> -           (type == LRU_GEN_FILE && folio_test_dirty(folio))) {
> +       if (folio_test_locked(folio) || writeback ||
> +           (type == LRU_GEN_FILE && dirty)) {
>                 gen = folio_inc_gen(lruvec, folio, true);
>                 list_move(&folio->lru, &lrugen->folios[gen][type][zone]);
>                 return true;
> @@ -4454,7 +4460,7 @@ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
>         trace_mm_vmscan_lru_isolate(sc->reclaim_idx, sc->order, MAX_LRU_BATCH,
>                                     scanned, skipped, isolated,
>                                     type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON);
> -
> +       sc->nr.taken += scanned;

I think we should only include file pages (in sort_folio) and isolated
pages into sc->nr.taken, instead of all scanned pages.  For example, if
there are only unevictable and unqueued dirty pages, we would still like
to wake up the flusher threads, but because nr.taken counts unevictable
pages as well, the wakeup condition in try_to_shrink_lruvec() won't be
met.

>         /*
>          * There might not be eligible folios due to reclaim_idx. Check the
>          * remaining to prevent livelock if it's not making progress.
>          */
> @@ -4796,6 +4802,13 @@ static bool try_to_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
>                 cond_resched();
>         }
>
> +       /*
> +        * If too many file cache in the coldest generation can't be evicted
> +        * due to being dirty, wake up the flusher.
> +        */
> +       if (sc->nr.unqueued_dirty && sc->nr.unqueued_dirty == sc->nr.taken)
> +               wakeup_flusher_threads(WB_REASON_VMSCAN);
> +

try_to_shrink_lruvec() can be called from shrink_node() for global
reclaim as well.  We need to reset sc->nr before calling
lru_gen_shrink_node() there.  MGLRU didn't need that because it didn't
use sc->nr until this change.

>         /* whether this lruvec should be rotated */
>         return nr_to_scan < 0;
> }
> --
> 2.43.5
>