From: jingxiang zeng <jingxiangzeng.cas@gmail.com>
Date: Wed, 9 Oct 2024 12:52:24 +0800
Subject: Re: [RESEND][PATCH v4] mm/vmscan: wake up flushers conditionally to avoid cgroup OOM
To: Wei Xu
Cc: linux-mm@kvack.org, akpm@linux-foundation.org, kasong@tencent.com, linuszeng@tencent.com, linux-kernel@vger.kernel.org, tjmercier@google.com, yuzhao@google.com, chrisl@kernel.org
On Tue, 8 Oct 2024 at 11:26, Wei Xu wrote:
>
> On Mon, Oct 7, 2024 at 6:57 PM Jingxiang Zeng wrote:
> >
> > From: Jingxiang Zeng
> >
> > Commit 14aa8b2d5c2e ("mm/mglru: don't sync disk for each aging cycle")
> > removed the opportunity to wake up flushers during the MGLRU page
> > reclamation process, which can lead to an increased likelihood of
> > triggering OOM when encountering many dirty pages during reclamation
> > on MGLRU.
> >
> > This leads to premature OOM if there are too many dirty pages in cgroup:
> > Killed
> >
> > dd invoked oom-killer: gfp_mask=0x101cca(GFP_HIGHUSER_MOVABLE|__GFP_WRITE),
> > order=0, oom_score_adj=0
> >
> > Call Trace:
> >  dump_stack_lvl+0x5f/0x80
> >  dump_stack+0x14/0x20
> >  dump_header+0x46/0x1b0
> >  oom_kill_process+0x104/0x220
> >  out_of_memory+0x112/0x5a0
> >  mem_cgroup_out_of_memory+0x13b/0x150
> >  try_charge_memcg+0x44f/0x5c0
> >  charge_memcg+0x34/0x50
> >  __mem_cgroup_charge+0x31/0x90
> >  filemap_add_folio+0x4b/0xf0
> >  __filemap_get_folio+0x1a4/0x5b0
> >  ? srso_return_thunk+0x5/0x5f
> >  ? __block_commit_write+0x82/0xb0
> >  ext4_da_write_begin+0xe5/0x270
> >  generic_perform_write+0x134/0x2b0
> >  ext4_buffered_write_iter+0x57/0xd0
> >  ext4_file_write_iter+0x76/0x7d0
> >  ? selinux_file_permission+0x119/0x150
> >  ? srso_return_thunk+0x5/0x5f
> >  ? srso_return_thunk+0x5/0x5f
> >  vfs_write+0x30c/0x440
> >  ksys_write+0x65/0xe0
> >  __x64_sys_write+0x1e/0x30
> >  x64_sys_call+0x11c2/0x1d50
> >  do_syscall_64+0x47/0x110
> >  entry_SYSCALL_64_after_hwframe+0x76/0x7e
> >
> > memory: usage 308224kB, limit 308224kB, failcnt 2589
> > swap: usage 0kB, limit 9007199254740988kB, failcnt 0
> >
> > ...
> > file_dirty 303247360
> > file_writeback 0
> > ...
> >
> > oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=test,
> > mems_allowed=0,oom_memcg=/test,task_memcg=/test,task=dd,pid=4404,uid=0
> > Memory cgroup out of memory: Killed process 4404 (dd) total-vm:10512kB,
> > anon-rss:1152kB, file-rss:1824kB, shmem-rss:0kB, UID:0 pgtables:76kB
> > oom_score_adj:0
> >
> > The flusher wake-up was removed to decrease SSD wearing, but if we are
> > seeing all dirty folios at the tail of an LRU, not waking up the flusher
> > could lead to thrashing easily. So wake it up when a mem cgroup is about
> > to OOM due to dirty caches.
> >
> > ---
> > Changes from v3:
> > - Avoid taking lock and reduce overhead on folio isolation by
> >   checking the right flags and rework wake up condition, fixing the
> >   performance regression reported by Chris Li. [Chris Li, Kairui Song]
> > - Move the wake up check to try_to_shrink_lruvec to cover the kswapd
> >   case as well, and update comments. [Kairui Song]
> > - Link to v3: https://lore.kernel.org/all/20240924121358.30685-1-jingxiangzeng.cas@gmail.com/
> > Changes from v2:
> > - Acquire the lock before calling the folio_check_dirty_writeback
> >   function. [Wei Xu, Jingxiang Zeng]
> > - Link to v2: https://lore.kernel.org/all/20240913084506.3606292-1-jingxiangzeng.cas@gmail.com/
> > Changes from v1:
> > - Add code to count the number of unqueued_dirty in the sort_folio
> >   function. [Wei Xu, Jingxiang Zeng]
> > - Link to v1: https://lore.kernel.org/all/20240829102543.189453-1-jingxiangzeng.cas@gmail.com/
> > ---
> >
> > Fixes: 14aa8b2d5c2e ("mm/mglru: don't sync disk for each aging cycle")
> > Signed-off-by: Zeng Jingxiang
> > Signed-off-by: Kairui Song
> > Cc: T.J. Mercier
> > Cc: Wei Xu
> > Cc: Yu Zhao
> > ---
> >  mm/vmscan.c | 19 ++++++++++++++++---
> >  1 file changed, 16 insertions(+), 3 deletions(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index dc7a285b256b..2a5c2fe81467 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -4291,6 +4291,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
> >                         int tier_idx)
> >  {
> >         bool success;
> > +       bool dirty, writeback;
> >         int gen = folio_lru_gen(folio);
> >         int type = folio_is_file_lru(folio);
> >         int zone = folio_zonenum(folio);
> > @@ -4336,9 +4337,14 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
> >                 return true;
> >         }
> >
> > +       dirty = folio_test_dirty(folio);
> > +       writeback = folio_test_writeback(folio);
> > +       if (type == LRU_GEN_FILE && dirty && !writeback)
> > +               sc->nr.unqueued_dirty += delta;
> > +
>
> This sounds good. BTW, when shrink_folio_list() in evict_folios()
> returns, we should add stat.nr_unqueued_dirty to sc->nr.unqueued_dirty
> there as well.

Thank you for your valuable feedback, I will implement it in the next version.

> >         /* waiting for writeback */
> > -       if (folio_test_locked(folio) || folio_test_writeback(folio) ||
> > -           (type == LRU_GEN_FILE && folio_test_dirty(folio))) {
> > +       if (folio_test_locked(folio) || writeback ||
> > +           (type == LRU_GEN_FILE && dirty)) {
> >                 gen = folio_inc_gen(lruvec, folio, true);
> >                 list_move(&folio->lru, &lrugen->folios[gen][type][zone]);
> >                 return true;
> > @@ -4454,7 +4460,7 @@ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
> >         trace_mm_vmscan_lru_isolate(sc->reclaim_idx, sc->order, MAX_LRU_BATCH,
> >                                     scanned, skipped, isolated,
> >                                     type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON);
> > -
> > +       sc->nr.taken += scanned;
>
> I think we should only include file pages (in sort_folio) and isolated
> pages into sc->nr.taken, instead of all scanned pages.
> For example, if
> there are only unevictable and unqueued dirty pages, we would still
> like to wake up the flusher threads, but because nr.taken counts
> unevictable pages as well, the wakeup condition in
> try_to_shrink_lruvec() won't be met.

Unqueued dirty pages are not isolated, but promoted to the newer generation
in the sort_folio function, so I tend to wake up the flusher thread when the
number of scanned pages is equal to the number of unqueued dirty pages.

The situation you mentioned will not happen, because the number of scanned
pages does not include unevictable pages. However, there is another
situation that needs attention: when the scanned pages contain both
anonymous pages and unqueued dirty pages, the flusher cannot be woken up.
I will fix this in the next version.

> >         /*
> >          * There might not be eligible folios due to reclaim_idx. Check the
> >          * remaining to prevent livelock if it's not making progress.
> > @@ -4796,6 +4802,13 @@ static bool try_to_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
> >                 cond_resched();
> >         }
> >
> > +       /*
> > +        * If too much file cache in the coldest generation can't be evicted
> > +        * due to being dirty, wake up the flusher.
> > +        */
> > +       if (sc->nr.unqueued_dirty && sc->nr.unqueued_dirty == sc->nr.taken)
> > +               wakeup_flusher_threads(WB_REASON_VMSCAN);
> > +
>
> try_to_shrink_lruvec() can be called from shrink_node() for global
> reclaim as well. We need to reset sc->nr before calling
> lru_gen_shrink_node() there. MGLRU didn't need that because it didn't
> use sc->nr until this change.

Thank you for your valuable feedback, I will implement it in the next version.

> >         /* whether this lruvec should be rotated */
> >         return nr_to_scan < 0;
> >  }
> > --
> > 2.43.5
> >