From mboxrd@z Thu Jan 1 00:00:00 1970
From: Nhat Pham <nphamcs@gmail.com>
Date: Wed, 8 Nov 2023 13:15:17 -0800
Subject: Re: [PATCH v5 0/6] workload-specific and memory pressure-driven zswap writeback
To: Chris Li
Cc: Andrew Morton, Johannes Weiner, Domenico Cerasuolo, Yosry Ahmed,
 Seth Jennings, Dan Streetman, Vitaly Wool, mhocko@kernel.org,
 roman.gushchin@linux.dev, Shakeel Butt, muchun.song@linux.dev,
 linux-mm, kernel-team@meta.com, LKML, cgroups@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org,
 shuah@kernel.org
References: <20231106183159.3562879-1-nphamcs@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
On Wed, Nov 8, 2023 at 11:46 AM Chris Li wrote:
>
> Hi Nhat,
>
> Sorry for being late to the party. I want to take a look at your patch
> series. However, I wasn't able to "git am" your patch series cleanly
> on current mm-stable, mm-unstable, or linux tip.
>
> $ git am patches/v5_20231106_nphamcs_workload_specific_and_memory_pressure_driven_zswap_writeback.mbx
> Applying: list_lru: allows explicit memcg and NUMA node selection
> Applying: memcontrol: allows mem_cgroup_iter() to check for onlineness
> Applying: zswap: make shrinking memcg-aware (fix)
> error: patch failed: mm/zswap.c:174
> error: mm/zswap.c: patch does not apply
> Patch failed at 0003 zswap: make shrinking memcg-aware (fix)

Ah, that one was meant to be a fixlet, to be applied on top of the
original "zswap: make shrinking memcg-aware" patch. The intention was
to eventually squash it, but I admit this is getting a bit confusing.
I just rebased onto mm-unstable, squashed everything again, and sent a
single replacement patch:

[PATCH v5 3/6 REPLACE] zswap: make shrinking memcg-aware

Let me know if this still fails to apply. If it does, I'll resend the
whole series as v6! My sincerest apologies for the trouble and
confusion :(

> What is the base of your patches? A git hash or a branch I can pull
> from would be nice.
>
> Thanks
>
> Chris
>
> On Mon, Nov 6, 2023 at 10:32 AM Nhat Pham wrote:
> >
> > Changelog:
> > v5:
> > * Replace reference getting with an rcu_read_lock() section for
> >   zswap lru modifications (suggested by Yosry)
> > * Add a new prep patch that allows mem_cgroup_iter() to return
> >   online cgroups.
> > * Add a callback that updates pool->next_shrink when the cgroup is
> >   offlined (suggested by Yosry Ahmed, Johannes Weiner)
> > v4:
> > * Rename list_lru_add to list_lru_add_obj and __list_lru_add to
> >   list_lru_add (patch 1) (suggested by Johannes Weiner and
> >   Yosry Ahmed)
> > * Some cleanups on the memcg-aware LRU patch (patch 2)
> >   (suggested by Yosry Ahmed)
> > * Use event interface for the new per-cgroup writeback counters.
> >   (patch 3) (suggested by Yosry Ahmed)
> > * Abstract zswap's lruvec states and handling into
> >   zswap_lruvec_state (patch 5) (suggested by Yosry Ahmed)
> > v3:
> > * Add a patch to export per-cgroup zswap writeback counters
> > * Add a patch to update zswap's kselftest
> > * Separate the new list_lru functions into their own prep patch
> > * Do not start from the top of the hierarchy when we encounter a
> >   memcg that is not online, for the global-limit zswap writeback
> >   (patch 2) (suggested by Yosry Ahmed)
> > * Do not remove the swap entry from list_lru in
> >   __read_swapcache_async() (patch 2) (suggested by Yosry Ahmed)
> > * Remove a redundant zswap pool getting (patch 2)
> >   (reported by Ryan Roberts)
> > * Use an atomic for nr_zswap_protected (instead of the lruvec's
> >   lock) (patch 5) (suggested by Yosry Ahmed)
> > * Remove the per-cgroup zswap shrinker knob (patch 5)
> >   (suggested by Yosry Ahmed)
> > v2:
> > * Fix loongarch compiler errors
> > * Use pool stats instead of memcg stats when !CONFIG_MEMCG_KMEM
> >
> > There are currently several issues with zswap writeback:
> >
> > 1. There is only a single global LRU for zswap, making it impossible
> >    to perform workload-specific shrinking - a memcg under memory
> >    pressure cannot determine which pages in the pool it owns, and
> >    often ends up writing pages from other memcgs. This issue has
> >    been previously observed in practice and mitigated by simply
> >    disabling memcg-initiated shrinking:
> >
> >    https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u
> >
> >    But this solution leaves a lot to be desired, as we still do not
> >    have an avenue for a memcg to free up its own memory locked up in
> >    the zswap pool.
> >
> > 2. We only shrink the zswap pool when the user-defined limit is hit.
> >    This means that if we set the limit too high, cold data that is
> >    unlikely to be used again will reside in the pool, wasting
> >    precious memory.
> >    It is hard to predict how much zswap space will be needed ahead
> >    of time, as this depends on the workload (specifically, on
> >    factors such as memory access patterns and the compressibility of
> >    the memory pages).
> >
> > This patch series solves these issues by separating the global zswap
> > LRU into per-memcg and per-NUMA LRUs, and performing
> > workload-specific (i.e. memcg- and NUMA-aware) zswap writeback under
> > memory pressure. The new shrinker does not have any parameter that
> > must be tuned by the user, and can be opted in or out on a per-memcg
> > basis.
> >
> > As a proof of concept, we ran the following synthetic benchmark:
> > build the Linux kernel in a memory-limited cgroup, and allocate some
> > cold data in tmpfs to see if the shrinker could write it out and
> > improve overall performance. Depending on the amount of cold data
> > generated, we observed a 14% to 35% reduction in kernel CPU time
> > used during the kernel builds.
> >
> > Domenico Cerasuolo (3):
> >   zswap: make shrinking memcg-aware
> >   mm: memcg: add per-memcg zswap writeback stat
> >   selftests: cgroup: update per-memcg zswap writeback selftest
> >
> > Nhat Pham (3):
> >   list_lru: allows explicit memcg and NUMA node selection
> >   memcontrol: allows mem_cgroup_iter() to check for onlineness
> >   zswap: shrinks zswap pool based on memory pressure
> >
> >  Documentation/admin-guide/mm/zswap.rst      |   7 +
> >  drivers/android/binder_alloc.c              |   5 +-
> >  fs/dcache.c                                 |   8 +-
> >  fs/gfs2/quota.c                             |   6 +-
> >  fs/inode.c                                  |   4 +-
> >  fs/nfs/nfs42xattr.c                         |   8 +-
> >  fs/nfsd/filecache.c                         |   4 +-
> >  fs/xfs/xfs_buf.c                            |   6 +-
> >  fs/xfs/xfs_dquot.c                          |   2 +-
> >  fs/xfs/xfs_qm.c                             |   2 +-
> >  include/linux/list_lru.h                    |  46 ++-
> >  include/linux/memcontrol.h                  |   9 +-
> >  include/linux/mmzone.h                      |   2 +
> >  include/linux/vm_event_item.h               |   1 +
> >  include/linux/zswap.h                       |  27 +-
> >  mm/list_lru.c                               |  48 ++-
> >  mm/memcontrol.c                             |  20 +-
> >  mm/mmzone.c                                 |   1 +
> >  mm/shrinker.c                               |   4 +-
> >  mm/swap.h                                   |   3 +-
> >  mm/swap_state.c                             |  26 +-
> >  mm/vmscan.c                                 |  26 +-
> >  mm/vmstat.c                                 |   1 +
> >  mm/workingset.c                             |   4 +-
> >  mm/zswap.c                                  | 430 +++++++++++++++++---
> >  tools/testing/selftests/cgroup/test_zswap.c |  74 ++--
> >  26 files changed, 625 insertions(+), 149 deletions(-)
> >
> > --
> > 2.34.1
> >
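[Editor's note: the fixlet/squash workflow and the "what is the base?"
question discussed above can be sketched as follows. This is a
hypothetical, self-contained demonstration in a throwaway repository -
the file name, commit subjects, and base commit are made up, not the
actual series.]

```shell
# Hypothetical demo: squash a "fixup!" fixlet into its parent patch,
# then re-export the series with a recorded base commit so reviewers
# running "git am" can check out the same starting point.
set -e
work=$(mktemp -d); cd "$work"
git init -q repo; cd repo
git config user.email demo@example.com
git config user.name "Demo"

echo base > zswap.c
git add zswap.c
git commit -qm "base commit"          # stands in for the mm-unstable tip
base=$(git rev-parse HEAD)

echo v1 > zswap.c
git commit -qam "zswap: make shrinking memcg-aware"
echo v1-fix > zswap.c
git commit -qam "fixup! zswap: make shrinking memcg-aware"  # the fixlet

# Squash the fixlet into the original patch non-interactively:
# --autosquash pairs the "fixup!" commit with its target, and a no-op
# sequence editor accepts the generated todo list as-is.
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash "$base"

# Re-export the series with the base recorded in a "base-commit:"
# trailer, which answers "what is the base of your patches?" in-band.
git format-patch --base="$base" -o patches "$base"..HEAD
grep "^base-commit:" patches/0001-*.patch
```

With `--base` recorded, a reviewer can `git checkout <base-commit>`
first and the series should apply cleanly with `git am`.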