From: Yosry Ahmed <yosryahmed@google.com>
Date: Mon, 6 Nov 2023 12:25:33 -0800
Subject: Re: [PATCH v5 3/6] zswap: make shrinking memcg-aware
To: Nhat Pham <nphamcs@gmail.com>
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, cerasuolodomenico@gmail.com,
	sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com,
	mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com,
	muchun.song@linux.dev, chrisl@kernel.org, linux-mm@kvack.org,
	kernel-team@meta.com, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
In-Reply-To: <20231106183159.3562879-4-nphamcs@gmail.com>
References: <20231106183159.3562879-1-nphamcs@gmail.com> <20231106183159.3562879-4-nphamcs@gmail.com>
On Mon, Nov 6, 2023 at 10:32 AM Nhat Pham wrote:
>
> From: Domenico Cerasuolo
>
> Currently, we only have a single global LRU for zswap. This makes it
> impossible to perform workload-specific shrinking - a memcg cannot
> determine which pages in the pool it owns, and often ends up writing
> pages from other memcgs. This issue has been previously observed in
> practice and mitigated by simply disabling memcg-initiated shrinking:
>
> https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u
>
> This patch fully resolves the issue by replacing the global zswap LRU
> with memcg- and NUMA-specific LRUs, and modifying the reclaim logic:
>
> a) When a store attempt hits a memcg limit, it now triggers a
>    synchronous reclaim attempt that, if successful, allows the new
>    hotter page to be accepted by zswap.
> b) If the store attempt instead hits the global zswap limit, it will
>    trigger an asynchronous reclaim attempt, in which a memcg is
>    selected for reclaim in a round-robin-like fashion.
>
> Signed-off-by: Domenico Cerasuolo
> Co-developed-by: Nhat Pham
> Signed-off-by: Nhat Pham
> ---
>  include/linux/memcontrol.h |   5 +
>  include/linux/zswap.h      |   2 +
>  mm/memcontrol.c            |   2 +
>  mm/swap.h                  |   3 +-
>  mm/swap_state.c            |  24 +++-
>  mm/zswap.c                 | 252 +++++++++++++++++++++++++++++--------
>  6 files changed, 227 insertions(+), 61 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 55c85f952afd..95f6c9e60ed1 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -1187,6 +1187,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
>         return NULL;
>  }
>
> +static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
> +{
> +       return NULL;
> +}
> +
>  static inline bool folio_memcg_kmem(struct folio *folio)
>  {
>         return false;
> diff --git a/include/linux/zswap.h b/include/linux/zswap.h
> index 2a60ce39cfde..e571e393669b 100644
> --- a/include/linux/zswap.h
> +++ b/include/linux/zswap.h
> @@ -15,6 +15,7 @@ bool zswap_load(struct folio *folio);
>  void zswap_invalidate(int type, pgoff_t offset);
>  void zswap_swapon(int type);
>  void zswap_swapoff(int type);
> +void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg);
>
>  #else
>
> @@ -31,6 +32,7 @@ static inline bool zswap_load(struct folio *folio)
>  static inline void zswap_invalidate(int type, pgoff_t offset) {}
>  static inline void zswap_swapon(int type) {}
>  static inline void zswap_swapoff(int type) {}
> +static inline void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg) {}
>
>  #endif
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 6f7fc0101252..2ef49b471a16 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -5640,6 +5640,8 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
>         page_counter_set_min(&memcg->memory, 0);
>         page_counter_set_low(&memcg->memory, 0);
>
> +       zswap_memcg_offline_cleanup(memcg);

I think the "_cleanup" suffix is unnecessary. I guess most calls made
here are cleanup calls anyway.
> +
>         memcg_offline_kmem(memcg);
>         reparent_shrinker_deferred(memcg);
>         wb_memcg_offline(memcg);
> diff --git a/mm/swap.h b/mm/swap.h
> index 73c332ee4d91..c0dc73e10e91 100644
> @@ -289,15 +291,42 @@ static void zswap_update_total_size(void)
>         zswap_pool_total_size = total;
>  }
>
> +/* should be called under RCU */
> +static inline struct mem_cgroup *get_mem_cgroup_from_entry(struct zswap_entry *entry)

Do not use "get" in the name if we are not actually taking a ref here.
mem_cgroup_from_entry()?

> +{
> +       return entry->objcg ? obj_cgroup_memcg(entry->objcg) : NULL;
> +}
> +
> +static inline int entry_to_nid(struct zswap_entry *entry)
> +{
> +       return page_to_nid(virt_to_page(entry));
> +}
> +
> +void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg)
> +{
> +       struct zswap_pool *pool;
> +
> +       /* lock out zswap pools list modification */
> +       spin_lock(&zswap_pools_lock);
> +       list_for_each_entry(pool, &zswap_pools, list) {
> +               spin_lock(&pool->next_shrink_lock);

This lock is only needed to synchronize updating pool->next_shrink,
right? Can we just use atomic operations instead? (e.g. cmpxchg()).

> +               if (pool->next_shrink == memcg)
> +                       pool->next_shrink =
> +                               mem_cgroup_iter(NULL, pool->next_shrink, NULL, true);
> +               spin_unlock(&pool->next_shrink_lock);
> +       }
> +       spin_unlock(&zswap_pools_lock);
> +}
> +
>  /*********************************
>  * zswap entry functions
>  **********************************/
>  static struct kmem_cache *zswap_entry_cache;
>
> -static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
> +static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp, int nid)
>  {
>         struct zswap_entry *entry;
> -       entry = kmem_cache_alloc(zswap_entry_cache, gfp);
> +       entry = kmem_cache_alloc_node(zswap_entry_cache, gfp, nid);
>         if (!entry)
>                 return NULL;
>         entry->refcount = 1;
[..]
> @@ -1233,15 +1369,15 @@ bool zswap_store(struct folio *folio)
>                 zswap_invalidate_entry(tree, dupentry);
>         }
>         spin_unlock(&tree->lock);
> -
> -       /*
> -        * XXX: zswap reclaim does not work with cgroups yet. Without a
> -        * cgroup-aware entry LRU, we will push out entries system-wide based on
> -        * local cgroup limits.
> -        */
>         objcg = get_obj_cgroup_from_folio(folio);
> -       if (objcg && !obj_cgroup_may_zswap(objcg))
> -               goto reject;
> +       if (objcg && !obj_cgroup_may_zswap(objcg)) {
> +               memcg = get_mem_cgroup_from_objcg(objcg);
> +               if (shrink_memcg(memcg)) {
> +                       mem_cgroup_put(memcg);
> +                       goto reject;
> +               }
> +               mem_cgroup_put(memcg);

Can we just use RCU here as well? (same around memcg_list_lru_alloc()
call below).

> +       }
>
>         /* reclaim space if needed */
>         if (zswap_is_full()) {
> @@ -1258,7 +1394,7 @@ bool zswap_store(struct folio *folio)
>         }
>
>         /* allocate entry */
> -       entry = zswap_entry_cache_alloc(GFP_KERNEL);
> +       entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
>         if (!entry) {
>                 zswap_reject_kmemcache_fail++;
>                 goto reject;
> @@ -1285,6 +1421,15 @@ bool zswap_store(struct folio *folio)
>         if (!entry->pool)
>                 goto freepage;
>
> +       if (objcg) {
> +               memcg = get_mem_cgroup_from_objcg(objcg);
> +               if (memcg_list_lru_alloc(memcg, &entry->pool->list_lru, GFP_KERNEL)) {
> +                       mem_cgroup_put(memcg);
> +                       goto put_pool;
> +               }
> +               mem_cgroup_put(memcg);
> +       }
> +
>         /* compress */
>         acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
>
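For illustration only, the cmpxchg()-based cursor update mentioned above
might look roughly like the sketch below. This is not part of the posted
patch: the helper name zswap_advance_shrink_cursor() is made up, and it
assumes pool->next_shrink is only ever modified through atomic helpers so
that next_shrink_lock could be dropped entirely.

/*
 * Hypothetical sketch (not from the patch): swing pool->next_shrink off
 * the offlined memcg with an atomic compare-and-exchange instead of a
 * dedicated spinlock.  Would be called from zswap_memcg_offline_cleanup()
 * with zswap_pools_lock held, once per pool.
 */
static void zswap_advance_shrink_cursor(struct zswap_pool *pool,
					struct mem_cgroup *memcg)
{
	/* candidate next cursor, mirroring the patch's mem_cgroup_iter() call */
	struct mem_cgroup *next = mem_cgroup_iter(NULL, memcg, NULL, true);

	/* advance the cursor only if it still points at the offlined memcg */
	if (cmpxchg(&pool->next_shrink, memcg, next) != memcg)
		mem_cgroup_iter_break(NULL, next);	/* lost the race; drop next */
}

Whether the reference counting around mem_cgroup_iter() works out exactly
like this would need checking against the rest of the series; the point is
only that updating the cursor itself does not seem to need its own lock.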