From: Ryan Roberts <ryan.roberts@arm.com>
Date: Tue, 17 Oct 2023 19:25:52 +0100
Subject: Re: [PATCH v2 1/2] zswap: make shrinking memcg-aware
Message-ID: <0dd0bedf-a6de-4176-8c2e-6abab2aed3fc@arm.com>
To: Domenico Cerasuolo
Cc: Nhat Pham, akpm@linux-foundation.org, hannes@cmpxchg.org, yosryahmed@google.com, sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, muchun.song@linux.dev, linux-mm@kvack.org, kernel-team@meta.com, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org
References: <20230919171447.2712746-1-nphamcs@gmail.com> <20230919171447.2712746-2-nphamcs@gmail.com> <21606fe5-fb9b-4d37-98ab-38c96819893b@arm.com>
Content-Language: en-GB
On 17/10/2023 18:56, Domenico Cerasuolo wrote:
> 
> 
> On Tue, Oct 17, 2023 at 7:44 PM Ryan Roberts wrote:
> 
>     On 19/09/2023 18:14, Nhat Pham wrote:
>     > From: Domenico Cerasuolo
>     >
>     > Currently, we only have a single global LRU for zswap. This makes it
>     > impossible to perform workload-specific shrinking - a memcg cannot
>     > determine which pages in the pool it owns, and often ends up writing
>     > pages from other memcgs.
>     > This issue has been previously observed in
>     > practice and mitigated by simply disabling memcg-initiated shrinking:
>     >
>     > https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u
>     >
>     > This patch fully resolves the issue by replacing the global zswap LRU
>     > with memcg- and NUMA-specific LRUs, and modifying the reclaim logic:
>     >
>     > a) When a store attempt hits a memcg limit, it now triggers a
>     >    synchronous reclaim attempt that, if successful, allows the new
>     >    hotter page to be accepted by zswap.
>     > b) If the store attempt instead hits the global zswap limit, it will
>     >    trigger an asynchronous reclaim attempt, in which a memcg is
>     >    selected for reclaim in a round-robin-like fashion.
>     >
>     > Signed-off-by: Domenico Cerasuolo
>     > Co-developed-by: Nhat Pham
>     > Signed-off-by: Nhat Pham
>     > ---
>     >  include/linux/list_lru.h   |  39 +++++++
>     >  include/linux/memcontrol.h |   5 +
>     >  include/linux/zswap.h      |   9 ++
>     >  mm/list_lru.c              |  46 ++++++--
>     >  mm/swap_state.c            |  19 ++++
>     >  mm/zswap.c                 | 221 +++++++++++++++++++++++++++++--------
>     >  6 files changed, 287 insertions(+), 52 deletions(-)
>     >
>     
>     [...]
>     
>     > @@ -1199,8 +1272,10 @@ bool zswap_store(struct folio *folio)
>     >       struct scatterlist input, output;
>     >       struct crypto_acomp_ctx *acomp_ctx;
>     >       struct obj_cgroup *objcg = NULL;
>     > +     struct mem_cgroup *memcg = NULL;
>     >       struct zswap_pool *pool;
>     >       struct zpool *zpool;
>     > +     int lru_alloc_ret;
>     >       unsigned int dlen = PAGE_SIZE;
>     >       unsigned long handle, value;
>     >       char *buf;
>     > @@ -1218,14 +1293,15 @@ bool zswap_store(struct folio *folio)
>     >       if (!zswap_enabled || !tree)
>     >               return false;
>     >
>     > -     /*
>     > -      * XXX: zswap reclaim does not work with cgroups yet. Without a
>     > -      * cgroup-aware entry LRU, we will push out entries system-wide based on
>     > -      * local cgroup limits.
>     > -      */
>     >       objcg = get_obj_cgroup_from_folio(folio);
>     > -     if (objcg && !obj_cgroup_may_zswap(objcg))
>     > -             goto reject;
>     > +     if (objcg && !obj_cgroup_may_zswap(objcg)) {
>     > +             memcg = get_mem_cgroup_from_objcg(objcg);
>     > +             if (shrink_memcg(memcg)) {
>     > +                     mem_cgroup_put(memcg);
>     > +                     goto reject;
>     > +             }
>     > +             mem_cgroup_put(memcg);
>     > +     }
>     >
>     >       /* reclaim space if needed */
>     >       if (zswap_is_full()) {
>     > @@ -1240,7 +1316,11 @@ bool zswap_store(struct folio *folio)
>     >               else
>     >                       zswap_pool_reached_full = false;
>     >       }
>     > -
>     > +     pool = zswap_pool_current_get();
>     > +     if (!pool) {
>     > +             ret = -EINVAL;
>     > +             goto reject;
>     > +     }
> 
>     Hi, I'm working to add support for large folios within zswap, and noticed this
>     piece of code added by this change. I don't see any corresponding put. Have I
>     missed some detail or is there a bug here?
> 
>     >       /* allocate entry */
>     >       entry = zswap_entry_cache_alloc(GFP_KERNEL);
>     >       if (!entry) {
>     > @@ -1256,6 +1336,7 @@ bool zswap_store(struct folio *folio)
>     >                       entry->length = 0;
>     >                       entry->value = value;
>     >                       atomic_inc(&zswap_same_filled_pages);
>     > +                     zswap_pool_put(pool);
> 
>     I see you put it in this error path, but after that, there is no further
>     mention.
>     >                       goto insert_entry;
>     >               }
>     >               kunmap_atomic(src);
>     > @@ -1264,6 +1345,15 @@ bool zswap_store(struct folio *folio)
>     >       if (!zswap_non_same_filled_pages_enabled)
>     >               goto freepage;
>     >
>     > +     if (objcg) {
>     > +             memcg = get_mem_cgroup_from_objcg(objcg);
>     > +             lru_alloc_ret = memcg_list_lru_alloc(memcg, &pool->list_lru, GFP_KERNEL);
>     > +             mem_cgroup_put(memcg);
>     > +
>     > +             if (lru_alloc_ret)
>     > +                     goto freepage;
>     > +     }
>     > +
>     >       /* if entry is successfully added, it keeps the reference */
>     >       entry->pool = zswap_pool_current_get();
> 
>     The entry takes its reference to the pool here.
> 
>     Thanks,
>     Ryan
> 
> 
> Thanks Ryan, I think you're right. Coincidentally, we're about to send a new
> version of the series, and will make sure to address this too.

Ahh... I'm on top of mm-unstable - for some reason I thought I was on an rc
and this was already in. I guess it's less of an issue in that case.
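[Editor's note] The imbalance discussed in the thread can be reduced to a toy refcount model: the store path takes a temporary pool reference up front, the entry later takes its own reference, and unless the temporary one is dropped on every path before returning, it leaks. This is only a sketch of the pattern; the names (`toy_pool`, `pool_get`, `toy_store`) are hypothetical stand-ins, not the kernel API.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for a refcounted pool; not the kernel's zswap_pool. */
struct toy_pool { int refcount; };

static void pool_get(struct toy_pool *p) { p->refcount++; }
static void pool_put(struct toy_pool *p) { p->refcount--; }

/* Mimics the flow under discussion: a temporary reference is taken at the
 * top of the store path, then the entry takes its own reference. If the
 * temporary reference is not dropped before returning, it leaks. */
static bool toy_store(struct toy_pool *p, bool drop_temp_ref)
{
	pool_get(p);            /* temporary ref, like zswap_pool_current_get() */
	pool_get(p);            /* entry's own ref: entry->pool = ...get() */
	if (drop_temp_ref)
		pool_put(p);    /* balanced path: temporary ref released */
	return true;            /* entry keeps exactly one reference */
}
```

With the temporary reference dropped, the pool ends up holding exactly one extra reference (the entry's). Without it, one reference leaks per successful store, which is the missing-put behaviour questioned above.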