From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
Date: Thu, 19 Oct 2023 14:47:42 +0200
Subject: Re: [PATCH v3 2/5] zswap: make shrinking memcg-aware
To: Yosry Ahmed
Cc: Nhat Pham, akpm@linux-foundation.org, hannes@cmpxchg.org, sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, muchun.song@linux.dev, linux-mm@kvack.org, kernel-team@meta.com, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
References: <20231017232152.2605440-1-nphamcs@gmail.com> <20231017232152.2605440-3-nphamcs@gmail.com>
On Thu, Oct 19, 2023 at 3:12 AM Yosry Ahmed wrote:
>
> On Wed, Oct 18, 2023 at 4:47 PM Nhat Pham wrote:
> >
> > On Wed, Oct 18, 2023 at 4:20 PM Yosry Ahmed wrote:
> > >
> > > On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham wrote:
> > > >
> > > > From: Domenico Cerasuolo
> > > >
> > > > Currently, we only have a single global LRU for zswap.
> > > > This makes it
> > > > impossible to perform workload-specific shrinking - a memcg cannot
> > > > determine which pages in the pool it owns, and often ends up writing
> > > > pages from other memcgs. This issue has been previously observed in
> > > > practice and mitigated by simply disabling memcg-initiated shrinking:
> > > >
> > > > https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u
> > > >
> > > > This patch fully resolves the issue by replacing the global zswap LRU
> > > > with memcg- and NUMA-specific LRUs, and modifying the reclaim logic:
> > > >
> > > > a) When a store attempt hits a memcg limit, it now triggers a
> > > >    synchronous reclaim attempt that, if successful, allows the new
> > > >    hotter page to be accepted by zswap.
> > > > b) If the store attempt instead hits the global zswap limit, it will
> > > >    trigger an asynchronous reclaim attempt, in which a memcg is
> > > >    selected for reclaim in a round-robin-like fashion.
> > >
> > > Could you explain the rationale behind the difference in behavior here
> > > between the global limit and the memcg limit?
> >
> > Reclaim on a global limit hit was previously asynchronous too.
> > We just added the round-robin part, because now the zswap LRU is
> > cgroup-aware :)
> >
> > For the cgroup limit hit, however, we cannot make it asynchronous,
> > as it is a bit hairy to add a per-cgroup shrink_work. So, we just
> > perform the reclaim synchronously.
> >
> > The question is whether it makes sense to make the global limit
> > reclaim synchronous too. That is a task of its own, IMO.
>
> Let's add such context to the commit log, and perhaps an XXX comment
> in the code asking whether we should consider doing the reclaim
> synchronously for the global limit too.

Makes sense. I wonder if the original reason for switching from synchronous
to asynchronous reclaim will still be valid with the shrinker in place.
> >
> > (FWIW, this somewhat mirrors the direct reclaimer vs. kswapd
> > story to me, but don't quote me too hard on this).
>
[..]
> > > >
> > > >         /* Hold a reference to prevent a free during writeback */
> > > >         zswap_entry_get(entry);
> > > >         spin_unlock(&tree->lock);
> > > >
> > > > -       ret = zswap_writeback_entry(entry, tree);
> > > > +       writeback_result = zswap_writeback_entry(entry, tree);
> > > >
> > > >         spin_lock(&tree->lock);
> > > > -       if (ret) {
> > > > -               /* Writeback failed, put entry back on LRU */
> > > > -               spin_lock(&pool->lru_lock);
> > > > -               list_move(&entry->lru, &pool->lru);
> > > > -               spin_unlock(&pool->lru_lock);
> > > > +       if (writeback_result) {
> > > > +               zswap_reject_reclaim_fail++;
> > > > +               memcg = get_mem_cgroup_from_entry(entry);
> > > > +               spin_lock(lock);
> > > > +               /* we cannot use zswap_lru_add here, because it increments node's lru count */
> > > > +               list_lru_putback(&entry->pool->list_lru, item, entry_to_nid(entry), memcg);
> > > > +               spin_unlock(lock);
> > > > +               mem_cgroup_put(memcg);
> > > > +               ret = LRU_RETRY;
> > > >                 goto put_unlock;
> > > >         }
> > > > +       zswap_written_back_pages++;
> > >
> > > Why is this moved here from zswap_writeback_entry()? Also why is
> > > zswap_reject_reclaim_fail incremented here instead of inside
> > > zswap_writeback_entry()?
> >
> > Domenico should know this better than me, but my understanding
> > is that moving it here protects concurrent modifications of
> > zswap_written_back_pages with the tree lock.
> >
> > Was writeback single-threaded in the past? This counter is non-atomic,
> > and doesn't seem to be protected by any locks...
> >
> > There definitely can be concurrent stores now though - with
> > a synchronous reclaim from a cgroup-limit hit and another
> > from the old shrink worker.
> >
> > (and with the new zswap shrinker, concurrent reclaim is
> > the expectation!)
>
> The comment above the stats definition states that they are left
> unprotected purposefully.
> If we want to fix that, let's do it
> separately. If this patch makes it significantly worse, such that it
> would cause a regression, let's at least do it in a separate patch.
> The diff here is too large already.
>
> >
> > zswap_reject_reclaim_fail was previously incremented in
> > shrink_worker, I think. We need it to be incremented
> > for the shrinker as well, so we might as well move it here.
>
> Wouldn't moving it inside zswap_writeback_entry(), near incrementing
> zswap_written_back_pages, make it easier to follow?

As Nhat said, zswap_reject_reclaim_fail++ had to be moved; I naturally moved it
here because it's where we act upon the result of the writeback. I then noticed
that zswap_written_back_pages++ was elsewhere and decided to move that as well,
so that they're in the same place and at least under the tree lock.

It's not meant to fix the unprotected counters; it's just a mitigation, since we
are forced to move at least one of them.