From: Chengming Zhou <zhouchengming@bytedance.com>
Date: Tue, 13 Feb 2024 22:20:44 +0800
Subject: Re: [PATCH 1/2] mm/zswap: global lru and shrinker shared by all zswap_pools
To: Yosry Ahmed
Cc: Andrew Morton, Johannes Weiner, Nhat Pham, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20240210-zswap-global-lru-v1-0-853473d7b0da@bytedance.com> <20240210-zswap-global-lru-v1-1-853473d7b0da@bytedance.com>

On 2024/2/13 20:57, Yosry Ahmed wrote:
> On Sun, Feb 11, 2024 at 01:57:04PM +0000, Chengming Zhou wrote:
>> Dynamic zswap_pool creation may leave multiple zswap_pools on the
>> list, but only the first one is the current pool in use.
>>
>> Each zswap_pool has its own lru and shrinker, which is unnecessary
>> and has problems:
>>
>> 1. Under memory pressure, the shrinkers of all zswap_pools try to
>>    shrink their own lru, with no ordering between them.
>>
>> 2. When the zswap limit is hit, only the last zswap_pool's shrink_work
>>    tries to shrink its lru, which is inefficient.
>>
>> Having a single global lru and shrinker shared by all zswap_pools
>> is simpler and more efficient.
>
> It is also a great simplification.
>
>>
>> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
>> ---
>>  mm/zswap.c | 153 ++++++++++++++++++++++---------------------------------------
>>  1 file changed, 55 insertions(+), 98 deletions(-)
>>
>> diff --git a/mm/zswap.c b/mm/zswap.c
>> index 62fe307521c9..7668db8c10e3 100644
>> --- a/mm/zswap.c
>> +++ b/mm/zswap.c
>> @@ -176,14 +176,17 @@ struct zswap_pool {
>>  	struct kref kref;
>>  	struct list_head list;
>>  	struct work_struct release_work;
>> -	struct work_struct shrink_work;
>>  	struct hlist_node node;
>>  	char tfm_name[CRYPTO_MAX_ALG_NAME];
>> +};
>> +
>> +struct {
>
> static?

Ah, right, will add static.

>
>>  	struct list_lru list_lru;
>> -	struct mem_cgroup *next_shrink;
>> -	struct shrinker *shrinker;
>
> Just curious, any reason to change the relative ordering of members
> here? It produces a couple more lines of diff :)

The list_lru and the nr_stored atomic are used in the zswap_store/load
hot path, while the shrinker-related members are only touched on the
cold path, so I thought it was natural and clearer to group them by
usage.
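To illustrate, the end result (with the "static" added as you suggest)
groups the members roughly like below; the comments are just how I
think about it:

static struct {
	/* hot path: touched by zswap_store()/zswap_load() */
	struct list_lru list_lru;
	atomic_t nr_stored;
	/* cold path: only used by the shrinker and the shrink work */
	struct shrinker *shrinker;
	struct work_struct shrink_work;
	struct mem_cgroup *next_shrink;
} zswap;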
>
>>  	atomic_t nr_stored;
>> -};
>> +	struct shrinker *shrinker;
>> +	struct work_struct shrink_work;
>> +	struct mem_cgroup *next_shrink;
>> +} zswap;
>>
>>  /*
>>   * struct zswap_entry
>> @@ -301,9 +304,6 @@ static void zswap_update_total_size(void)
>>   * pool functions
>>  **********************************/
>>
>> -static void zswap_alloc_shrinker(struct zswap_pool *pool);
>> -static void shrink_worker(struct work_struct *w);
>> -
>>  static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
>>  {
>>  	int i;
>> @@ -353,30 +353,16 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
>>  	if (ret)
>>  		goto error;
>>
>> -	zswap_alloc_shrinker(pool);
>> -	if (!pool->shrinker)
>> -		goto error;
>> -
>> -	pr_debug("using %s compressor\n", pool->tfm_name);
>> -
>
> Why are we removing this debug print?

Oh, I just noticed this message is only needed when pool creation
succeeds, and the zswap_pool_debug() below already prints the
compressor name.

>
>>  	/* being the current pool takes 1 ref; this func expects the
>>  	 * caller to always add the new pool as the current pool
>>  	 */
>>  	kref_init(&pool->kref);
>>  	INIT_LIST_HEAD(&pool->list);
>> -	if (list_lru_init_memcg(&pool->list_lru, pool->shrinker))
>> -		goto lru_fail;
>> -	shrinker_register(pool->shrinker);
>> -	INIT_WORK(&pool->shrink_work, shrink_worker);
>> -	atomic_set(&pool->nr_stored, 0);
>>
>>  	zswap_pool_debug("created", pool);
>>
>>  	return pool;
>>
>> -lru_fail:
>> -	list_lru_destroy(&pool->list_lru);
>> -	shrinker_free(pool->shrinker);
>>  error:
>>  	if (pool->acomp_ctx)
>>  		free_percpu(pool->acomp_ctx);
> [..]
>> @@ -816,14 +777,10 @@ void zswap_folio_swapin(struct folio *folio)
>>
>>  void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg)
>>  {
>> -	struct zswap_pool *pool;
>> -
>> -	/* lock out zswap pools list modification */
>> +	/* lock out zswap shrinker walking memcg tree */
>>  	spin_lock(&zswap_pools_lock);
>> -	list_for_each_entry(pool, &zswap_pools, list) {
>> -		if (pool->next_shrink == memcg)
>> -			pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
>> -	}
>> +	if (zswap.next_shrink == memcg)
>> +		zswap.next_shrink = mem_cgroup_iter(NULL, zswap.next_shrink, NULL);
>
> Now that next_shrink has nothing to do with zswap pools, it feels weird
> that we are using zswap_pools_lock for its synchronization. Does it make
> sense to have a separate lock for it just for semantic purposes?

Agreed, it's clearer to have a separate lock for it.

>
>>  	spin_unlock(&zswap_pools_lock);
>>  }
>>
> [..]
>> @@ -1328,7 +1284,6 @@ static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
>>  static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
>>  		struct shrink_control *sc)
>>  {
>> -	struct zswap_pool *pool = shrinker->private_data;
>>  	struct mem_cgroup *memcg = sc->memcg;
>>  	struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
>>  	unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
>> @@ -1343,7 +1298,7 @@ static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
>>  #else
>>  	/* use pool stats instead of memcg stats */
>>  	nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
>
> "pool" is still being used here.

Oops, that should be changed to zswap_pool_total_size here.

>
>> -	nr_stored = atomic_read(&pool->nr_stored);
>> +	nr_stored = atomic_read(&zswap.nr_stored);
>>  #endif
>>
>>  	if (!nr_stored)
> [..]
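i.e. after switching to zswap_pool_total_size, the !CONFIG_MEMCG_KMEM
branch above would read something like this (untested):

#else
	/* use pool stats instead of memcg stats */
	nr_backing = zswap_pool_total_size >> PAGE_SHIFT;
	nr_stored = atomic_read(&zswap.nr_stored);
#endif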
>
>> @@ -1804,6 +1749,21 @@ static int zswap_setup(void)
>>  	if (ret)
>>  		goto hp_fail;
>>
>> +	shrink_wq = alloc_workqueue("zswap-shrink",
>> +			WQ_UNBOUND|WQ_MEM_RECLAIM, 1);
>> +	if (!shrink_wq)
>> +		goto hp_fail;
>
> I think we need a new label here to call cpuhp_remove_multi_state(), but
> apparently this is missing from the current code for some reason.

You are right! This should go to a new label that calls
cpuhp_remove_multi_state(); will fix it.

>
>> +
>> +	zswap.shrinker = zswap_alloc_shrinker();
>> +	if (!zswap.shrinker)
>> +		goto shrinker_fail;
>> +	if (list_lru_init_memcg(&zswap.list_lru, zswap.shrinker))
>> +		goto lru_fail;
>> +	shrinker_register(zswap.shrinker);
>> +
>> +	INIT_WORK(&zswap.shrink_work, shrink_worker);
>> +	atomic_set(&zswap.nr_stored, 0);
>> +
>>  	pool = __zswap_pool_create_fallback();
>>  	if (pool) {
>>  		pr_info("loaded using pool %s/%s\n", pool->tfm_name,
>> @@ -1815,19 +1775,16 @@ static int zswap_setup(void)
>>  		zswap_enabled = false;
>>  	}
>>
>> -	shrink_wq = alloc_workqueue("zswap-shrink",
>> -			WQ_UNBOUND|WQ_MEM_RECLAIM, 1);
>> -	if (!shrink_wq)
>> -		goto fallback_fail;
>> -
>>  	if (zswap_debugfs_init())
>>  		pr_warn("debugfs initialization failed\n");
>>  	zswap_init_state = ZSWAP_INIT_SUCCEED;
>>  	return 0;
>>
>> -fallback_fail:
>> -	if (pool)
>> -		zswap_pool_destroy(pool);
>> +lru_fail:
>> +	list_lru_destroy(&zswap.list_lru);
>
> Do we need to call list_lru_destroy() here? I know it is currently being
> called if list_lru_init_memcg() fails, but I fail to understand why. It
> seems like list_lru_destroy() will do nothing anyway.

Right, the list_lru_destroy() call is not needed here since it would do
nothing; will delete it. Thanks!

>
>> +	shrinker_free(zswap.shrinker);
>> +shrinker_fail:
>> +	destroy_workqueue(shrink_wq);
>>  hp_fail:
>>  	kmem_cache_destroy(zswap_entry_cache);
>>  cache_fail:
>>
>> --
>> b4 0.10.1
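For clarity, the setup and unwind order I have in mind after the fixes
above would be roughly the following (untested sketch; the
shrink_wq_fail label name is just a placeholder):

	shrink_wq = alloc_workqueue("zswap-shrink",
			WQ_UNBOUND|WQ_MEM_RECLAIM, 1);
	if (!shrink_wq)
		goto shrink_wq_fail;

	zswap.shrinker = zswap_alloc_shrinker();
	if (!zswap.shrinker)
		goto shrinker_fail;
	if (list_lru_init_memcg(&zswap.list_lru, zswap.shrinker))
		goto lru_fail;
	shrinker_register(zswap.shrinker);
	...
lru_fail:
	shrinker_free(zswap.shrinker);
shrinker_fail:
	destroy_workqueue(shrink_wq);
shrink_wq_fail:
	cpuhp_remove_multi_state(CPUHP_MM_ZSWP_POOL_PREPARE);
hp_fail:
	kmem_cache_destroy(zswap_entry_cache);

That way each label undoes exactly the steps that succeeded before it.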