From: Yosry Ahmed <yosryahmed@google.com>
Date: Tue, 5 Dec 2023 21:59:50 -0800
Subject: Re: [PATCH v8 6/6] zswap: shrinks zswap pool based on memory pressure
To: Chengming Zhou
Cc: Nhat Pham, akpm@linux-foundation.org, hannes@cmpxchg.org, cerasuolodomenico@gmail.com, sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, muchun.song@linux.dev, chrisl@kernel.org, linux-mm@kvack.org, kernel-team@meta.com, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
References: <20231130194023.4102148-1-nphamcs@gmail.com> <20231130194023.4102148-7-nphamcs@gmail.com>

[..]
> > @@ -526,6 +582,102 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
> >  	return entry;
> >  }
> >
> > +/*********************************
> > +* shrinker functions
> > +**********************************/
> > +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> > +				       spinlock_t *lock, void *arg);
> > +
> > +static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
> > +					 struct shrink_control *sc)
> > +{
> > +	struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
> > +	unsigned long shrink_ret, nr_protected, lru_size;
> > +	struct zswap_pool *pool = shrinker->private_data;
> > +	bool encountered_page_in_swapcache = false;
> > +
> > +	nr_protected =
> > +		atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
> > +	lru_size = list_lru_shrink_count(&pool->list_lru, sc);
> > +
> > +	/*
> > +	 * Abort if the shrinker is disabled or if we are shrinking into the
> > +	 * protected region.
> > +	 *
> > +	 * This short-circuiting is necessary because if we have many
> > +	 * concurrent reclaimers getting the freeable zswap object counts at
> > +	 * the same time (before any of them made reasonable progress), the
> > +	 * total number of reclaimed objects might be more than the number of
> > +	 * unprotected objects (i.e. the reclaimers will reclaim into the
> > +	 * protected area of the zswap LRU).
> > +	 */
> > +	if (!zswap_shrinker_enabled || nr_protected >= lru_size - sc->nr_to_scan) {
> > +		sc->nr_scanned = 0;
> > +		return SHRINK_STOP;
> > +	}
> > +
> > +	shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
> > +					  &encountered_page_in_swapcache);
> > +
> > +	if (encountered_page_in_swapcache)
> > +		return SHRINK_STOP;
> > +
> > +	return shrink_ret ? shrink_ret : SHRINK_STOP;
> > +}
> > +
> > +static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
> > +					  struct shrink_control *sc)
> > +{
> > +	struct zswap_pool *pool = shrinker->private_data;
> > +	struct mem_cgroup *memcg = sc->memcg;
> > +	struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
> > +	unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
> > +
> > +#ifdef CONFIG_MEMCG_KMEM
> > +	cgroup_rstat_flush(memcg->css.cgroup);
> > +	nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
> > +	nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
> > +#else
> > +	/* use pool stats instead of memcg stats */
> > +	nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
> > +	nr_stored = atomic_read(&pool->nr_stored);
> > +#endif
> > +
> > +	if (!zswap_shrinker_enabled || !nr_stored)
>
> When I tested this series with !zswap_shrinker_enabled (the default
> case), I found the performance is much worse than without this patch.
>
> Testcase: memory.max=2G, zswap enabled, kernel build -j32 in a tmpfs
> directory.
>
> The reason seems to be the above cgroup_rstat_flush(), which causes a
> lot of rstat lock contention on the zswap_store() path. If I move the
> "zswap_shrinker_enabled" check above the cgroup_rstat_flush(), the
> performance becomes much better.
>
> Maybe we can put the "zswap_shrinker_enabled" check above
> cgroup_rstat_flush()?

Yes, we should do nothing if !zswap_shrinker_enabled. We should also use
mem_cgroup_flush_stats() here, like other places, unless accuracy is
crucial, which I doubt given that reclaim itself uses
mem_cgroup_flush_stats().
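Something like the below, as a rough sketch of the reordering (untested;
with the current no-argument mem_cgroup_flush_stats(), which would then
grow a memcg argument on top of my series):

	/* bail out before touching memcg stats at all */
	if (!zswap_shrinker_enabled)
		return 0;

#ifdef CONFIG_MEMCG_KMEM
	/* flush and read memcg stats only after the cheap enablement check */
	mem_cgroup_flush_stats();
	nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
	nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
#else
	/* use pool stats instead of memcg stats */
	nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
	nr_stored = atomic_read(&pool->nr_stored);
#endif

	if (!nr_stored)
		return 0;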
mem_cgroup_flush_stats() has some thresholding to make sure we don't do
flushes unnecessarily, and I have a pending series in mm-unstable that
makes that thresholding per-memcg. Keep in mind that adding a call to
mem_cgroup_flush_stats() will cause a conflict in mm-unstable, because
the series there adds a memcg argument to mem_cgroup_flush_stats(). That
should be easy to resolve, though; I can post a fixlet for my series to
add the memcg argument to the new user if needed.

> Thanks!
>
> > +		return 0;
> > +
> > +	nr_protected =
> > +		atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
> > +	nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
> > +	/*
> > +	 * Subtract from the lru size an estimate of the number of pages
> > +	 * that should be protected.
> > +	 */
> > +	nr_freeable = nr_freeable > nr_protected ? nr_freeable - nr_protected : 0;
> > +
> > +	/*
> > +	 * Scale the number of freeable pages by the memory saving factor.
> > +	 * This ensures that the better zswap compresses memory, the fewer
> > +	 * pages we will evict to swap (as it will otherwise incur IO for
> > +	 * relatively small memory saving).
> > +	 */
> > +	return mult_frac(nr_freeable, nr_backing, nr_stored);
> > +}
> > +
> > +static void zswap_alloc_shrinker(struct zswap_pool *pool)
> > +{
> > +	pool->shrinker =
> > +		shrinker_alloc(SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE, "mm-zswap");
> > +	if (!pool->shrinker)
> > +		return;
> > +
> > +	pool->shrinker->private_data = pool;
> > +	pool->shrinker->scan_objects = zswap_shrinker_scan;
> > +	pool->shrinker->count_objects = zswap_shrinker_count;
> > +	pool->shrinker->batch = 0;
> > +	pool->shrinker->seeks = DEFAULT_SEEKS;
> > +}
> > +
> >  /*********************************
> >  * per-cpu code
> >  **********************************/
[..]
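P.S. a quick worked example of the mult_frac() scaling in the quoted
hunk, with made-up numbers: if a memcg has nr_freeable = 1000
unprotected zswap entries whose compressed data occupies nr_backing =
250 backing pages out of nr_stored = 1000 stored pages (a 4:1
compression ratio), zswap_shrinker_count() reports
mult_frac(1000, 250, 1000) = 1000 * 250 / 1000 = 250 freeable objects,
i.e. the better the compression, the less aggressively the shrinker
scans the LRU.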