Date: Thu, 17 Nov 2022 14:15:11 -0800
From: Minchan Kim
To: Nhat Pham
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, ngupta@vflare.org, senozhatsky@chromium.org, sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com
Subject: Re: [PATCH v4 3/5] zsmalloc: Add a LRU to zs_pool to keep track of zspages in LRU order
References: <20221117163839.230900-1-nphamcs@gmail.com> <20221117163839.230900-4-nphamcs@gmail.com>
In-Reply-To: <20221117163839.230900-4-nphamcs@gmail.com>

On Thu, Nov 17, 2022 at 08:38:37AM -0800, Nhat Pham
wrote:
> This helps determine the coldest zspages as candidates for writeback.
>
> Signed-off-by: Nhat Pham
> ---
>  mm/zsmalloc.c | 46 ++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 46 insertions(+)
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 326faa751f0a..2557b55ec767 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -239,6 +239,11 @@ struct zs_pool {
>  	/* Compact classes */
>  	struct shrinker shrinker;
>
> +#ifdef CONFIG_ZPOOL
> +	/* List tracking the zspages in LRU order by most recently added object */
> +	struct list_head lru;
> +#endif
> +
>  #ifdef CONFIG_ZSMALLOC_STAT
>  	struct dentry *stat_dentry;
>  #endif
> @@ -260,6 +265,12 @@ struct zspage {
>  	unsigned int freeobj;
>  	struct page *first_page;
>  	struct list_head list; /* fullness list */
> +
> +#ifdef CONFIG_ZPOOL
> +	/* links the zspage to the lru list in the pool */
> +	struct list_head lru;
> +#endif
> +
>  	struct zs_pool *pool;
>  #ifdef CONFIG_COMPACTION
>  	rwlock_t lock;
> @@ -352,6 +363,18 @@ static void cache_free_zspage(struct zs_pool *pool, struct zspage *zspage)
>  	kmem_cache_free(pool->zspage_cachep, zspage);
>  }
>
> +#ifdef CONFIG_ZPOOL
> +/* Moves the zspage to the front of the zspool's LRU */
> +static void move_to_front(struct zs_pool *pool, struct zspage *zspage)
> +{
> +	assert_spin_locked(&pool->lock);
> +
> +	if (!list_empty(&zspage->lru))
> +		list_del(&zspage->lru);
> +	list_add(&zspage->lru, &pool->lru);
> +}
> +#endif
> +
>  /* pool->lock(which owns the handle) synchronizes races */
>  static void record_obj(unsigned long handle, unsigned long obj)
>  {
> @@ -953,6 +976,9 @@ static void free_zspage(struct zs_pool *pool, struct size_class *class,
>  	}
>
>  	remove_zspage(class, zspage, ZS_EMPTY);
> +#ifdef CONFIG_ZPOOL
> +	list_del(&zspage->lru);
> +#endif
>  	__free_zspage(pool, class, zspage);
>  }
>
> @@ -998,6 +1024,10 @@ static void init_zspage(struct size_class *class, struct zspage *zspage)
>  		off %= PAGE_SIZE;
>  	}
>
> +#ifdef CONFIG_ZPOOL
> +	INIT_LIST_HEAD(&zspage->lru);
> +#endif
> +
>  	set_freeobj(zspage, 0);
>  }
>
> @@ -1418,6 +1448,11 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
>  		fix_fullness_group(class, zspage);
>  		record_obj(handle, obj);
>  		class_stat_inc(class, OBJ_USED, 1);
> +
> +#ifdef CONFIG_ZPOOL
> +		/* Move the zspage to front of pool's LRU */
> +		move_to_front(pool, zspage);
> +#endif
>  		spin_unlock(&pool->lock);
>
>  		return handle;
> @@ -1444,6 +1479,10 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
>
>  	/* We completely set up zspage so mark them as movable */
>  	SetZsPageMovable(pool, zspage);
> +#ifdef CONFIG_ZPOOL
> +	/* Move the zspage to front of pool's LRU */
> +	move_to_front(pool, zspage);
> +#endif
>  	spin_unlock(&pool->lock);

Why do we move the zspage to the front of the LRU in the alloc path instead of in an accessor? Isn't zs_map_object a better place, since it carries the clear semantic that the user is starting to access the object? If you are concerned about unnecessary churn, can we do it only for WO access?

Yeah, it is still weird layering, since the allocator can't know what the user will do with the object after the access (keep it or discard it), so an LRU inside the allocator is not a good design, but I just want to make it a little more sensible.