Date: Fri, 18 Nov 2022 13:01:22 -0800
From: Minchan Kim
To: Nhat Pham
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, ngupta@vflare.org, senozhatsky@chromium.org,
	sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com
Subject: Re: [PATCH v5 6/6] zsmalloc: Implement writeback mechanism for zsmalloc
Message-ID:
References: <20221118182407.82548-1-nphamcs@gmail.com>
 <20221118182407.82548-7-nphamcs@gmail.com>
In-Reply-To: <20221118182407.82548-7-nphamcs@gmail.com>
On Fri, Nov 18, 2022 at 10:24:07AM -0800, Nhat Pham wrote:
> This commit adds the writeback mechanism for zsmalloc, analogous to the
> zbud allocator. Zsmalloc will attempt to determine the coldest zspage
> (i.e least recently used) in the pool, and attempt to write back all the
> stored compressed objects via the pool's evict handler.
>
> Signed-off-by: Nhat Pham
> ---
>  mm/zsmalloc.c | 193 +++++++++++++++++++++++++++++++++++++++++++++++---
>  1 file changed, 182 insertions(+), 11 deletions(-)
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 3ff86f57d08c..d73b9f9e9adf 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -271,12 +271,13 @@ struct zspage {
>  #ifdef CONFIG_ZPOOL
>  	/* links the zspage to the lru list in the pool */
>  	struct list_head lru;
> +	bool under_reclaim;
> +	/* list of unfreed handles whose objects have been reclaimed */
> +	unsigned long *deferred_handles;
>  #endif
>
>  	struct zs_pool *pool;
> -#ifdef CONFIG_COMPACTION
>  	rwlock_t lock;
> -#endif
>  };
>
>  struct mapping_area {
> @@ -297,10 +298,11 @@ static bool ZsHugePage(struct zspage *zspage)
>  	return zspage->huge;
>  }
>
> -#ifdef CONFIG_COMPACTION
>  static void migrate_lock_init(struct zspage *zspage);
>  static void migrate_read_lock(struct zspage *zspage);
>  static void migrate_read_unlock(struct zspage *zspage);
> +
> +#ifdef CONFIG_COMPACTION
>  static void migrate_write_lock(struct zspage *zspage);
>  static void migrate_write_lock_nested(struct zspage *zspage);
>  static void migrate_write_unlock(struct zspage *zspage);
> @@ -308,9 +310,6 @@ static void kick_deferred_free(struct zs_pool *pool);
>  static void init_deferred_free(struct zs_pool *pool);
>  static void SetZsPageMovable(struct zs_pool *pool, struct zspage *zspage);
>  #else
> -static void migrate_lock_init(struct zspage *zspage) {}
> -static void migrate_read_lock(struct zspage *zspage) {}
> -static void migrate_read_unlock(struct zspage *zspage) {}
>  static void migrate_write_lock(struct zspage *zspage) {}
>  static void migrate_write_lock_nested(struct zspage *zspage) {}
>  static void migrate_write_unlock(struct zspage *zspage) {}
> @@ -425,6 +424,27 @@ static void zs_zpool_free(void *pool, unsigned long handle)
>  	zs_free(pool, handle);
>  }
>
> +static int zs_reclaim_page(struct zs_pool *pool, unsigned int retries);
> +
> +static int zs_zpool_shrink(void *pool, unsigned int pages,
> +			unsigned int *reclaimed)
> +{
> +	unsigned int total = 0;
> +	int ret = -EINVAL;
> +
> +	while (total < pages) {
> +		ret = zs_reclaim_page(pool, 8);
> +		if (ret < 0)
> +			break;
> +		total++;
> +	}
> +
> +	if (reclaimed)
> +		*reclaimed = total;
> +
> +	return ret;
> +}
> +
>  static void *zs_zpool_map(void *pool, unsigned long handle,
>  			enum zpool_mapmode mm)
>  {
> @@ -463,6 +483,7 @@ static struct zpool_driver zs_zpool_driver = {
>  	.malloc_support_movable = true,
>  	.malloc = zs_zpool_malloc,
>  	.free = zs_zpool_free,
> +	.shrink = zs_zpool_shrink,
>  	.map = zs_zpool_map,
>  	.unmap = zs_zpool_unmap,
>  	.total_size = zs_zpool_total_size,
> @@ -936,6 +957,23 @@ static int trylock_zspage(struct zspage *zspage)
>  	return 0;
>  }
>
> +#ifdef CONFIG_ZPOOL
> +/*
> + * Free all the deferred handles whose objects are freed in zs_free.
> + */
> +static void free_handles(struct zs_pool *pool, struct zspage *zspage)
> +{
> +	unsigned long handle = (unsigned long)zspage->deferred_handles;
> +
> +	while (handle) {
> +		unsigned long nxt_handle = handle_to_obj(handle);
> +
> +		cache_free_handle(pool, handle);
> +		handle = nxt_handle;
> +	}
> +}
> +#else
> +static inline void free_handles(struct zs_pool *pool, struct zspage *zspage) {}
> +#endif
> +
>  static void __free_zspage(struct zs_pool *pool, struct size_class *class,
>  		struct zspage *zspage)
>  {
> @@ -950,6 +988,11 @@ static void __free_zspage(struct zs_pool *pool, struct size_class *class,
>  	VM_BUG_ON(get_zspage_inuse(zspage));
>  	VM_BUG_ON(fg != ZS_EMPTY);
>
> +#ifdef CONFIG_ZPOOL

Let's remove the ifdef machinery here.

> +	/* Free all deferred handles from zs_free */
> +	free_handles(pool, zspage);
> +#endif
> +

Other than that, looks good to me.