From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 3 Nov 2022 08:53:29 -0700
From: Minchan Kim <minchan.kim@gmail.com>
To: Johannes Weiner
Cc: Sergey Senozhatsky, Nhat Pham, akpm@linux-foundation.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, ngupta@vflare.org,
	sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com
Subject: Re: [PATCH 2/5] zsmalloc: Consolidate zs_pool's migrate_lock and size_class's locks
References: <20221026200613.1031261-1-nphamcs@gmail.com>
	<20221026200613.1031261-3-nphamcs@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
On Thu, Nov 03, 2022 at 11:18:04AM -0400, Johannes Weiner wrote:
> On Wed, Nov 02, 2022 at 02:36:35PM -0700, Minchan Kim wrote:
> > On Wed, Nov 02, 2022 at 12:28:56PM +0900, Sergey Senozhatsky wrote:
> > > On (22/10/26 13:06), Nhat Pham wrote:
> > > >  struct size_class {
> > > > -	spinlock_t lock;
> > > >  	struct list_head fullness_list[NR_ZS_FULLNESS];
> > > >  	/*
> > > >  	 * Size of objects stored in this class. Must be multiple
> > > > @@ -247,8 +245,7 @@ struct zs_pool {
> > > >  #ifdef CONFIG_COMPACTION
> > > >  	struct work_struct free_work;
> > > >  #endif
> > > > -	/* protect page/zspage migration */
> > > > -	rwlock_t migrate_lock;
> > > > +	spinlock_t lock;
> > > >  };
> > >
> > > I'm not in love with this, to be honest. One big pool lock instead
> > > of 255 per-class locks doesn't look attractive, as one big pool lock
> > > is going to be hammered quite a lot when zram is used, e.g. as a regular
> > > block device with a file system and is under heavy parallel writes/reads.
>
> TBH the class always struck me as an odd scope to split the lock. Lock
> contention depends on how variable the compression rate is of the
> hottest incoming data, which is unpredictable from a user POV.
>
> My understanding is that the primary usecase for zram is swapping, and
> the pool lock is the same granularity as the swap locking.

People also use zram to store cached object files on build servers.

> Regardless, we'll do some benchmarks with filesystems to understand
> what a reasonable tradeoff would be between overhead and complexity.

Thanks.

> Do you have a particular one in mind? (I'm thinking journaled ones are
> not of much interest, since their IO tends to be fairly serialized.)
>
> btrfs?

I am not sure what FSes others are using, but at least for me, just
plain ext4.

> > I also worry that the LRU stuff should be part of the allocator
> > instead of a higher level.
>
> I'm sorry, but that's not a reasonable objection.
>
> These patches implement a core feature of being a zswap backend, using
> standard LRU and locking techniques established by the other backends.
>
> I don't disagree that it would be nicer if zswap had a strong abstraction
> for backend pages and a generalized LRU. But that is major surgery on
> a codebase of over 6,500 lines. It's not a reasonable ask to change
> all that first before implementing a basic feature that's useful now.

By the same logic, folks added LRU logic to their allocators without
considering the effort of moving the LRU into an upper layer. And that
trend is still going on: I have seen people trying to add yet more
allocators multiple times. So if it is never a reasonable ask to
consider, we will never stop the trend.

> I get that your main interest is zram, and so this feature isn't of
> interest to you. But zram isn't the only user, nor is it the primary

I am interested in the feature, but my interest is more in having the
general swap layer manage the LRU, so that it could support any
hierarchy among swap devices, not only zswap.