Date: Thu, 3 Nov 2022 14:08:01 -0400
From: Johannes Weiner <hannes@cmpxchg.org>
To: Minchan Kim
Cc: Sergey Senozhatsky, Nhat Pham, akpm@linux-foundation.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, ngupta@vflare.org,
 sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com
Subject: Re: [PATCH 2/5] zsmalloc: Consolidate zs_pool's migrate_lock and size_class's locks
References: <20221026200613.1031261-1-nphamcs@gmail.com>
 <20221026200613.1031261-3-nphamcs@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

On Thu, Nov 03, 2022 at 08:53:29AM -0700, Minchan Kim wrote:
> On Thu, Nov 03, 2022 at 11:18:04AM -0400, Johannes Weiner wrote:
> > On Wed, Nov 02, 2022 at 02:36:35PM -0700, Minchan Kim wrote:
> > > On Wed, Nov 02, 2022 at 12:28:56PM +0900,
> > > Sergey Senozhatsky wrote:
> > > > On (22/10/26 13:06), Nhat Pham wrote:
> > > > >  struct size_class {
> > > > > -	spinlock_t lock;
> > > > >  	struct list_head fullness_list[NR_ZS_FULLNESS];
> > > > >  	/*
> > > > >  	 * Size of objects stored in this class. Must be multiple
> > > > > @@ -247,8 +245,7 @@ struct zs_pool {
> > > > >  #ifdef CONFIG_COMPACTION
> > > > >  	struct work_struct free_work;
> > > > >  #endif
> > > > > -	/* protect page/zspage migration */
> > > > > -	rwlock_t migrate_lock;
> > > > > +	spinlock_t lock;
> > > > >  };
> > > >
> > > > I'm not in love with this, to be honest. One big pool lock instead
> > > > of 255 per-class locks doesn't look attractive, as one big pool
> > > > lock is going to be hammered quite a lot when zram is used, e.g.
> > > > as a regular block device with a file system under heavy parallel
> > > > writes/reads.
> >
> > TBH the class always struck me as an odd scope to split the lock.
> > Lock contention depends on how variable the compression rate of the
> > hottest incoming data is, which is unpredictable from a user POV.
> >
> > My understanding is that the primary use case for zram is swapping,
> > and the pool lock is the same granularity as the swap locking.
>
> People use zram to store cached object files on build servers.

Oh, interesting. We can try with a kernel build directory on zram.

> > Do you have a particular one in mind? (I'm thinking journaled ones
> > are not of much interest, since their IO tends to be fairly
> > serialized.)
> >
> > btrfs?
>
> I am not sure what FSes others are using, but at least for me, just
> plain ext4.

Okay, we can test with both.

> > > I am also worried that the LRU stuff should be part of the
> > > allocator instead of a higher level.
> >
> > I'm sorry, but that's not a reasonable objection.
> >
> > These patches implement a core feature of being a zswap backend,
> > using standard LRU and locking techniques established by the other
> > backends.
> >
> > I don't disagree that it would be nicer if zswap had a strong
> > abstraction for backend pages and a generalized LRU. But that is
> > major surgery on a codebase of over 6,500 lines. It's not a
> > reasonable ask to change all that first before implementing a basic
> > feature that's useful now.
>
> With the same logic, folks added LRU logic into their allocators
> without considering the effort of moving the LRU into an upper layer.
>
> And the trend is still going on, since I have seen people trying to
> add more allocators multiple times. So if it's not a reasonable ask
> to consider, we can't stop the trend in the end.

So there is actually an ongoing effort to do that. Yosry and I have
spent quite some time, over email and at Plumbers, coming up with an
LRU design that's independent from compression policy.

My concern is more about the order of doing things:

1. The missing writeback support is a gaping hole in zsmalloc, which
   affects production systems. A generalized LRU list is a good idea,
   but it's a huge task that from a user POV really is not critical.
   Even from a kernel dev / maintainer POV, there are bigger fish to
   fry in the zswap code base and the backends than this.

2. Refactoring existing functionality is much easier than writing
   generalized code that simultaneously enables new behavior. zsmalloc
   is the most complex of our backends. To make its LRU writeback
   work, we had to patch zswap's ->map ordering to accommodate it, for
   example. Such tricky changes are easier to make and test
   incrementally.

   The generalized LRU project will hugely benefit from already having
   a proven writeback implementation in zsmalloc, because then all the
   requirements in zswap and zsmalloc will be in black and white.

> > I get that your main interest is zram, and so this feature isn't of
> > interest to you.
> > But zram isn't the only user, nor is it the primary
>
> I am interested in the feature, but my interest is more in a general
> swap layer managing the LRU, so that it could support any hierarchy
> among swap devices, not only zswap.

I think we're on the same page about the longer-term goals.