Date: Tue, 3 Jun 2025 00:17:58 +0900
From: YoungJun Park <youngjun.park@lge.com>
To: Kairui Song
Cc: Nhat Pham, linux-mm@kvack.org, akpm@linux-foundation.org, hannes@cmpxchg.org,
	hughd@google.com, yosry.ahmed@linux.dev, mhocko@kernel.org,
	roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev,
	len.brown@intel.com, chengming.zhou@linux.dev, chrisl@kernel.org,
	huang.ying.caritas@gmail.com, ryan.roberts@arm.com, viro@zeniv.linux.org.uk,
	baohua@kernel.org, osalvador@suse.de, lorenzo.stoakes@oracle.com,
	christophe.leroy@csgroup.eu, pavel@kernel.org, kernel-team@meta.com,
	linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-pm@vger.kernel.org,
	peterx@redhat.com, gunho.lee@lge.com,
	taejoon.song@lge.com, iamjoonsoo.kim@lge.com
Subject: Re: [RFC PATCH v2 00/18] Virtual Swap Space
References: <20250429233848.3093350-1-nphamcs@gmail.com>

On Mon, Jun 02, 2025 at 12:14:53AM +0800, Kairui Song wrote:
> On Sun, Jun 1, 2025 at 8:56 PM YoungJun Park wrote:
> >
> > On Fri, May 30, 2025 at 09:52:42AM -0700, Nhat Pham wrote:
> > > On Thu, May 29, 2025 at 11:47 PM YoungJun Park wrote:
> > > >
> > > > On Tue, Apr 29, 2025 at 04:38:28PM -0700, Nhat Pham wrote:
> > > > > Changelog:
> > > > > * v2:
> > > > >   * Use a single atomic type (swap_refs) for reference counting
> > > > >     purpose. This brings the size of the swap descriptor from 64 KB
> > > > >     down to 48 KB (25% reduction). Suggested by Yosry Ahmed.
> > > > >   * Zeromap bitmap is removed in the virtual swap implementation.
> > > > >     This saves one bit per physical swapfile slot.
> > > > >   * Rearrange the patches and the code change to make things more
> > > > >     reviewable. Suggested by Johannes Weiner.
> > > > >   * Update the cover letter a bit.
> > > >
> > > > Hi Nhat,
> > > >
> > > > Thank you for sharing this patch series.
> > > > I've read through it with great interest.
> > > >
> > > > I'm part of a kernel team working on features related to multi-tier
> > > > swapping, and this patch set appears quite relevant to our ongoing
> > > > discussions and early-stage implementation.
> > >
> > > May I ask - what's the use case you're thinking of here? Remote swapping?
> > >
> >
> > Yes, that's correct.
> > Our usage scenario includes remote swap, and we're experimenting with
> > assigning swap tiers per cgroup in order to improve performance in
> > specific scenarios on our target devices.
> >
> > We've explored several approaches and PoCs around this, and in the
> > process of evaluating whether our direction could eventually be aligned
> > with the upstream kernel, I came across your patchset and wanted to ask
> > whether similar efforts have been discussed or attempted before.
> >
> > > >
> > > > I had a couple of questions regarding the future direction.
> > > >
> > > > > * Multi-tier swapping (as mentioned in [5]), with transparent
> > > > >   transferring (promotion/demotion) of pages across tiers (see [8] and
> > > > >   [9]). Similar to swapoff, with the old design we would need to
> > > > >   perform the expensive page table walk.
> > > >
> > > > Based on the discussion in [5], it seems there was some exploration
> > > > around enabling per-cgroup selection of multiple tiers.
> > > > Do you envision the current design evolving in a similar direction
> > > > to those past discussions, or is there a different direction you're
> > > > aiming for?
> > >
> > > IIRC, that past design focused on the interface aspect of the problem,
> > > but never actually touched the mechanism to implement a multi-tier
> > > swapping solution.
> > >
> > > The simple reason is it's impossible, or at least highly inefficient,
> > > to do it in the current design, i.e. without virtualizing swap. Storing
> >
> > As you pointed out, there are certainly inefficiencies in supporting
> > this use case with the current design, but if there is a valid use case,
> > I believe there's room for it to be supported in the current model
> > (possibly in a less optimized form) until a virtual swap device becomes
> > available and provides a more efficient solution.
> > What do you think?
>
> Hi All,
>
> I'd like to share some info from my side. Currently we have an internal
> solution for multi-tier swap, implemented based on ZRAM and writeback:
> four compression levels and multiple block-layer levels. The ZRAM table
> serves a similar role to the swap table in the "swap table series" or
> the virtual layer here.
>
> We hacked the BIO layer to let ZRAM be cgroup aware, so it even supports
> per-cgroup priority and per-cgroup writeback control, and it has worked
> perfectly fine in production.
>
> The interface looks something like this:
> /sys/fs/cgroup/cg1/zram.prio: [1-4]
> /sys/fs/cgroup/cg1/zram.writeback_prio [1-4]
> /sys/fs/cgroup/cg1/zram.writeback_size [0 - 4K]
>
> It's really nothing fancy or complex: the four priorities are simply the
> four ZRAM compression streams that are already upstream, and you can
> simply hardcode four *bdev pointers in "struct zram" and reuse the bits,
> then chain the write bio with a new underlying bio... Getting the
> priority info of a cgroup is even simpler once ZRAM is cgroup aware.
>
> All interfaces can be adjusted dynamically at any time (e.g. by an
> agent), and already swapped-out pages won't be touched. The block
> devices are specified in ZRAM's sysfs files during swapon.
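
Just to make sure I read the interface correctly: below is a minimal
user-space sketch of how an agent could drive those per-cgroup knobs,
going only by the paths and ranges you list above. It is illustrative
only (not your actual implementation), and the exact semantics and units
of the values are assumed.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write a small string value into a per-cgroup attribute file. */
static int write_attr(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror(path);
		return -1;
	}
	if (write(fd, val, strlen(val)) < 0) {
		perror(path);
		close(fd);
		return -1;
	}
	close(fd);
	return 0;
}

int main(void)
{
	/* Select one of the four ZRAM compression streams for cg1 (1-4). */
	write_attr("/sys/fs/cgroup/cg1/zram.prio", "1");
	/* Select the writeback tier for cg1 (1-4, semantics assumed). */
	write_attr("/sys/fs/cgroup/cg1/zram.writeback_prio", "2");
	/* Cap per-cgroup writeback (0-4K range as described, units assumed). */
	write_attr("/sys/fs/cgroup/cg1/zram.writeback_size", "4096");
	return 0;
}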
>
> It's easy to implement, but not a good idea for upstream at all:
> redundant layers, and performance is bad (if not optimized):
> - it breaks SYNCHRONIZE_IO, causing a huge slowdown, so we removed
>   SYNCHRONIZE_IO completely, which actually improved the performance in
>   every aspect (I've been trying to upstream this for a while);
> - ZRAM's block device allocator is just not good (just a bitmap), so we
>   want to use the SWAP allocator directly (which I'm also trying to
>   upstream with the swap table series);
> - and many other bits and pieces, like bio batching, are kind of broken,
>   busy loops due to the ZRAM_WB bit, etc...
> - lacking support for things like effective migration/compaction;
>   doable, but it looks horrible.

That's interesting; we've explored a similar idea as well, although not
by attaching it to ZRAM. Instead, our concept involved creating a
separate block device capable of performing the tiering functionality,
and using it as follows:

1. Prepare a block device that can manage multiple backend block devices.
2. Perform swapon on this block device.
3. Within the block device, use cgroup awareness to carry out tiered
   swap operations across the prepared backend devices.

However, we ended up postponing this approach as a secondary option,
mainly due to the following concerns:

1. The idea of allocating physical slots but managing them internally as
   logical slots felt inefficient.
2. Embedding cgroup awareness within a block device seemed like a layer
   violation.

> So I definitely don't like this band-aid solution, but hey, it works.
> I'm looking forward to replacing it with native upstream support.
> That's one of the motivations behind the swap table series, which I
> think would resolve the problems in an elegant and clean way upstream.
> The initial tests do show it has a much lower overhead and cleans up
> SWAP.
> But maybe this is kind of similar to the "less optimized form" you are
> talking about? As I mentioned, I'm already trying to upstream some nice
> parts of it, and hopefully replace it with an upstream solution finally.
>
> I can try to upstream other parts of it if there are people really
> interested, but I strongly recommend that we focus on the right approach
> instead and not waste time on that and spam the mailing list.

I am in agreement with the points you've made.

> I have no special preference on how the final upstream interface should
> look. But currently SWAP devices already have priorities, so maybe we
> should just make use of that.

I have been exploring an interface design that leverages the existing
swap priority mechanism, and I believe it would be valuable to share this
for further discussion and feedback. As mentioned in my earlier response
to Nhat, I intend to submit this as an RFC to solicit broader input from
the community.

Best regards,
YoungJun Park
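
P.S. For concreteness, the existing priority mechanism I am referring to
is the one already exposed through swapon(2) with SWAP_FLAG_PREFER. A
minimal user-space sketch of registering two tiers with different
priorities follows; the device paths are placeholders, and this only
shows today's global (non-cgroup) behaviour that the RFC would build on.

#include <stdio.h>
#include <sys/swap.h>

/* Activate a swap device with an explicit priority (needs CAP_SYS_ADMIN). */
static int add_swap_tier(const char *dev, int prio)
{
	int flags = SWAP_FLAG_PREFER |
		    ((prio << SWAP_FLAG_PRIO_SHIFT) & SWAP_FLAG_PRIO_MASK);

	if (swapon(dev, flags) != 0) {
		perror(dev);
		return -1;
	}
	return 0;
}

int main(void)
{
	/* Higher priority is used first: a fast local device ... */
	add_swap_tier("/dev/zram0", 100);
	/* ... then a slower (e.g. remote) backing device. */
	add_swap_tier("/dev/vdb", 10);
	return 0;
}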