Date: Thu, 14 Mar 2024 09:26:51 +0100
From: Jan Kara <jack@suse.cz>
To: Chuanhua Han
Cc: Jan Kara, Chris Li, linux-mm, lsf-pc@lists.linux-foundation.org,
	ryan.roberts@arm.com, 21cnbao@gmail.com, david@redhat.com
Subject: Re: [Lsf-pc] [LSF/MM/BPF TOPIC] Swap Abstraction "the pony"
Message-ID: <20240314082651.ckfpp2tyslq2hl2c@quack3>
References: <039190fb-81da-c9b3-3f33-70069cdb27b0@oppo.com>
 <20240307140344.4wlumk6zxustylh6@quack3>
 <8da6a093-346b-35cd-818a-a82abfa6a930@oppo.com>
In-Reply-To: <8da6a093-346b-35cd-818a-a82abfa6a930@oppo.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
On Fri 08-03-24 10:02:20, Chuanhua Han wrote:
> On 2024/3/7 22:03, Jan Kara wrote:
> > On Thu 07-03-24 15:56:57, Chuanhua Han via Lsf-pc wrote:
> >> On 2024/3/1 17:24, Chris Li wrote:
> >>> In last year's LSF/MM I talked about a VFS-like swap system. That is
> >>> the pony that was chosen. However, I did not have much chance to go
> >>> into details.
> >>>
> >>> This year, I would like to discuss what it takes to re-architect the
> >>> whole swap back end from scratch.
> >>>
> >>> Let's start from the requirements for the swap back end.
> >>>
> >>> 1) Support the existing swap usage (not the implementation).
> >>>
> >>> Some other design goals:
> >>>
> >>> 2) Low per-swap-entry memory usage.
> >>>
> >>> 3) Low IO latency.
> >>>
> >>> What are the functions the swap system needs to support?
> >>>
> >>> At the device level, the swap system needs to support a list of swap
> >>> devices with a priority order. Swap devices of the same priority get
> >>> round-robin writes. Swap device types include zswap, zram, SSD,
> >>> spinning hard disk, and a swap file in a file system.
> >>>
> >>> At the swap entry level, here is the list of existing swap entry usage:
> >>>
> >>> * Swap entry allocation and free. Each swap entry needs to be
> >>>   associated with a location of the disk space in the swapfile
> >>>   (offset of the swap entry).
> >>> * Each swap entry needs to track the map count of the entry. (swap_map)
> >>> * Each swap entry needs to be able to find the associated memory
> >>>   cgroup. (swap_cgroup_ctrl->map)
> >>> * Swap cache: look up folio/shadow from a swap entry.
> >>> * Swap page writes through a swapfile in a file system other than a
> >>>   block device. (swap_extent)
> >>> * Shadow entries. (stored in the swap cache)
> >>>
> >>> Any new swap back end might have a different internal implementation,
> >>> but needs to support the above usage.
> >>> For example, using an existing file system as the swap backend, with
> >>> a per-VMA or per-swap-entry mapping to a file, would mean it needs an
> >>> additional data structure to track swap_cgroup_ctrl, on top of the
> >>> size of the file inode. It would be challenging to meet design goals
> >>> 2) and 3) using another file system as it is.
> >>>
> >>> I am considering grouping the different swap entry data into one
> >>> single struct and dynamically allocating it, so there is no upfront
> >>> allocation of swap_map.
> >>>
> >>> For swap entry allocation: the current kernel supports swapping out
> >>> order-0 or PMD-order pages.
> >>>
> >>> There are some discussions and patches that add swap out for folio
> >>> sizes in between (mTHP):
> >>>
> >>> https://lore.kernel.org/linux-mm/20231025144546.577640-1-ryan.roberts@arm.com/
> >>>
> >>> and swap in for mTHP:
> >>>
> >>> https://lore.kernel.org/all/20240229003753.134193-1-21cnbao@gmail.com/
> >>>
> >>> The introduction of swapping different orders of pages will further
> >>> complicate the swap entry fragmentation issue. The swap back end has
> >>> no way to predict the life cycle of the swap entries. Repeatedly
> >>> allocating and freeing swap entries of different sizes will fragment
> >>> the swap entry array. If we can't allocate a contiguous swap entry
> >>> for an mTHP, it will have to be split to a smaller size to perform
> >>> the swap in and out.
> >>>
> >>> Current swap only supports 4K pages or PMD-size pages. Adding the
> >>> other in-between sizes greatly increases the chance of fragmenting
> >>> the swap entry space. When there is no more contiguous swap entry
> >>> space for an mTHP, it forces the mTHP to split into 4K pages. If we
> >>> don't solve the fragmentation issue, it will be a constant source of
> >>> mTHP splits.
> >>>
> >>> Another limitation I would like to address is that swap_writepage can
> >>> only write out IO in one contiguous chunk, and is not able to perform
> >>> non-contiguous IO.
> >>> When the swapfile is close to full, it is likely the unused entries
> >>> will be spread across different locations. It would be nice to be
> >>> able to read and write large folios using discontiguous disk IO
> >>> locations.
> >>>
> >>> Some possible ideas for the fragmentation issue:
> >>>
> >>> a) A buddy allocator for swap entries, similar to the buddy allocator
> >>> for memory. We can use a buddy allocator system for swap entries to
> >>> keep low-order swap entries from fragmenting too much of the
> >>> high-order swap entry space. It should greatly reduce the
> >>> fragmentation caused by allocating and freeing swap entries of
> >>> different sizes. However, the buddy allocator has its own limits as
> >>> well. Unlike system memory, which we can move and compact, there is
> >>> no rmap for swap entries, so it is much harder to move a swap entry
> >>> to another disk location. A buddy allocator for swap will therefore
> >>> help, but not solve all the fragmentation issues.
> >> I have an idea here😁
> >>
> >> Each swap device is divided into multiple chunks, and each chunk is
> >> dedicated to allocations of a single order (the order of the folio
> >> being swapped out). This can solve the fragmentation problem, is much
> >> simpler than a buddy allocator and easier to implement, and is
> >> compatible with multiple sizes, similar to a small slab allocator.
> >>
> >> 1) Add structure members
> >> In the swap_info_struct structure, we only need to add an offset
> >> array representing the search offset for each order, e.g.:
> >>
> >> #define MTHP_NR_ORDER 9
> >>
> >> struct swap_info_struct {
> >>     ...
> >>     long order_off[MTHP_NR_ORDER];
> >>     ...
> >> };
> >>
> >> Note: order_off = -1 indicates that this order is not supported.
> >>
> >> 2) Initialize
> >> Set the proportion of the swap device occupied by each order. For the
> >> sake of simplicity, assume there are 8 kinds of orders.
> >> Number of slots occupied by each order: chunk_size = 1/8 * maxpages
> >> (maxpages indicates the maximum number of available slots in the
> >> current swap device)
> > Well, but then if you fill in the space of a particular order and need
> > to swap out a page of that order, what do you do? Return ENOSPC
> > prematurely?
> If we swap out a subpage of a large folio (due to a split of the large
> folio), we simply search for a free swap entry starting from
> order_off[0].

I meant: what are you going to do if you want to swap out a 2MB huge page
but you don't have any free swap entry of the appropriate order? History
shows that these schemes, where you partition the available space into
buckets of pages of different orders, tend to fragment rather quickly, so
you also need to implement some defragmentation / compaction scheme, and
once you do that you are at the complexity of a standard filesystem block
allocator. That is all I wanted to point at :)

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR