From: Jared Hulbert
Date: Mon, 4 Mar 2024 14:03:54 -0800
Subject: Re: [LSF/MM/BPF TOPIC] Swap Abstraction "the pony"
To: Kairui Song
Cc: Chris Li, lsf-pc@lists.linux-foundation.org, linux-mm, ryan.roberts@arm.com, David Hildenbrand, Barry Song <21cnbao@gmail.com>, Chuanhua Han

On Mon, Mar 4, 2024 at 10:44 AM Kairui Song wrote:
>
> On Fri, Mar 1, 2024 at 5:27 PM Chris Li wrote:
> >
> > In last year's LSF/MM I talked about a VFS-like swap system. That is
> > the pony that was chosen.
> > However, I did not have much chance to go into details.
> >
> > This year, I would like to discuss what it takes to re-architect the
> > whole swap back end from scratch?
>
> Very interesting topic!
> I have been stepping into many pitfalls and existing issues of SWAP
> recently, and things are complicated; this definitely needs more
> attention.
>
> >
> > Let's start from the requirements for the swap back end.
> >
> > 1) support the existing swap usage (not the implementation).
> >
> > Some other design goals:
> >
> > 2) low per swap entry memory usage.
> >
> > 3) low io latency.
> >
> > What are the functions the swap system needs to support?
> >
> > At the device level, swap systems need to support a list of swap files
> > with a priority order. Swap devices of the same priority will do
> > round-robin writing across those devices. The swap device types
> > include zswap, zram, SSD, spinning hard disk, and swap file in a file
> > system.
> >
> > At the swap entry level, here is the list of existing swap entry usage:
> >
> > * Swap entry allocation and free. Each swap entry needs to be
> > associated with a location of the disk space in the swapfile (offset
> > of the swap entry).
> > * Each swap entry needs to track the map count of the entry. (swap_map)
> > * Each swap entry needs to be able to find the associated memory
> > cgroup. (swap_cgroup_ctrl->map)
> > * Swap cache. Look up folio/shadow from swap entry.
> > * Swap page writes through a swapfile in a file system other than a
> > block device. (swap_extent)
> > * Shadow entries. (stored in the swap cache)
> >
> > Any new swap back end might have a different internal implementation,
> > but it needs to support the above usage. For example, using an
> > existing file system as the swap backend, with a per-vma or
> > per-swap-entry map to a file, would mean it needs an additional data
> > structure to track the swap_cgroup_ctrl, combined with the size of the
> > file inode. It would be challenging to meet design goals 2) and 3)
> > using another file system as it is.
> >
> > I am considering grouping the different swap entry data into one
> > single struct and dynamically allocating it, so there is no upfront
> > allocation of swap_map.
>
> Just some modest ideas about this ...
>
> Besides the usage, I noticed we currently already have the following
> metadata reserved for SWAP:
> SWAP map (array of char)
> SWAP shadow (XArray of pointer/long)
> SWAP cgroup map (array of short)
> And ZSWAP has its own data.
> Also the folio->private (SWAP entry)
> PTE (SWAP entry)
>
> Maybe something new can combine and make better use of these, and also
> reduce redundancy. E.g. the SWAP shadow (assuming it's not shrunk)
> contains cgroup info already; a folio in the swap cache has its
> ->private pointing to the SWAP entry while mapping/index are all empty.
> These may indicate some room for smarter usage.
>
> One easy approach might be making better use of the current swap cache
> xarray. We can never skip it, even for the direct swap-in path
> (SYNC_IO). I'm working on it (not for a whole new swap abstraction,
> just trying to resolve some other issues and optimize things) and so
> far it seems OK. With some optimizations, performance is even better
> than before, as we are already doing lookup and shadow cleaning in the
> current kernel.
>
> And considering the XArray is capable of storing ranged data with
> power-of-two sizes, this gives us a nice tool to store grouped swap
> metadata for folios and reduce memory overhead.
>
> Following this idea we may be able to have a smoother, progressive
> transition to a better design of SWAP (e.g.
> start with storing more complex things other than folio/shadow, then
> make it more backend-specific, add features bit by bit); it is less
> likely to break things and we can test the stability and performance
> step by step.
>
> > For swap entry allocation, the current kernel supports swapping out
> > 0-order or PMD-order pages.
> >
> > There are some discussions and patches that add swap out for folio
> > sizes in between (mTHP):
> >
> > https://lore.kernel.org/linux-mm/20231025144546.577640-1-ryan.roberts@arm.com/
> >
> > and swap in for mTHP:
> >
> > https://lore.kernel.org/all/20240229003753.134193-1-21cnbao@gmail.com/
> >
> > The introduction of swapping out pages of different orders will
> > further complicate the swap entry fragmentation issue. The swap back
> > end has no way to predict the life cycle of the swap entries.
> > Repeated allocation and freeing of swap entries of different sizes
> > will fragment the swap entry array. If we can't allocate contiguous
> > swap entries for an mTHP, we will have to split the mTHP to a smaller
> > size to perform the swap in and out.
> >
> > Current swap only supports 4K pages or PMD-size pages. Adding the
> > other in-between sizes greatly increases the chance of fragmenting
> > the swap entry space. When there are no more contiguous swap entries
> > for an mTHP, the mTHP will be forced to split into 4K pages. If we
> > don't solve the fragmentation issue, it will be a constant source of
> > mTHP splits.
> >
> > Another limitation I would like to address is that swap_writepage can
> > only write out IO in one contiguous chunk and is not able to perform
> > non-contiguous IO. When the swapfile is close to full, it is likely
> > the unused entries will be spread across different locations. It
> > would be nice to be able to read and write a large folio using
> > discontiguous disk IO locations.
> >
> > Some possible ideas for the fragmentation issue:
> >
> > a) Buddy allocator for swap entries. Similar to the buddy allocator
> > for memory, we can use a buddy allocator system for swap entries to
> > avoid low-order swap entries fragmenting too much of the high-order
> > swap entry space. It should greatly reduce the fragmentation caused
> > by allocating and freeing swap entries of different sizes. However,
> > the buddy allocator has its own limits as well. Unlike system memory,
> > which we can move and compact, there is no rmap for swap entries, so
> > it is much harder to move a swap entry to another disk location. So
> > the buddy allocator for swap will help, but not solve all of the
> > fragmentation issues.
> >
> > b) Large swap entries. Take a file as an example: a file on a file
> > system can be written to discontinuous disk locations, and the file
> > system is responsible for tracking how to map file offsets into disk
> > locations. A large swap entry can have a similar indirection array
> > mapping out the disk locations for the different subpages within a
> > folio. This allows a large folio to be written out to discontiguous
> > swap entries in the swap file. The array will need to be stored
> > somewhere as part of the overhead. When allocating swap entries for
> > the folio, we can allocate a batch of smaller 4K swap entries into an
> > array, and use this array to read/write the large folio. There will
> > be a lot of plumbing work to get it to work.
> >
> > Solutions a) and b) can work together as well: only use b) if not
> > able to allocate swap entries from a).
>
> Despite the limitation, I think a) is a better approach.
> Non-sequential read/write is very performance unfriendly even for
> ZRAM, so it will be better if the data is contiguous in both RAM and
> SWAP.

Why is it so unfriendly with ZRAM? I'm surprised to hear that. Even
with NVMe SSDs (controversial take here), the penalty for
non-sequential writes, if batched, is not necessarily significant; you
need other factors to be in play in the drive state/usage.

> And if not, something like VMA readahead can already help improve
> performance. But we have seen this have a negative impact with fast
> devices like ZRAM, so it's disabled in the current kernel.
>
> Migration of swap entries is a good thing to have, but the migration
> cost seems too high... I don't have a better idea on this.

One of my issues with the architecture, IIRC, is that though it's
called swap_type/swap_offset in the PTE, it is functionally
swap_partition/swap_offset. The consequence is that there is no
practical way to, for example, migrate swapped pages from one swap
backend to another. Instead we awkwardly do these sorts of things
inside the backend. (A rough sketch of what I mean is at the end of
this mail.)

I need to look at the swap cache xarray (a pointer to where to start
would be welcome). Would it be feasible to enable a redirection there?
(I've also appended a purely hypothetical sketch of the kind of
redirection I have in mind.)
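
To make the swap_partition/swap_offset point concrete, here is a rough
standalone sketch of how a swap PTE encodes a type and an offset. This
is simplified from memory, not the kernel's actual code; the real
helpers live in include/linux/swapops.h and the exact bit layout is
arch- and config-specific, so SWP_TYPE_BITS below is just an assumed
value for illustration.

/*
 * Rough standalone sketch, simplified from include/linux/swapops.h.
 * The real bit layout is arch- and config-specific; SWP_TYPE_BITS here
 * is an assumed value for illustration only.
 */
#include <limits.h>
#include <stdio.h>

#define BITS_PER_LONG   (sizeof(unsigned long) * CHAR_BIT)
#define SWP_TYPE_BITS   5UL                       /* assumption */
#define SWP_OFFSET_BITS (BITS_PER_LONG - SWP_TYPE_BITS)
#define SWP_OFFSET_MASK ((1UL << SWP_OFFSET_BITS) - 1)

typedef struct { unsigned long val; } swp_entry_t;

static swp_entry_t swp_entry(unsigned long type, unsigned long offset)
{
	swp_entry_t e;

	e.val = (type << SWP_OFFSET_BITS) | (offset & SWP_OFFSET_MASK);
	return e;
}

static unsigned long swp_type(swp_entry_t e)
{
	return e.val >> SWP_OFFSET_BITS;
}

static unsigned long swp_offset(swp_entry_t e)
{
	return e.val & SWP_OFFSET_MASK;
}

int main(void)
{
	/*
	 * The "type" is used as an index into swap_info[], i.e. it names
	 * one concrete backing device plus a slot on it.  Moving the data
	 * to a different backend changes the type, which means finding
	 * and rewriting every PTE that holds this entry.
	 */
	swp_entry_t e = swp_entry(1, 12345);

	printf("type=%lu offset=%lu\n", swp_type(e), swp_offset(e));
	return 0;
}

The part that bothers me is the comment in main(): the type field names
a concrete device, so the PTE itself hard-codes which backend holds the
data.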
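
And to be explicit about what I mean by "a redirection", something
morally like the following. All of it is hypothetical; none of these
structures or helpers exist in the kernel today. The PTE (and
folio->private) would hold a stable virtual id, and a small descriptor
(which could conceivably live in, or beside, the swap cache xarray)
would record which backend and slot currently hold the data, so moving
the data between backends only rewrites the descriptor:

/*
 * Purely hypothetical sketch; none of these names exist in the kernel.
 * The idea: the PTE keeps a stable virtual id, and a descriptor records
 * which backend/slot currently holds the data, so migration between
 * backends updates one descriptor instead of every PTE.
 */
struct swap_backend {                    /* zswap, zram, SSD, ... (opaque here) */
	int id;
};

struct swap_desc {                       /* hypothetical per-entry metadata */
	struct swap_backend *backend;    /* who currently holds the data */
	unsigned long        slot;       /* offset within that backend   */
	int                  map_count;  /* would subsume swap_map       */
	unsigned short       memcg_id;   /* would subsume the cgroup map */
};

/*
 * Toy lookup table indexed by the virtual id stored in the PTE.  In a
 * real design this mapping could live in (or beside) the swap cache
 * xarray, which is where the "redirection" question above comes from.
 */
#define NR_VIRT_SWAP_IDS 1024
static struct swap_desc *swap_desc_table[NR_VIRT_SWAP_IDS];

static struct swap_desc *swap_desc_lookup(unsigned long virt_id)
{
	return virt_id < NR_VIRT_SWAP_IDS ? swap_desc_table[virt_id] : NULL;
}

/*
 * Migration between backends touches only the descriptor; the virtual
 * id held in the PTEs and in folio->private stays valid throughout.
 */
static int swap_desc_migrate(unsigned long virt_id,
			     struct swap_backend *dst, unsigned long dst_slot)
{
	struct swap_desc *d = swap_desc_lookup(virt_id);

	if (!d)
		return -1;
	/* ... copy or write back the payload from d->backend to dst ... */
	d->backend = dst;
	d->slot = dst_slot;
	return 0;
}

int main(void)
{
	static struct swap_backend zram_dev = { .id = 0 };
	static struct swap_backend ssd_dev  = { .id = 1 };
	static struct swap_desc d = { .backend = &zram_dev, .slot = 42 };

	swap_desc_table[7] = &d;            /* virtual id 7 -> descriptor */
	return swap_desc_migrate(7, &ssd_dev, 99);
}

The obvious cost is one extra level of indirection and the memory for
the descriptors, which is why I want to understand what the swap cache
xarray already gives us before arguing for it.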