From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kairui Song <ryncsn@gmail.com>
Date: Wed, 27 Mar 2024 19:04:45 +0800
Subject: Re: [RFC PATCH 00/10] mm/swap: always use swap cache for synchronization
To: Ryan Roberts
Cc: "Huang, Ying", linux-mm@kvack.org, Chris Li, Minchan Kim, Barry Song, Yu Zhao, SeongJae Park, David Hildenbrand, Yosry Ahmed, Johannes Weiner, Matthew Wilcox, Nhat Pham, Chengming Zhou, Andrew Morton, linux-kernel@vger.kernel.org
In-Reply-To: <58e4f0c2-99d1-42b9-ab70-907cf35ac1a7@arm.com>
References: <20240326185032.72159-1-ryncsn@gmail.com> <878r24o07p.fsf@yhuang6-desk2.ccr.corp.intel.com> <58e4f0c2-99d1-42b9-ab70-907cf35ac1a7@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
On Wed, Mar 27, 2024 at 4:27 PM Ryan Roberts wrote:
>
> [...]
>
> >>> Test 1, sequential swapin/out of 30G zero page on ZRAM:
> >>>
> >>>                Before (us)     After (us)
> >>> Swapout:       33619409        33886008
> >>> Swapin:        32393771        32465441 (- 0.2%)
> >>> Swapout (THP): 7817909         6899938  (+11.8%)
> >>> Swapin (THP):  32452387        33193479 (- 2.2%)
> >>
> >> If my understanding were correct, we don't have swapin (THP) support,
> >> yet. Right?
> >
> > Yes, this series doesn't change how swapin/swapout works with THP in
> > general, but now THP swapout will leave shadows with large order, so
> > they need to be split upon swapin. That will slow down later swapins
> > a little bit, but I think it's worth it.
> >
> > If we can do THP swapin in the future, this split on swapin can be
> > saved to make the performance even better.
>
> I'm confused by this (clearly my understanding of how this works is incorrect).
> Perhaps you can help me understand:
>
> When you talk about "shadows" I assume you are referring to the swap cache? It
> was my understanding that swapping out a THP would always leave the large folio
> in the swap cache, so this is nothing new?
>
> And on swap-in, if the target page is in the swap cache, even if part of a large
> folio, why does it need to be split? I assumed the single page would just be
> mapped? (and if all the other pages subsequently fault, then you end up with a
> fully mapped large folio back in the process)?
>
> Perhaps I'm misunderstanding what "shadows" are?

Hi Ryan,

My bad, I haven't made this clear.

Ying has posted the link to the commit that added "shadow" support for
anon pages; shadows have become a very important part of LRU activation /
workingset tracking. Basically, when a folio is removed from the cache
xarray (e.g. after swap writeback is done), instead of clearing the xarray
slot, an unsigned long / void * value is stored in it, recording some info
that is used when a refault happens to decide how to handle the folio on
the LRU / workingset side.

And about large folios in the swap cache: if you look at the current
version of add_to_swap_cache in mainline (it adds a folio of any order
into the swap cache), it calls xas_create_range(&xas), which fills all the
xarray slots in the entire range covered by the folio. But the xarray
supports multi-index stores, making use of the nature of the radix tree to
save a lot of slots. E.g. for a 2M THP, previously 8 + 512 slots (8 extra
xa nodes) were needed to store it; after this series only 8 slots are
needed, by using a multi-index store (not sure if I did the math right).

Same for shadows: when a folio is being deleted, __delete_from_swap_cache
currently walks the xarray with xas_next and updates all 8 + 512 slots one
by one; after this series only 8 stores are needed (ignoring
fragmentation).

And upon swapin, I was talking about swapping in one sub-page of a THP
folio after the folio itself is gone, leaving a few multi-index shadow
slots behind.
The multi-index slots need to be split (a multi-index slot has to be
updated as a whole, or split first; __filemap_add_folio handles such a
split). In this series I reuse that routine from __filemap_add_folio, so
without much extra work it also works well for the swap cache. Rough
sketches of both parts below, in case that helps.
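
To make the xarray part a bit more concrete, here is a minimal sketch of
the multi-index store idea. This is only my illustration, not code from
the series: the helper name is made up, locking and error handling are
trimmed, and it assumes CONFIG_XARRAY_MULTI.

/*
 * Store one large folio into the swap cache xarray as a single
 * multi-index entry covering all 2^order indices, instead of
 * xas_create_range() plus one store per sub-page slot.
 * Hypothetical helper, for illustration only.
 */
static int swapcache_store_folio_sketch(struct address_space *mapping,
					struct folio *folio, pgoff_t idx)
{
	XA_STATE(xas, &mapping->i_pages, idx);

	/* One entry of folio_order() size instead of 2^order slots. */
	xas_set_order(&xas, idx, folio_order(folio));

	do {
		xas_lock_irq(&xas);
		xas_store(&xas, folio);
		xas_unlock_irq(&xas);
	} while (xas_nomem(&xas, GFP_KERNEL));

	return xas_error(&xas);
}

The usage simply mirrors what __filemap_add_folio already does for the
page cache today (xas_set_order() followed by xas_store()).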
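
And a similar sketch of the split on the swapin side: the swap cache may
hold one high-order shadow value (xa_is_value()) left behind by a
swapped-out THP, and that entry has to be split down before an order-0
folio can be stored for a single sub-page. Again, only my illustration of
the idea (the series reuses the __filemap_add_folio routine for this);
helper name made up, error paths trimmed.

/*
 * Replace a (possibly multi-index) shadow entry with an order-0 folio,
 * splitting the shadow first if it covers a larger range.
 * Hypothetical helper, for illustration only.
 */
static int swapcache_replace_shadow_sketch(struct address_space *mapping,
					   struct folio *folio, pgoff_t idx,
					   void **shadowp)
{
	XA_STATE(xas, &mapping->i_pages, idx);
	unsigned int order = xa_get_order(&mapping->i_pages, idx);
	void *old;

	xas_set_order(&xas, idx, folio_order(folio));

	/* Pre-allocate the nodes needed to split the high-order entry. */
	if (order > folio_order(folio))
		xas_split_alloc(&xas, xa_load(&mapping->i_pages, idx),
				order, GFP_KERNEL);

	xas_lock_irq(&xas);
	old = xas_load(&xas);
	if (xa_is_value(old)) {
		if (shadowp)
			*shadowp = old;	/* hand the shadow to workingset code */
		if (order > folio_order(folio)) {
			/* One multi-index entry becomes per-index entries. */
			xas_split(&xas, old, order);
			xas_reset(&xas);
		}
	}
	xas_store(&xas, folio);	/* now overwrites only this index */
	xas_unlock_irq(&xas);

	return xas_error(&xas);
}

A real version would recheck the entry's order under the lock, the way
__filemap_add_folio does, since the entry may change before the lock is
taken.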