From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Date: Thu, 20 Jul 2023 20:34:25 +0900
Subject: Re: [RFC PATCH v2 00/21] mm/zsmalloc: Split zsdesc from struct page
To: Yosry Ahmed
Cc: Sergey Senozhatsky, Minchan Kim, Andrew Morton, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Matthew Wilcox, Mike Rapoport
References: <20230713042037.980211-1-42.hyeyoo@gmail.com> <20230720071826.GE955071@google.com>
On Thu, Jul 20, 2023 at 4:55 PM Yosry Ahmed wrote:
>
> On Thu, Jul 20, 2023 at 12:18 AM Sergey Senozhatsky wrote:
> >
> > On (23/07/13 13:20), Hyeonggon Yoo wrote:
> > > The purpose of this series is to define a dedicated memory descriptor
> > > for zsmalloc, instead of re-using various fields of struct page.
> > > This is a part of the effort to reduce the size of struct page to
> > > unsigned long and enable dynamic allocation of memory descriptors.
> > >
> > > While [1] outlines this ultimate objective, the current use of struct
> > > page is highly dependent on its definition, making it challenging to
> > > separately allocate memory descriptors.
> >
> > I glanced through the series and it all looks pretty straightforward to
> > me. I'll have a closer look. And we definitely need Minchan to ACK it.
> >
> > > Therefore, this series introduces a new descriptor for zsmalloc, called
> > > zsdesc. It overlays struct page for now, but will eventually be
> > > allocated independently in the future.
> >
> > So I don't expect zsmalloc memory usage to increase. On one hand, for
> > each physical page that a zspage consists of we will allocate a zsdesc
> > (extra bytes), but at the same time struct page gets slimmer. So we
> > should be even, or am I wrong?
>
> Well, it depends. Here is my understanding (which may be completely wrong):
>
> The end goal would be to have an 8-byte memdesc for each order-0 page,
> and then allocate a specialized struct per folio according to the use
> case. In this case, we would have a memdesc and a zsdesc for each
> order-0 page. If sizeof(zsdesc) is 64 bytes (on 64-bit), then it's a
> net loss. The savings only start kicking in with higher-order folios.
> As of now, zsmalloc only uses order-0 pages as far as I can tell, so
> the usage would increase if I understand correctly.

I partially agree that the point of the memdesc work is allocating a
use-case-specific descriptor per folio, but I thought the primary gain
from memdesc comes from anon and file pages (where high-order pages are
more usable), rather than from zsmalloc.

And I believe enabling a memory descriptor per folio would be impossible
(or inefficient) if zsmalloc and other subsystems kept using struct page
in the current way (or please tell me I'm wrong?).
So I expect the primary gain to come from high-order anon/file folios,
while this series is a prerequisite for them to work sanely.

> It seems to me though that sizeof(zsdesc) is actually 56 bytes (on
> 64-bit), so sizeof(zsdesc) + sizeof(memdesc) would be equal to the
> current size of struct page. If that's true, then there is no loss,

Yeah, zsdesc would be 56 bytes on 64-bit CPUs, as the memcg_data field
is not used in zsmalloc. More fields in the current struct page might
not be needed in the future, although it's hard to say at the moment.
But either way, it's not a loss.

> and there's potential gain if we start using higher-order folios in
> zsmalloc in the future.

AFAICS zsmalloc should work even when system memory is fragmented, so
we may implement fallback allocation (as currently discussed in the
large anon folios thread).

It might work, but IMHO the purpose of this series is to enable memdesc
for large anon/file folios, rather than to see a large gain in zsmalloc
itself (but even for zsmalloc, it's not a loss).

> (That is of course unless we want to maintain cache line alignment for
> the zsdescs, then we might end up using 64 bytes anyway.)

We already don't require cache line alignment for struct page; the
current alignment requirement comes from SLUB's cmpxchg128 operation,
not from cache line alignment.

I might be wrong in some aspects, so please tell me if I am. And thank
you and Sergey for taking a look at this!

--
Hyeonggon