From: Nhat Pham <nphamcs@gmail.com>
Date: Sat, 11 Nov 2023 16:06:58 -0800
Subject: Re: [PATCH v4] zswap: memcontrol: implement zswap writeback disabling
To: Chris Li
Cc: Andrew Morton, tj@kernel.org, lizefan.x@bytedance.com, Johannes Weiner, Domenico Cerasuolo, Yosry Ahmed, Seth Jennings, Dan Streetman, Vitaly Wool, mhocko@kernel.org, roman.gushchin@linux.dev, Shakeel Butt, muchun.song@linux.dev, Hugh Dickins, corbet@lwn.net, Konrad Rzeszutek Wilk, senozhatsky@chromium.org, rppt@kernel.org, linux-mm, kernel-team@meta.com, LKML, linux-doc@vger.kernel.org, david@ixit.cz

On Sat, Nov 11, 2023 at 10:22 AM Chris Li wrote:
>
> On Fri, Nov 10, 2023 at 4:10 PM Nhat Pham wrote:
> > > I notice the bool is between two integers.
> > > The mem_cgroup structure has a few bools sprinkled in different
> > > locations. Arranging them together might save a few padding bytes.
> > > We can also consider using bit fields.
> > > It is a very minor point; the condition also existed before your change.
> >
> > This sounds like an optimization worthy of its own patch. Two random
> > thoughts, however:
>
> Sure. I consider this a very minor point as well.
>
> >
> > a) Can this be done at the compiler level? I believe you can reduce
> > the padding required by sorting the fields of a struct by their size,
> > correct? That sounds like a job that a compiler should do for us...
>
> According to the C standard, struct members must be laid out in the
> order they are declared. There is too much code that assumes the
> first member has the same address as the struct. Consider that we use
> structs for DMA descriptors as well, where the memory layout needs to
> match the underlying hardware. Re-ordering the members would be really
> bad there. There are gcc extensions to do structure member
> randomization, but that layout is determined by the randomization
> seed. So the compiler does not have the flexibility to rearrange the
> member order to reduce the padding either.
>

Ah, I see. Then it might be worth tweaking the layout manually (I have
put a toy padding sketch at the end of this mail for reference). But
yeah, we should do this separately from this patch.

> >
> > b) Re: the bitfield idea, some of the fields are CONFIG-dependent (well,
> > like this one). Might be a bit hairier to do it...
>
> You can declare the bit field under a preprocessor condition as well,
> just like a normal declaration. Can you clarify why it is hairier?
> A bitfield does not have an address associated with it, so the
> compiler can actually move the bitfield bits around. You get the
> compiler to do the packing for you in this case.

I see, hmm.

> >
> > > >
> > > >  #endif /* _LINUX_ZSWAP_H */
> > > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > > index e43b5aba8efc..9cb3ea912cbe 100644
> > > > --- a/mm/memcontrol.c
> > > > +++ b/mm/memcontrol.c
> > > > @@ -5545,6 +5545,11 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
> > > >  	WRITE_ONCE(memcg->soft_limit, PAGE_COUNTER_MAX);
> > > >  #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
> > > >  	memcg->zswap_max = PAGE_COUNTER_MAX;
> > > > +
> > > > +	if (parent)
> > > > +		WRITE_ONCE(memcg->zswap_writeback, READ_ONCE(parent->zswap_writeback));
> > > > +	else
> > > > +		WRITE_ONCE(memcg->zswap_writeback, true);
> > >
> > > You can combine these two WRITE_ONCE calls into one:
> > >
> > > bool writeback = !parent || READ_ONCE(parent->zswap_writeback);
> > > WRITE_ONCE(memcg->zswap_writeback, writeback);
> > >
> >
> > Yeah, I originally did something similar, but then decided to do the
> > if-else instead. Honestly, no strong preference here - just felt that
> > the if-else was cleaner at that moment.
>
> One WRITE_ONCE will produce slightly better machine code, with fewer
> memory store instructions. Normally the compiler is allowed to do
> common subexpression elimination and merge the writes. However, here
> you have explicit WRITE_ONCE, so the compiler has to emit two memory
> store instructions, because there are two WRITE_ONCE calls. My
> suggestion will only need one memory store instruction.
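
To double-check my understanding, here is a quick toy sketch of the two
forms (a standalone illustration with stand-in WRITE_ONCE()/READ_ONCE()
macros and a made-up struct, not the actual memcontrol.c code):

/* Stand-ins so the example builds outside the kernel. */
#include <stdbool.h>
#include <stdio.h>

#define WRITE_ONCE(x, val)	(*(volatile typeof(x) *)&(x) = (val))
#define READ_ONCE(x)		(*(volatile typeof(x) *)&(x))

struct toy_memcg {
	bool zswap_writeback;
};

/* Two WRITE_ONCE() calls: the generated code carries a store in each branch. */
static void init_two_stores(struct toy_memcg *memcg, struct toy_memcg *parent)
{
	if (parent)
		WRITE_ONCE(memcg->zswap_writeback,
			   READ_ONCE(parent->zswap_writeback));
	else
		WRITE_ONCE(memcg->zswap_writeback, true);
}

/* One WRITE_ONCE(): the condition is folded into the value, one store emitted. */
static void init_one_store(struct toy_memcg *memcg, struct toy_memcg *parent)
{
	bool writeback = !parent || READ_ONCE(parent->zswap_writeback);

	WRITE_ONCE(memcg->zswap_writeback, writeback);
}

int main(void)
{
	struct toy_memcg parent = { .zswap_writeback = false }, a, b;

	init_two_stores(&a, &parent);
	init_one_store(&b, &parent);
	printf("%d %d\n", a.zswap_writeback, b.zswap_writeback);
	return 0;
}

If I read your point correctly, only init_one_store() compiles down to a
single store of the already-computed boolean.
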
> I agree it is a micro-optimization.
>

Ohh, I did not think about this. Seems like my original version was
more than just a clever one-liner, haha. It is a bit of a
micro-optimization indeed, but if for some reason I need to send a v5
or a fixlet, I'll keep this in mind! Thanks for the explanation, Chris!

> Chris
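
P.S. For reference, the toy padding sketch mentioned above (hypothetical
layouts on a typical LP64 target, not the actual struct mem_cgroup):

#include <stdbool.h>
#include <stdio.h>

/* bools scattered between wider members: each one drags in padding. */
struct scattered {
	long a;
	bool flag1;	/* 1 byte + 7 bytes of padding before 'b' */
	long b;
	bool flag2;	/* 1 byte + 7 bytes of tail padding */
};

/* Same members with the bools grouped: they share one padding region. */
struct grouped {
	long a;
	long b;
	bool flag1;
	bool flag2;	/* 6 bytes of tail padding */
};

/* Bit fields: the flags are packed into a single storage unit. */
struct bit_flags {
	long a;
	long b;
	unsigned int flag1 : 1;
	unsigned int flag2 : 1;
};

int main(void)
{
	/* Typically prints 32, 24 and 24 on x86-64. */
	printf("scattered=%zu grouped=%zu bitfields=%zu\n",
	       sizeof(struct scattered), sizeof(struct grouped),
	       sizeof(struct bit_flags));
	return 0;
}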