From: Suren Baghdasaryan <surenb@google.com>
Date: Wed, 4 Sep 2024 09:08:04 -0700
Subject: Re: [PATCH v2 6/6] alloc_tag: config to store page allocation tag refs in page flags
To: John Hubbard
Cc: Andrew Morton, kent.overstreet@linux.dev, corbet@lwn.net, arnd@arndb.de,
	mcgrof@kernel.org, rppt@kernel.org, paulmck@kernel.org, thuth@redhat.com,
	tglx@linutronix.de, bp@alien8.de, xiongwei.song@windriver.com,
	ardb@kernel.org, david@redhat.com, vbabka@suse.cz, mhocko@suse.com,
	hannes@cmpxchg.org, roman.gushchin@linux.dev, dave@stgolabs.net,
	willy@infradead.org, liam.howlett@oracle.com, pasha.tatashin@soleen.com,
	souravpanda@google.com, keescook@chromium.org, dennis@kernel.org,
	yuzhao@google.com, vvvvvv@google.com, rostedt@goodmis.org,
	iamjoonsoo.kim@lge.com, rientjes@google.com, minchan@google.com,
	kaleshsingh@google.com, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-mm@kvack.org, linux-modules@vger.kernel.org, kernel-team@android.com
In-Reply-To: <70ef75d9-a573-4989-9a9d-c8bc087f212b@nvidia.com>
References: <20240902044128.664075-1-surenb@google.com>
	<20240902044128.664075-7-surenb@google.com>
	<20240901221636.5b0af3694510482e9d9e67df@linux-foundation.org>
	<47c4ef47-3948-4e46-8ea5-6af747293b18@nvidia.com>
	<70ef75d9-a573-4989-9a9d-c8bc087f212b@nvidia.com>

On Tue, Sep 3, 2024 at 7:06 PM 'John Hubbard' via kernel-team wrote:
>
> On 9/3/24 6:25 PM, John Hubbard wrote:
> > On 9/3/24 11:19 AM, Suren Baghdasaryan wrote:
> >> On Sun, Sep 1, 2024 at 10:16 PM Andrew Morton wrote:
> >>> On Sun, 1 Sep 2024 21:41:28 -0700 Suren Baghdasaryan wrote:
> > ...
> >>> We shouldn't be offering things like this to our users. If we cannot
> >>> decide, how can they?
> >>
> >> Thinking about the ease of use, CONFIG_PGALLOC_TAG_REF_BITS is the
> >> hardest one to set. The user does not know how many page allocations
>
> I should probably clarify my previous reply, so here is the more
> detailed version:
>
> >> there are. I think I can simplify this by trying to use all unused
> >> page flag bits for addressing the tags. Then, after compilation we can
>
> Yes.
>
> >> follow the rules I mentioned before:
> >> - If the available bits are not enough to address all kernel page
> >> allocations, we issue an error. The user should disable
> >> CONFIG_PGALLOC_TAG_USE_PAGEFLAGS.
>
> The configuration should disable itself, in this case. But if that is
> too big of a change for now, I suppose we could fall back to an error
> message to the effect of, "please disable CONFIG_PGALLOC_TAG_USE_PAGEFLAGS
> because the kernel build system is still too primitive to do that for
> you". :)

I don't think we can detect this at build time. We need to know how many
page allocations there are, which we find out only after we build the
kernel image (from the size of the section that holds the allocation
tags). Therefore it would have to be a post-build check, and the best we
can do is to generate an error like the one you suggested after we build
the image. The dependency on CONFIG_PAGE_EXTENSION is yet another
complexity: if we auto-disable CONFIG_PGALLOC_TAG_USE_PAGEFLAGS, we would
also have to auto-enable CONFIG_PAGE_EXTENSION if it's not already
enabled. I'll dig around some more to see if there is a better way.
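To make the arithmetic concrete, here is a minimal sketch of the check
being discussed, written as a boot-time check for brevity (as noted
above, the real thing would more likely be a post-build step over the
image). The section symbols (__start_alloc_tags/__stop_alloc_tags) and
NR_UNUSED_PAGEFLAG_BITS are assumed names used for illustration only;
this is not the actual patch:

/*
 * Minimal sketch only -- not the actual patch.  The section bounds and
 * NR_UNUSED_PAGEFLAG_BITS are assumed names used for illustration.
 */
#include <linux/alloc_tag.h>
#include <linux/bitops.h>
#include <linux/init.h>
#include <linux/printk.h>

/* Linker-provided bounds of the section holding the allocation tags
 * (assumed names); the section size is only known once the image is built. */
extern struct alloc_tag __start_alloc_tags[];
extern struct alloc_tag __stop_alloc_tags[];

/* Page-flag bits left free by the rest of the configuration (assumed value). */
#define NR_UNUSED_PAGEFLAG_BITS	12U

static int __init pgalloc_tag_ref_check(void)
{
	unsigned long nr_tags = __stop_alloc_tags - __start_alloc_tags;
	/* Bits needed to address tag indices 1..nr_tags, with 0 reserved
	 * to mean "no tag". */
	unsigned int needed = fls_long(nr_tags);

	if (needed > NR_UNUSED_PAGEFLAG_BITS)
		pr_err("alloc_tag: %lu tags need %u page-flag bits but only %u are free; please disable CONFIG_PGALLOC_TAG_USE_PAGEFLAGS\n",
		       nr_tags, needed, NR_UNUSED_PAGEFLAG_BITS);

	return 0;
}
early_initcall(pgalloc_tag_ref_check);

Either way the inputs are the same: the tag count derived from the
section size, and the number of page-flag bits the configuration leaves
free.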

> >> - If there are enough unused bits but we have to push last_cpupid out
> >> of page flags, we issue a warning and continue. The user can disable
> >> CONFIG_PGALLOC_TAG_USE_PAGEFLAGS if last_cpupid has to stay in page
> >> flags.
>
> Let's try to decide now, what that tradeoff should be. Just pick one based
> on what some of us perceive to be the expected usefulness and frequency of
> use between last_cpupid and these tag refs.
>
> If someone really needs to change the tradeoff for that one bit, then that
> someone is also likely able to hack up a change for it.

Yeah, from all the feedback I realize that by pursuing maximum
flexibility I made configuring this mechanism close to impossible. I
think the first step towards simplifying this would be to identify the
usable configurations. From that POV, I can see 3 useful modes:
1. Page flags are not used. In this mode we keep using direct pointer
references and page extensions, like we do today. This mode is for
configurations that don't have enough free page flags. It is a safe
default which keeps things as they are today and should always work.
2. Page flags are used but not forced. This means we try to use all free
page flag bits (up to a reasonable limit of 16) without pushing
last_cpupid out of page flags.
3. Page flags are forced. This means we try to use all free page flag
bits after pushing last_cpupid out of page flags. This mode could be
used if the user cares about memory profiling more than about the
performance overhead caused by pushing last_cpupid out.
I'm not 100% sure (3) is needed, so I think we can skip it until someone
asks for it. It should be easy to add in the future. (A rough sketch of
what modes (1) and (2) could look like is appended at the end of this
message.)

If we detect at build time that we don't have enough page flag bits to
cover kernel allocations for modes (2) or (3), we issue an error
prompting the user to reconfigure to mode (1). Ideally, I would like (2)
to be the default mode with an automatic fallback to (1) when it's
impossible, but as I mentioned before, I don't yet see a way to do that
automatically.

For loadable modules, I think my earlier suggestion should work fine. If
a module causes us to run out of space for tags, we disable memory
profiling at runtime and log a warning stating that memory profiling has
been disabled and that, if the user needs it, they should configure mode
(1). I *think* I can even disable profiling only for that module rather
than globally, but I need to try that first.

I can start with support for modes (1) and (2), which requires only
CONFIG_PGALLOC_TAG_USE_PAGEFLAGS defaulted to N. Any user can try
enabling this config and, if it builds fine, keep it for better
performance and memory usage. Does that sound acceptable?
Thanks,
Suren.

>
> thanks,
> --
> John Hubbard
>
> >> - If we run out of addressing space during module loading, we disable
> >> allocation tagging and continue. The user should disable
> >> CONFIG_PGALLOC_TAG_USE_PAGEFLAGS.
> >
> > If the computer already knows what to do, it should do it, rather than
> > prompting the user to disable a deeply mystifying config parameter.
> >
> >> This leaves one outstanding case:
> >> - If we run out of addressing space during module loading but we would
> >> not run out of space if we pushed last_cpupid out of page flags during
> >> compilation.
> >> In this case I would want the user to have an option to request a
> >> larger addressing space for page allocation tags at compile time.
> >> Maybe I can keep CONFIG_PGALLOC_TAG_REF_BITS for such explicit
> >> requests for a larger space?
> >> This would limit the use of CONFIG_PGALLOC_TAG_REF_BITS to this case
> >> only. In all other cases the number of bits would be set
> >> automatically. WDYT?
> >
> > Manually dealing with something like this is just not going to work.
> >
> > The more I read this story, the clearer it becomes that this should be
> > entirely done by the build system: set it, or don't set it,
> > automatically.
> >
> > And if you can make it not even a kconfig item at all, that's probably
> > even better.
> >
> > And if there is no way to set it automatically, then that probably
> > means that the feature is still too raw to unleash upon the world.
> >
> > thanks,
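For reference, here is the rough sketch of modes (1) and (2) mentioned
above. It is an illustration only, not the actual patch:
NR_UNUSED_PAGEFLAG_BITS, PGALLOC_TAG_REF_SHIFT and the helper names are
invented placeholders, and a real implementation would have to
coordinate with concurrent page-flag updates rather than doing a plain
read-modify-write:

/*
 * Rough sketch of modes (1) and (2) -- not the actual patch.  All macro
 * and helper names below are placeholders invented for illustration.
 */
#include <linux/mm_types.h>

#ifdef CONFIG_PGALLOC_TAG_USE_PAGEFLAGS

/* Assumed layout constants: how many page->flags bits are unused in this
 * configuration, and where the spare bits start. */
#define NR_UNUSED_PAGEFLAG_BITS	12
#define PGALLOC_TAG_REF_SHIFT	48

/* Mode (2): use the spare bits, capped at 16, without evicting last_cpupid. */
#define PGALLOC_TAG_REF_BITS \
	(NR_UNUSED_PAGEFLAG_BITS < 16 ? NR_UNUSED_PAGEFLAG_BITS : 16)
#define PGALLOC_TAG_REF_MASK	((1UL << PGALLOC_TAG_REF_BITS) - 1)

/* Store an index into the alloc_tag section directly in page->flags. */
static inline void pgalloc_tag_set_ref(struct page *page, unsigned long idx)
{
	unsigned long flags = page->flags;

	flags &= ~(PGALLOC_TAG_REF_MASK << PGALLOC_TAG_REF_SHIFT);
	flags |= (idx & PGALLOC_TAG_REF_MASK) << PGALLOC_TAG_REF_SHIFT;
	page->flags = flags;	/* real code must not race with flag updates */
}

static inline unsigned long pgalloc_tag_get_ref(const struct page *page)
{
	return (page->flags >> PGALLOC_TAG_REF_SHIFT) & PGALLOC_TAG_REF_MASK;
}

#else /* !CONFIG_PGALLOC_TAG_USE_PAGEFLAGS */

/*
 * Mode (1): no page-flag bits are consumed; the full codetag_ref pointer
 * keeps living in a page extension, as mainline does today.
 */

#endif /* CONFIG_PGALLOC_TAG_USE_PAGEFLAGS */

With this split, CONFIG_PGALLOC_TAG_USE_PAGEFLAGS defaulting to N gives
mode (1) unless the user opts in and the post-build check passes.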