Date: Thu, 10 Apr 2025 17:51:51 +0100
From: Matthew Wilcox <willy@infradead.org>
To: Jason Gunthorpe
Cc: Mike Rapoport, Pratyush Yadav, Changyuan Lyu, linux-kernel@vger.kernel.org, graf@amazon.com, akpm@linux-foundation.org, luto@kernel.org, anthony.yznaga@oracle.com, arnd@arndb.de, ashish.kalra@amd.com, benh@kernel.crashing.org, bp@alien8.de, catalin.marinas@arm.com, dave.hansen@linux.intel.com, dwmw2@infradead.org, ebiederm@xmission.com, mingo@redhat.com, jgowans@amazon.com, corbet@lwn.net, krzk@kernel.org, mark.rutland@arm.com, pbonzini@redhat.com, pasha.tatashin@soleen.com, hpa@zytor.com, peterz@infradead.org, robh+dt@kernel.org, robh@kernel.org, saravanak@google.com, skinsburskii@linux.microsoft.com, rostedt@goodmis.org, tglx@linutronix.de, thomas.lendacky@amd.com, usama.arif@bytedance.com, will@kernel.org, devicetree@vger.kernel.org, kexec@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org
Subject: Re: [PATCH v5 09/16] kexec: enable KHO support for memory preservation
References: <20250407141626.GB1557073@nvidia.com> <20250407170305.GI1557073@nvidia.com> <20250409125630.GI1778492@nvidia.com> <20250409153714.GK1778492@nvidia.com> <20250409162837.GN1778492@nvidia.com>
In-Reply-To: <20250409162837.GN1778492@nvidia.com>
On Wed, Apr 09, 2025 at 01:28:37PM -0300, Jason Gunthorpe wrote:
> On Wed, Apr 09, 2025 at 07:19:30PM +0300, Mike Rapoport wrote:
> > But we have memdesc today, it's struct page.
> 
> No, I don't think it is. struct page seems to be turning into
> something legacy that indicates the code has not been converted to the
> new stuff yet.

No, struct page will be with us for a while.
Possibly forever.  I have started reluctantly talking about a future in
which there aren't struct pages, but it's really premature at this
point.  That's a 2030 kind of future.  For 2025-2029, we will still
have alloc_page(s)().  It's just that the size of struct page will be
gradually shrinking over that time.

> > And when the data structure that memdesc points to will be allocated
> > separately folios won't make sense for order-0 allocations.
> 
> At that point the lowest level allocator function will be allocating
> the memdesc along with the struct page. Then folio will become
> restricted to only actual folio memdescs and alot of the type punning
> should go away. We are not there yet.

We'll have a few allocator functions.  There'll be a slab_alloc(),
folio_alloc(), pt_alloc() and so on.  I sketched out how these might
work last year: https://kernelnewbies.org/MatthewWilcox/FolioAlloc

> > > The lowest allocator primitive returns folios, which can represent any
> > > order, and the caller casts to their own memdesc.
> > 
> > The lowest allocation primitive returns pages.
> 
> Yes, but as I understand things, we should not be calling that
> interface in new code because we are trying to make 'struct page' go
> away.
> 
> Instead you should use the folio interfaces and cast to your own
> memdesc, or use an allocator interface that returns void * (ie slab)
> and never touch the struct page area.
> 
> AFAICT, and I just wrote one of these..

Casting is the best you can do today because I haven't provided a
better interface yet.

> > And I don't think folio will be a lowest primitive buddy returns anytime
> > soon if ever.
> 
> Maybe not internally, but driver facing, I think it should be true.
> Like I just completely purged all struct page from the iommu code:
> 
> https://lore.kernel.org/linux-iommu/0-v4-c8663abbb606+3f7-iommu_pages_jgg@nvidia.com/
> 
> I don't want some weird KHO interface that doesn't align with using
> __folio_alloc_node() and folio_put() as the lowest level allocator
> interface.

I think it's fine to say "the KHO interface doesn't support bare
pages; you must have a memdesc".  But I'm not sure that's the right
approach.