From: Pratyush Yadav <pratyush@kernel.org>
To: Mike Rapoport
Cc: Pratyush Yadav, Pasha Tatashin, Alexander Graf, Changyuan Lyu, Andrew Morton, Baoquan He, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Michal Clapinski
Subject: Re: [PATCH] kho: initialize tail pages for higher order folios properly
Date: Fri, 13 Jun 2025 16:22:32 +0200

On Wed, Jun 11 2025, Mike Rapoport wrote:

> On Wed, Jun 11, 2025 at 04:01:52PM +0200, Pratyush Yadav wrote:
>> On Wed, Jun 11 2025, Mike Rapoport wrote:
>>
>> > On Wed, Jun 11, 2025 at 09:14:55AM -0400, Pasha Tatashin wrote:
>> >> On Wed, Jun 11, 2025 at 9:06 AM Pratyush Yadav wrote:
>> >> >
>> >> > On Tue, Jun 10 2025, Pasha Tatashin wrote:
>> >> >
>> >> > >> > > I think it should be the other way around, KHO should depend on
>> >> > >> > > !DEFERRED_STRUCT_PAGE_INIT.
>> >> > >> >
>> >> > >> > Agreed, and this is what I first tried, but that does not work, there
>> >> > >> > is some circular dependency breaking the build. If you feel
>> >> > >> > adventurous you can try that :-)
>> >> > >>
>> >> > >> Hmm, weird, worked for me :/
>> >> >
>> >> > Worked for me as well.
>> >> >
>> >> > >
>> >> > > I am super confused, it did not work for me over the weekend, and now it
>> >> > > is working. Even `make menuconfig` would not work. Anyways, I will put
>> >> > > it in the appropriate place.
>> >> > >
>> >> >
>> >> > >> > > > We will need to teach KHO to work with deferred struct page init. I
>> >> > >> > > > suspect we could init preserved struct pages and then skip over them
>> >> > >> > > > during deferred init.
>> >> > >> > >
>> >> > >> > > We could, but that would mean we'll run this before SMP and it's not
>> >> > >> > > desirable. Also, init_deferred_page() for a random page requires
>> >> > >> >
>> >> > >> > We already run KHO init before smp_init:
>> >> > >> > start_kernel() -> mm_core_init() -> kho_memory_init() ->
>> >> > >> > kho_restore_folio() -> struct pages must be already initialized here!
>> >> > >> >
>> >> > >> > While deferred struct pages are initialized:
>> >> > >> > start_kernel() -> rest_init() -> kernel_init() ->
>> >> > >> > kernel_init_freeable() -> page_alloc_init_late() ->
>> >> > >> > deferred_init_memmap()
>> >> > >> >
>> >> > >> > If the number of preserved pages that is needed during early boot is
>> >> > >> > relatively small, then it should not be an issue to pre-initialize
>> >> > >> > struct pages for them before deferred struct pages are initialized. We
>> >> > >> > already pre-initialize some "struct pages" that are needed during
>> >> > >> > early boot before the rest are initialized, see deferred_grow_zone()
>> >> > >>
>> >> > >> deferred_grow_zone() takes a chunk in the beginning of an uninitialized
>> >> > >> range, with kho we are talking about some random pages. If we preinit
>> >> > >> them early, deferred_init_memmap() will overwrite them.
>> >> > >
>> >> > > Yes, this is why I am saying that we would need to skip the KHO
>> >> > > initialized "struct pages" somehow during deferred initialization. If
>> >> > > we create a PFN-ordered list of early-initialized KHO struct
>> >> > > pages, skipping them during deferred initialization could be done
>> >> > > efficiently.
>> >> >
>> >> > Or keep things simple and don't use any KHO struct pages during early
>> >> > init. You can access the page itself, just don't use its struct page.
>> >> >
>> >> > Currently the only user of kho_restore_folio() during init is
>> >> > kho_memory_init(). The FDT is accessed by doing
>> >> > phys_to_virt(kho_in.fdt_phys) anyway, so there is really no need for
>> >> > restoring the folio so early. It can be done later, for example when LUO
>> >> > does the finish event, to clean up and free the folio.
>> >>
>> >> Good suggestion; however, KHO does not yet have any of the sophisticated
>> >> users that we are going to be adding as part of the live update work in
>> >> the future: IR, KVM, early VCPU threads, and so on. So while this might
>> >> work today, I am not sure if, in the future, we should expect that struct
>> >> pages are not accessed until after deferred initialization, or simply
>> >> fix it once and for all.
>> >
>> > KHO already accesses struct page early and uses page->private for order.
>> > Since preserved memory is reserved in memblock, deferred init of struct
>> > pages won't touch those pages, we just need to make sure they are properly
>>
>> Not strictly true. Some of them might have been initialized from
>> free_area_init() -> memmap_init() (the ones not eligible for deferred
>> init), which happens before KHO makes its memblock reservations.
>>
>> > initialized at some point. If we don't expect many kho_restore_folio()
>> > calls before page_alloc_init_late() we can use init_deferred_page() for
>> > early accesses.
>>
>> I tried doing this when looking into this initially, but it doesn't work
>> for some reason.
>>
>> static void kho_restore_page(struct page *page, unsigned int order)
>> {
>> 	unsigned int i, nr_pages = (1 << order);
>>
>> 	/* Head page gets refcount of 1. */
>> 	init_deferred_page(page_to_pfn(page), NUMA_NO_NODE);
>
> This would do
>
> 	if (early_page_initialised(pfn, nid))
> 		return;
>
> 	__init_page_from_nid(pfn, nid);
>
> and I'm really surprised it didn't crash in early_page_initialised()
> because of NUMA_NO_NODE :)

Oh, right. Using the wrong node completely throws early_page_initialised()
off.

> What might work here is
>
> 	pfn = page_to_pfn(page);
> 	__init_page_from_nid(pfn, early_pfn_to_nid(pfn));

Yep, that works. Although this would do early_pfn_to_nid() for each page,
so it isn't very efficient. And we also need to make sure memblock does
not go away.

>> 	set_page_count(page, 1);
>>
>> 	/* For higher order folios, tail pages get a page count of zero. */
>> 	for (i = 1; i < nr_pages; i++) {
>> 		init_deferred_page(page_to_pfn(page + i), NUMA_NO_NODE);
>> 		set_page_count(page + i, 0);
>> 	}
>>
>> [...]

[...]

-- 
Regards,
Pratyush Yadav