* [PATCH v2 0/2] kho: clean up page initialization logic

From: Pratyush Yadav @ 2026-01-16 11:22 UTC (permalink / raw)
  To: Andrew Morton, Alexander Graf, Mike Rapoport, Pasha Tatashin,
	Pratyush Yadav
  Cc: kexec, linux-mm, linux-kernel, Suren Baghdasaryan

From: "Pratyush Yadav (Google)" <pratyush@kernel.org>

Hi,

This series simplifies the page initialization logic in
kho_restore_page(). It was originally a single patch [0], but on
Pasha's suggestion I added another patch that switches nr_pages to
unsigned long.

Strictly speaking, the two patches are unrelated and could be applied
independently, but patch 2 applies on top of patch 1, so they are
bundled together to keep them easier to manage.

Changes in v2:
- Use unsigned long for nr_pages.

[0] https://lore.kernel.org/all/20251223104448.195589-1-pratyush@kernel.org/

Pratyush Yadav (2):
  kho: use unsigned long for nr_pages
  kho: simplify page initialization in kho_restore_page()

 include/linux/kexec_handover.h     |  6 ++--
 kernel/liveupdate/kexec_handover.c | 47 +++++++++++++++++++-----------
 2 files changed, 33 insertions(+), 20 deletions(-)

base-commit: 0f61b1860cc3f52aef9036d7235ed1f017632193
-- 
2.52.0.457.g6b5491de43-goog
* [PATCH v2 1/2] kho: use unsigned long for nr_pages

From: Pratyush Yadav @ 2026-01-16 11:22 UTC (permalink / raw)
  To: Andrew Morton, Alexander Graf, Mike Rapoport, Pasha Tatashin,
	Pratyush Yadav
  Cc: kexec, linux-mm, linux-kernel, Suren Baghdasaryan

With 4k pages, a 32-bit nr_pages can span up to 16 TiB. While it is a
lot, there exist systems with terabytes of RAM. gup is also moving to
using long for nr_pages. Use unsigned long and make KHO future-proof.

Suggested-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Signed-off-by: Pratyush Yadav <pratyush@kernel.org>
---

Changes in v2:
- New in v2.

 include/linux/kexec_handover.h     |  6 +++---
 kernel/liveupdate/kexec_handover.c | 11 ++++++-----
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/include/linux/kexec_handover.h b/include/linux/kexec_handover.h
index 5f7b9de97e8d..81814aa92370 100644
--- a/include/linux/kexec_handover.h
+++ b/include/linux/kexec_handover.h
@@ -45,15 +45,15 @@ bool is_kho_boot(void);
 
 int kho_preserve_folio(struct folio *folio);
 void kho_unpreserve_folio(struct folio *folio);
-int kho_preserve_pages(struct page *page, unsigned int nr_pages);
-void kho_unpreserve_pages(struct page *page, unsigned int nr_pages);
+int kho_preserve_pages(struct page *page, unsigned long nr_pages);
+void kho_unpreserve_pages(struct page *page, unsigned long nr_pages);
 int kho_preserve_vmalloc(void *ptr, struct kho_vmalloc *preservation);
 void kho_unpreserve_vmalloc(struct kho_vmalloc *preservation);
 void *kho_alloc_preserve(size_t size);
 void kho_unpreserve_free(void *mem);
 void kho_restore_free(void *mem);
 struct folio *kho_restore_folio(phys_addr_t phys);
-struct page *kho_restore_pages(phys_addr_t phys, unsigned int nr_pages);
+struct page *kho_restore_pages(phys_addr_t phys, unsigned long nr_pages);
 void *kho_restore_vmalloc(const struct kho_vmalloc *preservation);
 int kho_add_subtree(const char *name, void *fdt);
 void kho_remove_subtree(void *fdt);
diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
index 9dc51fab604f..709484fbf9fd 100644
--- a/kernel/liveupdate/kexec_handover.c
+++ b/kernel/liveupdate/kexec_handover.c
@@ -222,7 +222,8 @@ static int __kho_preserve_order(struct kho_mem_track *track, unsigned long pfn,
 static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
 {
 	struct page *page = pfn_to_online_page(PHYS_PFN(phys));
-	unsigned int nr_pages, ref_cnt;
+	unsigned long nr_pages;
+	unsigned int ref_cnt;
 	union kho_page_info info;
 
 	if (!page)
@@ -249,7 +250,7 @@ static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
 	 * count of 1
 	 */
 	ref_cnt = is_folio ? 0 : 1;
-	for (unsigned int i = 1; i < nr_pages; i++)
+	for (unsigned long i = 1; i < nr_pages; i++)
 		set_page_count(page + i, ref_cnt);
 
 	if (is_folio && info.order)
@@ -283,7 +284,7 @@ EXPORT_SYMBOL_GPL(kho_restore_folio);
  *
  * Return: 0 on success, error code on failure
  */
-struct page *kho_restore_pages(phys_addr_t phys, unsigned int nr_pages)
+struct page *kho_restore_pages(phys_addr_t phys, unsigned long nr_pages)
 {
 	const unsigned long start_pfn = PHYS_PFN(phys);
 	const unsigned long end_pfn = start_pfn + nr_pages;
@@ -829,7 +830,7 @@ EXPORT_SYMBOL_GPL(kho_unpreserve_folio);
  *
  * Return: 0 on success, error code on failure
  */
-int kho_preserve_pages(struct page *page, unsigned int nr_pages)
+int kho_preserve_pages(struct page *page, unsigned long nr_pages)
 {
 	struct kho_mem_track *track = &kho_out.track;
 	const unsigned long start_pfn = page_to_pfn(page);
@@ -873,7 +874,7 @@ EXPORT_SYMBOL_GPL(kho_preserve_pages);
  * kho_preserve_pages() call. Unpreserving arbitrary sub-ranges of larger
  * preserved blocks is not supported.
  */
-void kho_unpreserve_pages(struct page *page, unsigned int nr_pages)
+void kho_unpreserve_pages(struct page *page, unsigned long nr_pages)
 {
 	struct kho_mem_track *track = &kho_out.track;
 	const unsigned long start_pfn = page_to_pfn(page);
-- 
2.52.0.457.g6b5491de43-goog
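As a concrete check of the numbers in the commit message above: with
4 KiB pages, a 32-bit page count tops out just under 16 TiB, and a
larger request passed through a 32-bit parameter silently truncates.
The sketch below is plain userspace C, separate from KHO; every name in
it is illustrative.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* 2^32 - 1 pages of 4 KiB each: just under 16 TiB. */
	uint64_t max_bytes = (uint64_t)UINT32_MAX * 4096;
	printf("u32 nr_pages covers at most ~%llu TiB\n",
	       (unsigned long long)(max_bytes >> 40));

	/* A larger count passed through a u32 parameter truncates. */
	uint64_t want_pages = 1ULL << 33;	/* a 32 TiB range */
	uint32_t nr_pages = (uint32_t)want_pages;
	printf("requested %llu pages, u32 parameter sees %u\n",
	       (unsigned long long)want_pages, nr_pages);	/* prints 0 */

	return 0;
}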
* Re: [PATCH v2 1/2] kho: use unsigned long for nr_pages

From: Andrew Morton @ 2026-01-16 22:26 UTC (permalink / raw)
  To: Pratyush Yadav
  Cc: Alexander Graf, Mike Rapoport, Pasha Tatashin, kexec, linux-mm,
	linux-kernel, Suren Baghdasaryan

On Fri, 16 Jan 2026 11:22:14 +0000 Pratyush Yadav <pratyush@kernel.org> wrote:

> With 4k pages, a 32-bit nr_pages can span up to 16 TiB. While it is a
> lot, there exist systems with terabytes of RAM. gup is also moving to
> using long for nr_pages. Use unsigned long and make KHO future-proof.

We can expect people to be using LTS kernels five years from now,
perhaps much longer. Machines will be bigger then!

IOW, shouldn't we backport this?
* Re: [PATCH v2 1/2] kho: use unsigned long for nr_pages

From: Mike Rapoport @ 2026-01-20 13:06 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Pratyush Yadav, Alexander Graf, Pasha Tatashin, kexec, linux-mm,
	linux-kernel, Suren Baghdasaryan

On Fri, Jan 16, 2026 at 02:26:35PM -0800, Andrew Morton wrote:
> On Fri, 16 Jan 2026 11:22:14 +0000 Pratyush Yadav <pratyush@kernel.org> wrote:
> 
> > With 4k pages, a 32-bit nr_pages can span up to 16 TiB. While it is a
> > lot, there exist systems with terabytes of RAM. gup is also moving to
> > using long for nr_pages. Use unsigned long and make KHO future-proof.
> 
> We can expect people to be using LTS kernels five years from now,
> perhaps much longer. Machines will be bigger then!
> 
> IOW, shouldn't we backport this?

The latest LTS is 6.12, which still does not have KHO. I don't think it
makes sense to backport this to 6.18.

-- 
Sincerely yours,
Mike.
* Re: [PATCH v2 1/2] kho: use unsigned long for nr_pages

From: Mike Rapoport @ 2026-01-20 13:03 UTC (permalink / raw)
  To: Pratyush Yadav
  Cc: Andrew Morton, Alexander Graf, Pasha Tatashin, kexec, linux-mm,
	linux-kernel, Suren Baghdasaryan

On Fri, Jan 16, 2026 at 11:22:14AM +0000, Pratyush Yadav wrote:
> With 4k pages, a 32-bit nr_pages can span up to 16 TiB. While it is a
> lot, there exist systems with terabytes of RAM. gup is also moving to
> using long for nr_pages. Use unsigned long and make KHO future-proof.
> 
> Suggested-by: Pasha Tatashin <pasha.tatashin@soleen.com>
> Signed-off-by: Pratyush Yadav <pratyush@kernel.org>

Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>

> [...]

-- 
Sincerely yours,
Mike.
* Re: [PATCH v2 1/2] kho: use unsigned long for nr_pages

From: Pasha Tatashin @ 2026-01-22 19:08 UTC (permalink / raw)
  To: Pratyush Yadav
  Cc: Andrew Morton, Alexander Graf, Mike Rapoport, kexec, linux-mm,
	linux-kernel, Suren Baghdasaryan

> [...]
> -void kho_unpreserve_pages(struct page *page, unsigned int nr_pages)
> +void kho_unpreserve_pages(struct page *page, unsigned long nr_pages)

Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
* [PATCH v2 2/2] kho: simplify page initialization in kho_restore_page()

From: Pratyush Yadav @ 2026-01-16 11:22 UTC (permalink / raw)
  To: Andrew Morton, Alexander Graf, Mike Rapoport, Pasha Tatashin,
	Pratyush Yadav
  Cc: kexec, linux-mm, linux-kernel, Suren Baghdasaryan

When restoring a page (from kho_restore_pages()) or a folio (from
kho_restore_folio()), KHO must initialize the struct page. The
initialization differs slightly depending on whether a folio or a set
of 0-order pages is requested.

Conceptually, it is quite simple. When restoring 0-order pages, each
page gets a refcount of 1 and that's it. When restoring a folio, the
head page gets a refcount of 1 and the tail pages get 0.

kho_restore_page() tries to combine the two separate initialization
flows into one piece of code. While it works fine, it is more
complicated to read than it needs to be. Make the code simpler by
splitting the two initialization paths into two separate functions.
This improves readability by clearly showing how each type must be
initialized.

Signed-off-by: Pratyush Yadav <pratyush@kernel.org>
---

Changes in v2:
- Use unsigned long for nr_pages.

 kernel/liveupdate/kexec_handover.c | 40 +++++++++++++++++++-----------
 1 file changed, 26 insertions(+), 14 deletions(-)

diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
index 709484fbf9fd..92da76977684 100644
--- a/kernel/liveupdate/kexec_handover.c
+++ b/kernel/liveupdate/kexec_handover.c
@@ -219,11 +219,32 @@ static int __kho_preserve_order(struct kho_mem_track *track, unsigned long pfn,
 	return 0;
 }
 
+/* For physically contiguous 0-order pages. */
+static void kho_init_pages(struct page *page, unsigned long nr_pages)
+{
+	for (unsigned long i = 0; i < nr_pages; i++)
+		set_page_count(page + i, 1);
+}
+
+static void kho_init_folio(struct page *page, unsigned int order)
+{
+	unsigned long nr_pages = (1 << order);
+
+	/* Head page gets refcount of 1. */
+	set_page_count(page, 1);
+
+	/* For higher order folios, tail pages get a page count of zero. */
+	for (unsigned long i = 1; i < nr_pages; i++)
+		set_page_count(page + i, 0);
+
+	if (order > 0)
+		prep_compound_page(page, order);
+}
+
 static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
 {
 	struct page *page = pfn_to_online_page(PHYS_PFN(phys));
 	unsigned long nr_pages;
-	unsigned int ref_cnt;
 	union kho_page_info info;
 
 	if (!page)
@@ -241,20 +262,11 @@ static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
 
 	/* Clear private to make sure later restores on this page error out. */
 	page->private = 0;
-	/* Head page gets refcount of 1. */
-	set_page_count(page, 1);
-
-	/*
-	 * For higher order folios, tail pages get a page count of zero.
-	 * For physically contiguous order-0 pages every pages gets a page
-	 * count of 1
-	 */
-	ref_cnt = is_folio ? 0 : 1;
-	for (unsigned long i = 1; i < nr_pages; i++)
-		set_page_count(page + i, ref_cnt);
 
-	if (is_folio && info.order)
-		prep_compound_page(page, info.order);
+	if (is_folio)
+		kho_init_folio(page, info.order);
+	else
+		kho_init_pages(page, nr_pages);
 
 	adjust_managed_page_count(page, nr_pages);
 	return page;
-- 
2.52.0.457.g6b5491de43-goog
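The refcount convention described above is exactly what a KHO consumer
observes after a restore. Below is a minimal sketch of a hypothetical
caller, assuming the folio was preserved with kho_preserve_folio()
before kexec and that phys comes from the consumer's own KHO subtree;
example_restore_and_free() is illustrative, not in-tree code, though
kho_restore_folio() and folio_put() are the real APIs.

#include <linux/kexec_handover.h>
#include <linux/mm.h>

/*
 * Hypothetical consumer flow (illustrative, not in-tree code): restore
 * a folio preserved across kexec and release it back to the allocator.
 */
static int example_restore_and_free(phys_addr_t phys)
{
	struct folio *folio = kho_restore_folio(phys);

	if (!folio)
		return -EINVAL;

	/*
	 * Per the convention above, the head page now holds the only
	 * reference (refcount 1) and the tail pages hold none, so a
	 * single folio_put() frees the whole folio.
	 */
	folio_put(folio);
	return 0;
}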
* Re: [PATCH v2 2/2] kho: simplify page initialization in kho_restore_page()

From: Mike Rapoport @ 2026-01-20 13:05 UTC (permalink / raw)
  To: Pratyush Yadav
  Cc: Andrew Morton, Alexander Graf, Pasha Tatashin, kexec, linux-mm,
	linux-kernel, Suren Baghdasaryan

On Fri, Jan 16, 2026 at 11:22:15AM +0000, Pratyush Yadav wrote:
> When restoring a page (from kho_restore_pages()) or a folio (from
> kho_restore_folio()), KHO must initialize the struct page. The
> initialization differs slightly depending on whether a folio or a set
> of 0-order pages is requested.
> 
> Conceptually, it is quite simple. When restoring 0-order pages, each
> page gets a refcount of 1 and that's it. When restoring a folio, the
> head page gets a refcount of 1 and the tail pages get 0.
> 
> kho_restore_page() tries to combine the two separate initialization
> flows into one piece of code. While it works fine, it is more
> complicated to read than it needs to be. Make the code simpler by
> splitting the two initialization paths into two separate functions.
> This improves readability by clearly showing how each type must be
> initialized.
> 
> Signed-off-by: Pratyush Yadav <pratyush@kernel.org>

Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>

> [...]

-- 
Sincerely yours,
Mike.
* Re: [PATCH v2 2/2] kho: simplify page initialization in kho_restore_page()

From: Pasha Tatashin @ 2026-01-22 19:11 UTC (permalink / raw)
  To: Pratyush Yadav
  Cc: Andrew Morton, Alexander Graf, Mike Rapoport, kexec, linux-mm,
	linux-kernel, Suren Baghdasaryan

On Fri, Jan 16, 2026 at 6:22 AM Pratyush Yadav <pratyush@kernel.org> wrote:
>
> When restoring a page (from kho_restore_pages()) or a folio (from
> kho_restore_folio()), KHO must initialize the struct page. The
> initialization differs slightly depending on whether a folio or a set
> of 0-order pages is requested.
>
> Conceptually, it is quite simple. When restoring 0-order pages, each
> page gets a refcount of 1 and that's it. When restoring a folio, the
> head page gets a refcount of 1 and the tail pages get 0.
>
> kho_restore_page() tries to combine the two separate initialization
> flows into one piece of code. While it works fine, it is more
> complicated to read than it needs to be. Make the code simpler by
> splitting the two initialization paths into two separate functions.
> This improves readability by clearly showing how each type must be
> initialized.
>
> Signed-off-by: Pratyush Yadav <pratyush@kernel.org>

Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
end of thread, other threads:[~2026-01-22 19:12 UTC | newest]

Thread overview: 9+ messages
2026-01-16 11:22 [PATCH v2 0/2] kho: clean up page initialization logic Pratyush Yadav
2026-01-16 11:22 ` [PATCH v2 1/2] kho: use unsigned long for nr_pages Pratyush Yadav
2026-01-16 22:26   ` Andrew Morton
2026-01-20 13:06     ` Mike Rapoport
2026-01-20 13:03   ` Mike Rapoport
2026-01-22 19:08   ` Pasha Tatashin
2026-01-16 11:22 ` [PATCH v2 2/2] kho: simplify page initialization in kho_restore_page() Pratyush Yadav
2026-01-20 13:05   ` Mike Rapoport
2026-01-22 19:11   ` Pasha Tatashin