From: Benjamin Herrenschmidt <benh@kernel.crashing.org>
To: Mike Rapoport <rppt@kernel.org>
Cc: linux-mm@kvack.org, Alexander Potapenko <glider@google.com>,
	Marco Elver <elver@google.com>,
	Dmitry Vyukov <dvyukov@google.com>
Subject: Re: [PATCH] mm: Fix memblock_free_late() when using deferred struct page
Date: Mon, 16 Feb 2026 15:53:33 +1100	[thread overview]
Message-ID: <1e83d9c3cf11ba825237dbc7d6a70ba47ab328cc.camel@kernel.crashing.org> (raw)
In-Reply-To: <aYtBctm4TlHGnzXV@kernel.org>

(stripping history)

So I went into a big refresher (or a learning exercise, since there's
quite a bit here that I never really looked at before either).

So here is a breakdown, in chronological order, of the setup and
initialization of the memory map, and of how the reserve business
interacts with it, as I understand it from reading the code.

Please correct me if I missed or misunderstood something :-) Also maybe
this is worth turning into a piece of doc ?

Then some conclusions (I think I know why the patches crashed).

1) Setting up the memblock maps
-------------------------------

This is the first thing that happens, usually deep in arch code (though
DT based archs use common code for it).

 * memblock.memory is initialized (from e820 in our case). In the e820
case, we only populate what is explicitly marked as usable. So we have
a pile of holes in there, especially around low memory where ACPI
sticks a bunch of things.

So for example, this snippet:

[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
[    0.000000] BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
[    0.000000] BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable

Will result in a 'hole' from 0x0000000000800000 to 0x0000000000807fff.
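To make the hole arithmetic concrete, here is a toy user-space sketch (not kernel code; the range list and the hole_after() helper are made up for illustration) that finds the gap between two consecutive usable e820 ranges:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* A usable range [start, end], inclusive, as printed in the e820 log. */
struct range { uint64_t start, end; };

/*
 * Given a sorted list of usable ranges, return the hole that
 * immediately follows range 'i', or a zero range if the next
 * usable range is contiguous.
 */
static struct range hole_after(const struct range *usable, size_t n, size_t i)
{
	struct range hole = { 0, 0 };

	if (i + 1 < n && usable[i].end + 1 < usable[i + 1].start) {
		hole.start = usable[i].end + 1;
		hole.end = usable[i + 1].start - 1;
	}
	return hole;
}
```

Fed the usable ranges from the snippet above, this reports exactly the 0x800000-0x807fff gap left by the ACPI NVS region.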

 * This is also where we collect the EFI boot services memory map and
plonk it in memblock.reserved on x86 via efi_reserve_boot_services().
This will be useful later.

From this point, memblock is the memory allocator.

2) Allocation of memory backing for struct pages (memmap).
----------------------------------------------------------

Before we poke at struct pages, they need to exist.

On sparsemem systems, this happens at
setup_arch() -> ... -> paging_init() (in arch code)
which calls sparse_init() to do the job.

From my understanding, the memmap is effectively created (though not
initialized) in sections by memblocks_present() in sparse.c, which
iterates the memblock.memory list (coming from e820 above) and calls
memory_present() for each usable chunk.

On sparsemem, the section_mem_map in the memory sections is set to
track which sections have mapped backing pages, for later use by
pfn_valid().
Note that the hole we had in my example is too small to result in a
missing sparsemem allocation, but any big enough hole (as big as a
section) could result in struct page(s) not existing at all.
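The section arithmetic can be sketched like this (constants are the x86_64 values with 4 KiB pages and 128 MiB sections; toy code, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT        12	/* 4 KiB pages */
#define SECTION_SIZE_BITS 27	/* 128 MiB sections, the x86_64 value */
#define PFN_SECTION_SHIFT (SECTION_SIZE_BITS - PAGE_SHIFT)

/* Same arithmetic as the kernel's pfn_to_section_nr() macro. */
static inline uint64_t pfn_to_section_nr(uint64_t pfn)
{
	return pfn >> PFN_SECTION_SHIFT;
}
```

The example hole (0x800000..0x807fff) lives entirely in section 0, which is present anyway thanks to the usable memory around it, so its struct pages do get allocated; a hole covering a whole 128 MiB section would not be so lucky.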

For non-sparsemem systems, the mem_map allocation happens a little bit
later, in paging_init() -> zone_sizes_init() -> free_area_init() ->
free_area_init_node(), but for all intents and purposes, it is the
same time.

3) Early initialization of struct pages
---------------------------------------

Once allocated, struct pages need to be initialized. We have a multi-
stage process due to the option of deferring that initialization to a
multithreaded process.

The first stage of initialization of struct pages happens in
paging_init() -> free_area_init(). So *right after* the allocation
mentioned above.

It sets up the zones and a bunch of other things (including
free_area_init_node() mentioned above), and eventually calls
memmap_init() which is the interesting bit here.

Ignoring the ZONE_DEVICE case for now, memmap_init() will iterate the
memblock.memory ranges (so the same ranges for which we ensured we have
allocated the corresponding sections of mem_map earlier) and the zones,
calling memmap_init_zone_range() for each combination:
 
First, for each valid intersection of a memory range and a zone,
memmap_init_zone_range() will initialize struct pages until
defer_init() says no more (ie, it defers by setting
pgdat->first_deferred_pfn to something other than ULONG_MAX).

We start with only one section. This is where the "deferral point" is
established. (There is a mechanism to "grow" that early initialization
on demand if early allocs need it but I'll ignore that for now as
well).

It also tracks the holes between the regions and calls
init_unavailable_range() for those (additionally memmap_init() calls
init_unavailable_range() one last time for any hole after the last
region).

Note that init_unavailable_range() is thus called for *every* hole
between the memory regions, regardless of whether we have deferred
something or not and regardless of whether we have allocated sections
of memory map or not at this point. The pfn_valid() test inside
init_unavailable_range() will take care of skipping the unallocated
sections of memory map. So far so good...
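A toy model of that skip logic (toy_pfn_valid() and section_present[] are made-up stand-ins for the real pfn_valid(), which consults section_mem_map):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define TOY_PFN_SECTION_SHIFT 15	/* 128 MiB sections, 4 KiB pages */

static bool section_present[16];	/* toy presence map, one flag per section */

/* Stand-in for pfn_valid(): is there a backing struct page at all? */
static bool toy_pfn_valid(uint64_t pfn)
{
	return section_present[pfn >> TOY_PFN_SECTION_SHIFT];
}

/*
 * Mimics init_unavailable_range(): walk a hole, touch only the pfns
 * that actually have a backing struct page, return how many we did.
 */
static unsigned long init_hole(uint64_t spfn, uint64_t epfn)
{
	unsigned long n = 0;

	for (uint64_t pfn = spfn; pfn < epfn; pfn++)
		if (toy_pfn_valid(pfn))
			n++;	/* real code: init struct page + SetPageReserved */
	return n;
}
```

A hole inside a present section gets fully initialized; a hole in a never-allocated section is skipped entirely, which is the behaviour described above.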

So at this point, we have:

 - mem_map allocated
 - "usable" memory ranges have struct pages initialized up to the
"deferral" point for not-already-reserved regions (additionally marked
reserved already for ZONE_DEVICE, otherwise not).
 - holes between memory ranges have struct pages initialized and
reserved, provided they have corresponding backing struct pages
allocated (present sections).
 - what is uninitialized at this point are any struct pages above the
deferral point. Anything else is initialized. Not all reservations are
represented yet.

Ie, the memmap has backing memory, initialized for all holes and up to
the deferral point for the rest, and only reserved for holes (and
ZONE_DEVICE). We still have work to do :-)

Now, we go back from setup_arch() to the main boot process, memblock is
still "live" and our primary memory allocator.

4) Transition to the page allocator
-----------------------------------

A bit later, still fairly early during boot, it's time to enable the
page allocator and slab. It all starts with mm_core_init() ->
mem_init() in arch code.

Now mem_init() has been abused over time to do more than just this, but
the meat here is that it eventually calls memblock_free_all(). This is
when we start actually "freeing" pages and reserving memblock.reserved
pages.

 * First we call free_unused_memmap(). From what I can tell, this
frees bits of the mem_map that aren't covered by memblock.memory. Now
I'm not too sure what the purpose of this is at this point, as we
already only allocated the mem_map for what's in memblock.memory early
on. Could it be that we have code paths that take sections out of
memblock.memory between then and now that I missed ?

 * Then the meat of the matter: free_low_memory_core_early() which does
the interesting stuff, notably memmap_init_reserved_pages() and
__free_memory_core(). The former reserves the stuff that should be
reserved, the latter sends non-deferred and non-reserved pages to buddy
for use. Let's focus on the former:

 * memmap_init_reserved_pages() mostly does two passes: the first
looks for memblock.memory regions marked nomap and reserves them,
which I'll ignore. The second pass uses the
for_each_reserved_mem_region() iterator to mark memblock.reserved
regions reserved using reserve_bootmem_region().

This just walks memblock.reserved blindly; it doesn't specifically
limit itself to things covered by memblock.memory (ie e820). The
saving grace here is that it checks pfn_valid(), and so will avoid
holes in the mem_map.

There is no other check, so if a page happens to be marked as reserved
by the BIOS and also part of a "hole", the struct page will be
initialized twice.

In both cases we land in init_reserved_page() followed by
__SetPageReserved(). In both cases pfn_valid() should save the day if
the corresponding section of mem_map hasn't been allocated (which can
happen since we ignore memblock.memory).

Let's have a closer look. init_reserved_page() is called for basically
every reserved page in memblock.reserved for which a backing struct
page exists. However, the first thing it does is:

	if (early_page_initialised(pfn, nid))
		return;

That means that anything below the deferral point is skipped. Fair
enough, it has already been initialized as we established earlier
(note: the marking of PG_reserved happens in the caller, so it happens
regardless of that test, as expected).
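Putting the two checks together, the shape of the reserve walk is roughly this (everything with a toy_ prefix is a made-up stand-in, and the per-node details are dropped):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static bool present[16];		/* toy section presence, 128 MiB sections */
static uint64_t deferral_pfn = 0x2000;	/* toy first_deferred_pfn */
static unsigned long inited, reserved;	/* counters for what got done */

static bool toy_pfn_valid(uint64_t pfn)
{
	return present[pfn >> 15];
}

static bool toy_early_page_initialised(uint64_t pfn)
{
	return pfn < deferral_pfn;
}

/* Shape of reserve_bootmem_region(): init if needed, mark reserved always. */
static void toy_reserve_region(uint64_t spfn, uint64_t epfn)
{
	for (uint64_t pfn = spfn; pfn < epfn; pfn++) {
		if (!toy_pfn_valid(pfn))	/* hole in the memmap: skip */
			continue;
		if (!toy_early_page_initialised(pfn))
			inited++;	/* real code: init_reserved_page() */
		reserved++;		/* PG_reserved set regardless of the test */
	}
}
```

Reserving a range straddling the deferral point marks everything PG_reserved, but only initializes the part above the point, matching the walk described above.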

That does mean that there is a small window here for
double-initialization: reserved areas covering memory holes above the
deferral point will be initialized twice, once earlier as all holes
are, and once here. I don't think that's an issue however, is it ?

At this point, we thus have initialized and marked all
memblock.reserved pages properly (as long as they don't land in a
hole), whether they sit below or above the deferral point.

Next we actually free some memory into the page allocator with:

        for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE, &start, &end,
                                NULL)
                count += __free_memory_core(start, end);

Nothing much to add here, it skips reserved regions and "frees" the
remaining pages in the usable mem ranges.
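The effect of the iterator can be modelled as "memory minus reserved" (a toy count over one memory range, not the real for_each_free_mem_range() implementation):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

struct range { uint64_t start, end; };	/* inclusive pfn range */

/*
 * Count the pfns of 'mem' not covered by any entry of the sorted,
 * non-overlapping 'rsv' list -- ie, what would be handed to buddy.
 */
static uint64_t count_free(struct range mem, const struct range *rsv, size_t n)
{
	uint64_t free_pfns = 0, pfn = mem.start;

	for (size_t i = 0; i < n && pfn <= mem.end; i++) {
		if (rsv[i].end < pfn)
			continue;		/* reserved block behind us */
		if (rsv[i].start > mem.end)
			break;			/* reserved block past our range */
		if (rsv[i].start > pfn)
			free_pfns += rsv[i].start - pfn; /* gap before it */
		pfn = rsv[i].end + 1;
	}
	if (pfn <= mem.end)
		free_pfns += mem.end - pfn + 1;	/* tail after last reserved block */
	return free_pfns;
}
```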

One little nit: this iterates everything. The decision to skip pages
below the deferral point (since their struct pages aren't initialized)
comes from the early_page_initialised() test inside
memblock_free_pages().

At this point, the page allocator is "live" and memblock is "dead"
(though the memblock data structures are still around, it is just not
supposed to be updated anymore).

5) Late freeing of memblock memory (EFI Boot Services and others)
-----------------------------------------------------------------

This is the result of something calling memblock_free_late() after the
above point.

Now, for the sake of this conversation, I assume this happens *before*
the deferred pages init. There could be cases where it happens after, I
haven't audited all callers of memblock_free_late(), I'm mostly
interested in what happens in efi_free_boot_services() and that happens
before.

We also assume we cannot trust the EFI memory map to contain only
things referencing usable memory. So we get called with stuff that may
or may not be backed by a struct page, and if it is, the struct page
may or may not be initialized.

I think we can assume that:

 * If pfn_valid() the struct page exists, otherwise it doesn't.

 * If it exists, then the struct page was initialized if (and only if)
it was marked reserved earlier. It doesn't matter if it sits in a hole
anymore at this point. If it was not marked reserved, the struct page
has also not been initialized if above the deferral point. We assume
that all those pages HAVE been marked reserved by
efi_reserve_boot_services() earlier, meaning they *are* initialized as
long as pfn_valid() is happy.

 * One thing I have NOT yet figured out ... do we have a problem if
the page is in a hole that lands outside of a zone boundary ? I
haven't really got my head deep down into the details of zone
initialization (especially as we adjust the boundaries here or there),
so this could be a problem.


99) Conclusion :-)
------------------

Nothing firm yet here but a few hints at what could possibly go wrong
and one obvious issue with the previous patch(es).

First the obvious ... the proposed patch that just makes
memblock_free_late() call free_reserved_page() is missing a call to
pfn_valid(). Without this, it can (and will) hit holes in the mem_map,
and that's probably one of the crashes I reported.

Now, it would be nice to then go allocate those missing bits of
mem_map, because I really don't want to give up on that memory. Small
instances are a thing and with the current price of DRAM, a fairly
relevant one :-) But I'll look at that later.

My original patch had the exact same issue btw.

The other potential issue, for which I welcome your input as I'm
running short on time for the day, is the impact on zones. I see a
possibility for those pages to be outside of any zone's
zone_start_pfn/spanned_pages range ... or not ? As I said, I haven't
yet got my head around the zone init and spanning adjustments that
happen, so I don't know if we really have potential "holes" here or
not.

This leads to the question... could we work around a lot of those
issues easily by making the early efi_reserve_boot_services() *also*
add the regions to memblock.memory in addition to memblock.reserved ?
Ie, those regions are marked as boot services code/data, so they must
be memory to begin with, and that's all early enough that we can do
it.

We should still add the missing pfn_valid() of course, if anything for
the sake of any other caller of memblock_free_late() ... or we could
change memblock_free_late() to only consider ranges that are both
reserved *and* in memblock.memory. You mentioned that might be slow
though.
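In toy form, the missing guard amounts to this (toy_ names are stand-ins; the real fix would go in memblock_free_late() itself):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static bool present[16];	/* toy section presence, 128 MiB sections */
static unsigned long freed;	/* pages we would hand to buddy */

static bool toy_pfn_valid(uint64_t pfn)
{
	return present[pfn >> 15];
}

/*
 * Shape of the proposed fix: the late free must not touch pfns that
 * have no backing struct page (sections that were never allocated).
 */
static void toy_free_late(uint64_t spfn, uint64_t epfn)
{
	for (uint64_t pfn = spfn; pfn < epfn; pfn++) {
		if (!toy_pfn_valid(pfn))	/* the guard the patch is missing */
			continue;
		freed++;			/* real code: free_reserved_page() */
	}
}
```

A range straddling a present and an absent section only frees the present half instead of walking off into unallocated mem_map.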

Opinions ?

Cheers,
Ben.


On Tue, 2026-02-10 at 16:32 +0200, Mike Rapoport wrote:
> Hi Ben,
> 
> On Tue, Feb 10, 2026 at 07:34:15PM +1100, Benjamin Herrenschmidt
> wrote:
> > On Tue, 2026-02-10 at 17:17 +1100, Benjamin Herrenschmidt wrote:
> > > 
> > > So ... that was a backport to 6.12.68 and my original patch is
> > > crashing
> > > the same way ! (it was working last week interestingly enough,
> > > something else got backported that gets in the way maybe ?).
> > > 
> > > I'm going to have to go back to digging :-(
> > > 
> > > I suspect the pages aren't reserved. I swear this was working :-)
> > 
> > So I rebuilt with a bit of extra debug prints, CONFIG_DEBUG_VM on,
> > and
> > memblock=debug ... it's not hitting the reserved check, but it's
> > also
> > not crashing the same way (still 6.12, I'll play with upstream
> > again
> > later):
> > 
> >  .../...
> 
> Do you mind sending the entire log?
>  
> > 
> > [    0.045633] Freeing SMP alternatives memory: 36K
> > [    0.045633] pid_max: default: 32768 minimum: 301
> > [    0.045633] memblock_free_late: [0x000000003d36b000-
> > 0x000000003d37bfff] efi_free_boot_services+0x11f/0x2e0
> > [    0.045633] memblock_free_late: [0x000000003b336000-
> > 0x000000003d36afff] efi_free_boot_services+0x11f/0x2e0
> > [    0.045633] memblock_free_late: [0x000000003b317000-
> > 0x000000003b335fff] efi_free_boot_services+0x11f/0x2e0
> > [    0.045633] memblock_free_late: [0x000000003b2f7000-
> > 0x000000003b316fff] efi_free_boot_services+0x11f/0x2e0
> > [    0.045633] memblock_free_late: [0x000000003b000000-
> > 0x000000003b1fffff] efi_free_boot_services+0x11f/0x2e0
> > [    0.045633] memblock_free_late: [0x00000000393de000-
> > 0x00000000393defff] efi_free_boot_services+0x11f/0x2e0
> > [    0.045633] memblock_free_late: [0x0000000038e73000-
> > 0x00000000390cdfff] efi_free_boot_services+0x11f/0x2e0
> > [    0.045633] LSM: initializing
> > lsm=lockdown,capability,landlock,yama,safesetid,selinux,bpf,ima
> > [    0.045633] landlock: Up and running.
> > [    0.045633] Yama: becoming mindful.
> > [    0.045633] SELinux:  Initializing.
> > [    0.045633] LSM support for eBPF active
> > [    0.045633] Mount-cache hash table entries: 2048 (order: 2,
> > 16384 bytes, linear)
> > [    0.045633] Mountpoint-cache hash table entries: 2048 (order: 2,
> > 16384 bytes, linear)
> > [    0.045633] smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU
> > @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
> > [    0.045633] Performance Events: unsupported p6 CPU model 85 no
> > PMU driver, software events only.
> > [    0.045633] signal: max sigframe size: 3632
> > [    0.045633] rcu: Hierarchical SRCU implementation.
> > [    0.045633] rcu: 	Max phase no-delay instances is 1000.
> > [    0.045633] Timer migration: 1 hierarchy levels; 8 children per
> > group; 1 crossnode level
> > [    0.045633] smp: Bringing up secondary CPUs ...
> > [    0.045633] smpboot: x86: Booting SMP configuration:
> > [    0.045633] .... node  #0, CPUs:      #1
> > [    0.045633] MDS CPU bug present and SMT on, data leak possible.
> > See
> > https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html
> >  for more details.
> > [    0.045633] MMIO Stale Data CPU bug present and SMT on, data
> > leak possible. See
> > https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html
> >  for more details.
> > [    0.045633] smp: Brought up 1 node, 2 CPUs
> > [    0.045633] smpboot: Total of 2 processors activated (9999.97
> > BogoMIPS)
> > [    0.045633] node 0 deferred pages initialised in 0ms
> > [    0.045633] Memory: 900460K/999468K available (16384K kernel
> > code, 9440K rwdata, 11364K rodata, 3740K init, 6440K bss, 94600K
> > reserved, 0K cma-reserved)
> > [    0.045633] devtmpfs: initialized
> > [    0.045633] x86/mm: Memory block size: 128MB
> > [    0.045633] ------------[ cut here ]------------
> > [    0.045633] page type is 1, passed migratetype is 0 (nr=16)
> > [    0.045633] WARNING: CPU: 1 PID: 2 at mm/page_alloc.c:721
> > rmqueue_bulk+0x82e/0x880
> > [    0.045633] Modules linked in:
> > [    0.045633] CPU: 1 UID: 0 PID: 2 Comm: kthreadd Not tainted
> > 6.12.68-93.123.amzn2023.x86_64 #1
> > [    0.045633] Hardware name: Amazon EC2 t3.micro/, BIOS 1.0
> > 10/16/2017
> > [    0.045633] RIP: 0010:rmqueue_bulk+0x82e/0x880
> > [    0.045633] Code: c6 05 be be 13 02 01 e8 b0 b5 ff ff 44 89 e9
> > 8b 14 24 48 c7 c7 a8 6d 51 8e 48 89 c6 b8 01 00 00 00 d3 e0 89 c1
> > e8 32 4f d2 ff <0f> 0b 4c 8b 44 24 48 e9 79 fc ff ff 48 c7 c6 e0 77
> > 51 8e 4c 89 e7
> > [    0.045633] RSP: 0000:ffffd592c002f898 EFLAGS: 00010086
> > [    0.045633] RAX: 0000000000000000 RBX: ffff8e363b2cbc80 RCX:
> > ffffffff8f1f0c68
> > [    0.045633] RDX: 0000000000000000 RSI: 00000000fffeffff RDI:
> > 0000000000000001
> > [    0.045633] RBP: fffffb9c40e3a408 R08: 0000000000000000 R09:
> > ffffd592c002f740
> > [    0.045633] R10: ffffd592c002f738 R11: ffffffff8f370ca8 R12:
> > fffffb9c40e3a400
> > [    0.045633] R13: 0000000000000004 R14: 0000000000000003 R15:
> > 0000000000038e90
> > [    0.045633] FS:  0000000000000000(0000)
> > GS:ffff8e3639f00000(0000) knlGS:0000000000000000
> > [    0.045633] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > [    0.045633] CR2: 0000000000000000 CR3: 000000001bc34001 CR4:
> > 00000000007706f0
> > [    0.045633] PKRU: 55555554
> > [    0.045633] Call Trace:
> > [    0.045633]  <TASK>
> > [    0.045633]  __rmqueue_pcplist+0x233/0x2c0
> > [    0.045633]  rmqueue.constprop.0+0x4b6/0xe80
> > [    0.045633]  ? _raw_spin_unlock+0xa/0x30
> > [    0.045633]  ? rmqueue.constprop.0+0x557/0xe80
> > [    0.045633]  ? _raw_spin_unlock_irqrestore+0xa/0x30
> > [    0.045633]  get_page_from_freelist+0x16e/0x5f0
> > [    0.045633]  __alloc_pages_noprof+0x18a/0x350
> > [    0.045633]  alloc_pages_mpol_noprof+0xf2/0x1e0
> > [    0.045633]  ? shuffle_freelist+0x126/0x1b0
> > [    0.045633]  allocate_slab+0x2b3/0x410
> > [    0.045633]  ___slab_alloc+0x396/0x830
> > [    0.045633]  ? switch_hrtimer_base+0x8e/0x190
> > [    0.045633]  ? timerqueue_add+0x9b/0xc0
> > [    0.045633]  ? dup_task_struct+0x2d/0x1b0
> > [    0.045633]  ? _raw_spin_unlock_irqrestore+0xa/0x30
> > [    0.045633]  ? start_dl_timer+0xb0/0x140
> > [    0.045633]  kmem_cache_alloc_node_noprof+0x271/0x2e0
> > [    0.045633]  ? dup_task_struct+0x2d/0x1b0
> > [    0.045633]  dup_task_struct+0x2d/0x1b0
> > [    0.045633]  copy_process+0x195/0x17e0
> > [    0.045633]  kernel_clone+0x9a/0x3b0
> > [    0.045633]  ? psi_task_switch+0x105/0x290
> > [    0.045633]  kernel_thread+0x6b/0x90
> > [    0.045633]  ? __pfx_kthread+0x10/0x10
> > [    0.045633]  kthreadd+0x276/0x2d0
> > [    0.045633]  ? __pfx_kthreadd+0x10/0x10
> > [    0.045633]  ret_from_fork+0x30/0x50
> > [    0.045633]  ? __pfx_kthreadd+0x10/0x10
> > [    0.045633]  ret_from_fork_asm+0x1a/0x30
> > [    0.045633]  </TASK>
> > [    0.045633] ---[ end trace 0000000000000000 ]---
> > [    0.045633] ------------[ cut here ]------------
> > [    0.045633] page type is 1, passed migratetype is 0 (nr=8)
> > [    0.045633] WARNING: CPU: 1 PID: 2 at mm/page_alloc.c:686
> > expand+0x1af/0x1e0
> > [    0.045633] Modules linked in:
> > [    0.045633] CPU: 1 UID: 0 PID: 2 Comm: kthreadd Tainted:
> > G        W          6.12.68-93.123.amzn2023.x86_64 #1
> > [    0.045633] Tainted: [W]=WARN
> > [    0.045633] Hardware name: Amazon EC2 t3.micro/, BIOS 1.0
> > 10/16/2017
> > [    0.045633] RIP: 0010:expand+0x1af/0x1e0
> > [    0.045633] Code: c6 05 af 06 14 02 01 e8 9f fd ff ff 89 e9 8b
> > 54 24 34 48 c7 c7 a8 6d 51 8e 48 89 c6 b8 01 00 00 00 d3 e0 89 c1
> > e8 21 97 d2 ff <0f> 0b e9 e5 fe ff ff 48 c7 c6 e0 6d 51 8e 4c 89 ff
> > e8 eb 23 fc ff
> > [    0.045633] RSP: 0000:ffffd592c002f828 EFLAGS: 00010082
> > [    0.045633] RAX: 0000000000000000 RBX: ffff8e363b2cbc80 RCX:
> > ffffffff8f1f0c68
> > [    0.045633] RDX: 0000000000000000 RSI: 00000000fffeffff RDI:
> > 0000000000000001
> > [    0.045633] RBP: 0000000000000003 R08: 0000000000000000 R09:
> > ffffd592c002f6d0
> > [    0.045633] R10: ffffd592c002f6c8 R11: ffffffff8f370ca8 R12:
> > 0000000000000008
> > [    0.045633] R13: 0000000000038e98 R14: 0000000000000003 R15:
> > fffffb9c40e3a600
> > [    0.045633] FS:  0000000000000000(0000)
> > GS:ffff8e3639f00000(0000) knlGS:0000000000000000
> > [    0.045633] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > [    0.045633] CR2: 0000000000000000 CR3: 000000001bc34001 CR4:
> > 00000000007706f0
> > [    0.045633] PKRU: 55555554
> > [    0.045633] Call Trace:
> > [    0.045633]  <TASK>
> > [    0.045633]  rmqueue_bulk+0x541/0x880
> > [    0.045633]  __rmqueue_pcplist+0x233/0x2c0
> > [    0.045633]  rmqueue.constprop.0+0x4b6/0xe80
> > [    0.045633]  ? _raw_spin_unlock+0xa/0x30
> > [    0.045633]  ? rmqueue.constprop.0+0x557/0xe80
> > [    0.045633]  ? _raw_spin_unlock_irqrestore+0xa/0x30
> > [    0.045633]  get_page_from_freelist+0x16e/0x5f0
> > [    0.045633]  __alloc_pages_noprof+0x18a/0x350
> > [    0.045633]  alloc_pages_mpol_noprof+0xf2/0x1e0
> > [    0.045633]  ? shuffle_freelist+0x126/0x1b0
> > [    0.045633]  allocate_slab+0x2b3/0x410
> > [    0.045633]  ___slab_alloc+0x396/0x830
> > [    0.045633]  ? switch_hrtimer_base+0x8e/0x190
> > [    0.045633]  ? timerqueue_add+0x9b/0xc0
> > [    0.045633]  ? dup_task_struct+0x2d/0x1b0
> > [    0.045633]  ? _raw_spin_unlock_irqrestore+0xa/0x30
> > [    0.045633]  ? start_dl_timer+0xb0/0x140
> > [    0.045633]  kmem_cache_alloc_node_noprof+0x271/0x2e0
> > [    0.045633]  ? dup_task_struct+0x2d/0x1b0
> > [    0.045633]  dup_task_struct+0x2d/0x1b0
> > [    0.045633]  copy_process+0x195/0x17e0
> > [    0.045633]  kernel_clone+0x9a/0x3b0
> > [    0.045633]  ? psi_task_switch+0x105/0x290
> > [    0.045633]  kernel_thread+0x6b/0x90
> > [    0.045633]  ? __pfx_kthread+0x10/0x10
> > [    0.045633]  kthreadd+0x276/0x2d0
> > [    0.045633]  ? __pfx_kthreadd+0x10/0x10
> > [    0.045633]  ret_from_fork+0x30/0x50
> > [    0.045633]  ? __pfx_kthreadd+0x10/0x10
> > [    0.045633]  ret_from_fork_asm+0x1a/0x30
> > [    0.045633]  </TASK>
> > [    0.045633] ---[ end trace 0000000000000000 ]---
> > 
> > > > 
> > > 
> > 
> 


