On 1/21/26 2:43 PM, Andrew Morton wrote:
> On Wed, 21 Jan 2026 14:10:36 -0500 Waiman Long wrote:
>
>> Commit 3acb913c9d5b ("mm/mm_init: use deferred_init_memmap_chunk()
>> in deferred_grow_zone()") made deferred_grow_zone() call
>> deferred_init_memmap_chunk() within a pgdat_resize_lock() critical
>> section with irqs disabled. It did check for irqs_disabled() in
>> deferred_init_memmap_chunk() to avoid calling cond_resched(). For a
>> PREEMPT_RT kernel build, however, spin_lock_irqsave() does not disable
>> interrupts but rcu_read_lock() is called. This leads to the following
>> bug report.
>>
>>  BUG: sleeping function called from invalid context at mm/mm_init.c:2091
>>  in_atomic(): 0, irqs_disabled(): 0, non_block: 0, pid: 1, name: swapper/0
>>  preempt_count: 0, expected: 0
>>  RCU nest depth: 1, expected: 0
>>  3 locks held by swapper/0/1:
>>   #0: ffff80008471b7a0 (sched_domains_mutex){+.+.}-{4:4}, at: sched_domains_mutex_lock+0x28/0x40
>>   #1: ffff003bdfffef48 (&pgdat->node_size_lock){+.+.}-{3:3}, at: deferred_grow_zone+0x140/0x278
>>   #2: ffff800084acf600 (rcu_read_lock){....}-{1:3}, at: rt_spin_lock+0x1b4/0x408
>>  CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Tainted: G W 6.19.0-rc6-test #1 PREEMPT_{RT,(full)}
>>  Tainted: [W]=WARN
>>  Call trace:
>>   show_stack+0x20/0x38 (C)
>>   dump_stack_lvl+0xdc/0xf8
>>   dump_stack+0x1c/0x28
>>   __might_resched+0x384/0x530
>>   deferred_init_memmap_chunk+0x560/0x688
>>   deferred_grow_zone+0x190/0x278
>>   _deferred_grow_zone+0x18/0x30
>>   get_page_from_freelist+0x780/0xf78
>>   __alloc_frozen_pages_noprof+0x1dc/0x348
>>   alloc_slab_page+0x30/0x110
>>   allocate_slab+0x98/0x2a0
>>   new_slab+0x4c/0x80
>>   ___slab_alloc+0x5a4/0x770
>>   __slab_alloc.constprop.0+0x88/0x1e0
>>   __kmalloc_node_noprof+0x2c0/0x598
>>   __sdt_alloc+0x3b8/0x728
>>   build_sched_domains+0xe0/0x1260
>>   sched_init_domains+0x14c/0x1c8
>>   sched_init_smp+0x9c/0x1d0
>>   kernel_init_freeable+0x218/0x358
>>   kernel_init+0x28/0x208
>>   ret_from_fork+0x10/0x20
>>
>> Fix it by checking rcu_preempt_depth() as well to prevent calling
>> cond_resched(). Note that CONFIG_PREEMPT_RCU should always be enabled
>> in a PREEMPT_RT kernel.
>>
>> ...
>>
>> --- a/mm/mm_init.c
>> +++ b/mm/mm_init.c
>> @@ -2085,7 +2085,12 @@ deferred_init_memmap_chunk(unsigned long start_pfn, unsigned long end_pfn,
>>  
>>  		spfn = chunk_end;
>>  
>> -		if (irqs_disabled())
>> +		/*
>> +		 * pgdat_resize_lock() only disables irqs in non-RT
>> +		 * kernels but calls rcu_read_lock() in a PREEMPT_RT
>> +		 * kernel.
>> +		 */
>> +		if (irqs_disabled() || rcu_preempt_depth())
>>  			touch_nmi_watchdog();
>
> rcu_preempt_depth() seems a fairly internal low-level thing - it's
> rarely used.

That is true. Besides the scheduler, the workqueue code also uses
rcu_preempt_depth(). This API is defined in "include/linux/rcupdate.h",
which is included directly or indirectly by many kernel files. So even
though it is rarely used, it is still a public API.

>
> Is there a more official way of detecting this condition? Maybe even
> #ifdef CONFIG_PREEMPT_RCU?
>

I am not aware of a more official way of detecting this. Maybe Sebastian
has some ideas.

rcu_preempt_depth() is defined whether or not CONFIG_PREEMPT_RCU is
enabled, so we don't need an "#ifdef CONFIG_PREEMPT_RCU". Maybe I should
explicitly include "include/linux/rcupdate.h" in mm/mm_init.c just to be
sure. CONFIG_PREEMPT_RCU defaults to on if PREEMPT_RT is set. With
!CONFIG_PREEMPT_RCU, rcu_preempt_depth() is hard-coded to 0 and the
check will be optimized out.

Cheers,
Longman
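
P.S. For anyone not used to the PREEMPT_RT lock substitution, here is a
simplified sketch of why irqs_disabled() alone does not catch this
caller. It paraphrases the relevant helpers rather than quoting them
verbatim:

/*
 * Paraphrase of pgdat_resize_lock() from include/linux/memory_hotplug.h:
 * a thin wrapper around spin_lock_irqsave() on node_size_lock.
 */
static inline
void pgdat_resize_lock(struct pglist_data *pgdat, unsigned long *flags)
{
	spin_lock_irqsave(&pgdat->node_size_lock, *flags);
}

/*
 * On !PREEMPT_RT, spin_lock_irqsave() really disables hardware
 * interrupts, so the existing irqs_disabled() test in
 * deferred_init_memmap_chunk() skips cond_resched() as intended.
 *
 * On PREEMPT_RT, spinlock_t is a sleeping rtmutex-based lock:
 * spin_lock_irqsave() leaves interrupts enabled and rt_spin_lock()
 * enters an RCU read-side critical section instead (see the
 * rt_spin_lock frame holding rcu_read_lock in the splat above).
 * Hence irqs_disabled() is false while rcu_preempt_depth() is non-zero.
 */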
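
And the rcu_preempt_depth() side (again a simplified paraphrase of
include/linux/rcupdate.h, not verbatim), which is why no #ifdef is
needed and the extra test costs nothing on !CONFIG_PREEMPT_RCU builds:

#ifdef CONFIG_PREEMPT_RCU
/* Current task's rcu_read_lock() nesting depth. */
#define rcu_preempt_depth()	READ_ONCE(current->rcu_read_lock_nesting)
#else
/* Constant 0, so "|| rcu_preempt_depth()" compiles away entirely. */
static inline int rcu_preempt_depth(void)
{
	return 0;
}
#endif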