From mboxrd@z Thu Jan 1 00:00:00 1970
d="scan'208";a="42819317" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by fmvoesa113.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Mar 2025 18:24:30 -0700 X-CSE-ConnectionGUID: DbN0uQkNRpeSpgiOBLvoZg== X-CSE-MsgGUID: qfJrijahQaOY85HhtQR5NQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.14,260,1736841600"; d="scan'208";a="122600501" Received: from ly-workstation.sh.intel.com (HELO ly-workstation) ([10.239.161.23]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Mar 2025 18:24:27 -0700 Date: Thu, 20 Mar 2025 09:24:51 +0800 From: "Lai, Yi" To: Luis Chamberlain Cc: Oliver Sang , David Hildenbrand , Alistair Popple , linux-mm@kvack.org, Christian Brauner , Hannes Reinecke , oe-lkp@lists.linux.dev, lkp@intel.com, "Matthew Wilcox (Oracle)" , John Garry , linux-block@vger.kernel.org, ltp@lists.linux.it, Pankaj Raghav , Daniel Gomez , yi1.lai@intel.com Subject: Re: [linux-next:master] [block/bdev] 3c20917120: BUG:sleeping_function_called_from_invalid_context_at_mm/util.c Message-ID: References: <202503101536.27099c77-lkp@intel.com> <20250311-testphasen-behelfen-09b950bbecbf@brauner> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-Rspam-User: X-Rspamd-Queue-Id: C762F100002 X-Rspamd-Server: rspam05 X-Stat-Signature: dfdm3ghip9uohik95ai963hy1u8iqiew X-HE-Tag: 1742433871-64503 X-HE-Meta: U2FsdGVkX18AZJrSMCGbXnuBlByEE+eBO/55FjCbw4mqkrjdSsFBxZ3HSb0mZnyj3MU+PPvvrO0TwPB3vRdtifZZahofPqQxxIVURFkcpiSFwrBUwq1suUYDGs9N0VP1rgThxDJT5qWwZNiqYGjUs5Z/48bVykkV42s8S3OCtbs5aZOEVDdMfESYuoByVP1pHWRx7HtMT+8DSCu6v0SXF//HDUgPPR1shhPUKo/SF2E4kj5Yr6nwEr22wpiZnyo/cddP56CgrtYJL+PZDXBB3Tc1ZwLPvnJ7EKrwYaZAS9A99FWiInR5yXVYW4xX0gQlNyOuC4bwtG60dheAoaTWKhFF9wZGM+mn79DN4ogOeplAIgsNirQCLrO7+jISN56eC+aHZ4ODyoikpi5hIk0VxCplgy7Vddw/3oWzCW26010/5570l+mj6IKqi7/aOdb5gVct7Qn6KbGJotIMA857RGDwTRq5mwFFpVXsIHfTnN0UVOZrzQG2UVNCrlFMYniu6YzGJCjwYPraWrPvnLMUa1Vw41CEg7idVg9thAYgKiBIovB0fs0g6KDa8rIKhj2mHyPnLcCfYqOgZwqldLgv9+H4RoHIULfjG0o0NgkyLRI/SM3ANOJkUveaOep3pqMJfz196MmL9n4DPkodc8CwK8HWbF+9Hhy6/sSdyZxYJjkMhKc02wfa/6kF4/PcK3Zn8EvEIkA1dOpn0kSdXmT/pCQX7LDvDA2Z9ZmMe+FnCRYNMDBUnEX3Gv7vEIAEbVLNareFTErsEKLtCDqFig5QbdL5X8FiMayg6qc9J0oXpzT92jpKvePciQ8V5A0gkGkHnuGz3MoUDzUo0xkL3W/of+Wlsenwu/f5XbVhynN9IVM6vUJd7+iqkazLz0VnJUt9p+0eSeIcJJXhw8kQgBCBL/FL7pffozly5rXsDaL0tJKoxGDsn+j6gG8mSK48Cdt2so00kcMdGyVhD5pzXUX 3J5Mj2vn xnCDFw2qhCYbuHIz++SlvWZixlEEH0AZemDg9M1Idof0B7IJEmmKEmTIygAH/ZuF5qdURn6DN1UL9RxfVs+SeDcnMq8+XSGgCe+xn1fzcE7N0cy0k5cWcp5ZRFGSdmjo1PpCcRtGJSH3YYcXpZC0Kh7LtvPScYOFjPoDo2wnAPg/UmJq1aRz54HkWG6KzxlEfhm7NrmSYdVlAhh80PgwnBZQaWgh51iUgmbS3nIUDL167f7COZBOHQbYO3UIafLG8eEeVHOjTsAcM1CoyhMWmG2KSoSVtLBJo448lpyA9e/N7D8Y= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: On Tue, Mar 18, 2025 at 01:15:33AM -0700, Luis Chamberlain wrote: > On Tue, Mar 18, 2025 at 01:28:20PM +0800, Oliver Sang wrote: > > hi, Christian Brauner, > > > > On Tue, Mar 11, 2025 at 01:10:43PM +0100, Christian Brauner wrote: > > > On Mon, Mar 10, 2025 at 03:43:49PM +0800, kernel test robot wrote: > > > > > > > > > > > > Hello, > > > > > > > > kernel test robot noticed "BUG:sleeping_function_called_from_invalid_context_at_mm/util.c" on: > > > > > > > > commit: 3c20917120ce61f2a123ca0810293872f4c6b5a4 ("block/bdev: enable large folio support for large logical block sizes") > > > > https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git 
> > > 
> > > Is this also already fixed by:
> > > 
> > > commit a64e5a596067 ("bdev: add back PAGE_SIZE block size validation for sb_set_blocksize()")
> > > 
> > > ?
> > 
> > sorry for late.
> > 
> > commit a64e5a596067 cannot fix the issue. one dmesg is attached FYI.
> > 
> > we also tried to check linux-next/master tip, but neither below one can boot
> > successfully in our env which we need further check.
> > 
> > da920b7df70177 (tag: next-20250314, linux-next/master) Add linux-next specific files for 20250314
> > 
> > e94bd4ec45ac1 (tag: next-20250317, linux-next/master) Add linux-next specific files for 20250317
> > 
> > so we are not sure the status of latest linux-next/master.
> > 
> > if you want us to check other commit or other patches, please let us know. thanks!
> 
> I cannot reproduce the issue by running the LTP test manually in a loop
> for a long time:
> 
> export LTP_RUNTIME_MUL=2
> 
> while true; do \
> 	./testcases/kernel/syscalls/close_range/close_range01; done
> 
> What's the failure rate of just running the test alone above?
> Does it always fail on this system? Is this a deterministic failure
> or does it have a lower failure rate?
> 

Hi Luis,

Greetings!

I used Syzkaller and found that this issue can also be reproduced with the
Syzkaller reproducer binary. All detailed info can be found at:
https://github.com/laifryiee/syzkaller_logs/tree/main/250320_033346_folio_mc_copy

Syzkaller repro code:
https://github.com/laifryiee/syzkaller_logs/tree/main/250320_033346_folio_mc_copy/repro.c
Syzkaller repro syscall steps:
https://github.com/laifryiee/syzkaller_logs/tree/main/250320_033346_folio_mc_copy/repro.prog
Syzkaller report:
https://github.com/laifryiee/syzkaller_logs/tree/main/250320_033346_folio_mc_copy/repro.report
Kconfig (make olddefconfig):
https://github.com/laifryiee/syzkaller_logs/tree/main/250320_033346_folio_mc_copy/kconfig_origin
Bisect info:
https://github.com/laifryiee/syzkaller_logs/tree/main/250320_033346_folio_mc_copy/bisect_info.log
bzImage:
https://github.com/laifryiee/syzkaller_logs/raw/refs/heads/main/250320_033346_folio_mc_copy/bzImage_e94bd4ec45ac156616da285a0bf03056cd7430fc
Issue dmesg:
https://github.com/laifryiee/syzkaller_logs/blob/main/250320_033346_folio_mc_copy/e94bd4ec45ac156616da285a0bf03056cd7430fc_dmesg.log

After bisection, the first bad commit is:
"
3c20917120ce block/bdev: enable large folio support for large logical block sizes
"

"
[ 23.399326] dump_stack+0x19/0x20
[ 23.399332] __might_resched+0x37b/0x5a0
[ 23.399345] ? __kasan_check_read+0x15/0x20
[ 23.399354] folio_mc_copy+0x111/0x240
[ 23.399368] __migrate_folio.constprop.0+0x173/0x3c0
[ 23.399377] __buffer_migrate_folio+0x6a2/0x7b0
[ 23.399389] buffer_migrate_folio_norefs+0x3d/0x50
[ 23.399398] move_to_new_folio+0x153/0x5b0
[ 23.399403] ? __pfx_buffer_migrate_folio_norefs+0x10/0x10
[ 23.399412] migrate_pages_batch+0x19e0/0x2890
[ 23.399424] ? __pfx_compaction_free+0x10/0x10
[ 23.399444] ? __pfx_migrate_pages_batch+0x10/0x10
[ 23.399450] ? __kasan_check_read+0x15/0x20
[ 23.399455] ? __lock_acquire+0xdb6/0x5d60
[ 23.399475] ? __pfx___lock_acquire+0x10/0x10
[ 23.399486] migrate_pages+0x18de/0x2450
[ 23.399500] ? __pfx_compaction_free+0x10/0x10
[ 23.399505] ? __pfx_compaction_alloc+0x10/0x10
[ 23.399514] ? __pfx_migrate_pages+0x10/0x10
[ 23.399519] ? __this_cpu_preempt_check+0x21/0x30
[ 23.399533] ? rcu_is_watching+0x19/0xc0
[ 23.399546] ? isolate_migratepages_block+0x2253/0x41c0
[ 23.399565] ? __pfx_isolate_migratepages_block+0x10/0x10
[ 23.399578] compact_zone+0x1d66/0x4480
[ 23.399600] ? perf_trace_lock+0xe0/0x4f0
[ 23.399612] ? __pfx_compact_zone+0x10/0x10
[ 23.399617] ? __pfx_perf_trace_lock+0x10/0x10
[ 23.399627] ? __pfx_lock_acquire+0x10/0x10
[ 23.399639] compact_node+0x190/0x2c0
[ 23.399647] ? __pfx_compact_node+0x10/0x10
[ 23.399653] ? __pfx_lock_release+0x10/0x10
[ 23.399678] ? _raw_spin_unlock_irqrestore+0x45/0x70
[ 23.399694] kcompactd+0x784/0xde0
[ 23.399705] ? __pfx_kcompactd+0x10/0x10
[ 23.399711] ? lockdep_hardirqs_on+0x89/0x110
[ 23.399721] ? __pfx_autoremove_wake_function+0x10/0x10
[ 23.399731] ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
[ 23.399742] ? __kthread_parkme+0x15d/0x230
[ 23.399753] ? __pfx_kcompactd+0x10/0x10
[ 23.399761] kthread+0x444/0x980
[ 23.399769] ? __pfx_kthread+0x10/0x10
[ 23.399776] ? _raw_spin_unlock_irq+0x3c/0x60
[ 23.399784] ? __pfx_kthread+0x10/0x10
[ 23.399792] ret_from_fork+0x56/0x90
[ 23.399802] ? __pfx_kthread+0x10/0x10
[ 23.399809] ret_from_fork_asm+0x1a/0x30
[ 23.399827]
"

Hope this could be insightful to you.

Regards,
Yi Lai

---

If you don't need the following environment to reproduce the problem, or if you
already have a reproduction environment, please ignore the following information.

How to reproduce:
git clone https://gitlab.com/xupengfe/repro_vm_env.git
cd repro_vm_env
tar -xvf repro_vm_env.tar.gz
cd repro_vm_env; ./start3.sh  // it needs qemu-system-x86_64 and I used v7.1.0
  // start3.sh will load bzImage_2241ab53cbb5cdb08a6b2d4688feb13971058f65 v6.2-rc5 kernel
  // You could change the bzImage_xxx as you want
  // Maybe you need to remove line "-drive if=pflash,format=raw,readonly=on,file=./OVMF_CODE.fd \" for different qemu version
You can use the command below to log in; there is no password for root.
ssh -p 10023 root@localhost

After logging in to the VM successfully, you can transfer the reproducer binary
to the VM as below and reproduce the problem there:
gcc -pthread -o repro repro.c
scp -P 10023 repro root@localhost:/root/

Get the bzImage for the target kernel:
Please use the target kconfig and copy it to kernel_src/.config
make olddefconfig
make -jx bzImage    // x should be equal to or less than the number of CPUs your PC has

Fill the bzImage file into the above start3.sh to load the target kernel in the VM.

Tips:
If you already have qemu-system-x86_64, please ignore the info below.
If you want to install qemu v7.1.0:
git clone https://github.com/qemu/qemu.git
cd qemu
git checkout -f v7.1.0
mkdir build
cd build
yum install -y ninja-build.x86_64
yum -y install libslirp-devel.x86_64
../configure --target-list=x86_64-softmmu --enable-kvm --enable-vnc --enable-gtk --enable-sdl --enable-usb-redir --enable-slirp
make
make install

> I also can't see how the patch ("block/bdev: enable large folio
> support for large logical block sizes") would trigger this.
> 
> You could try this patch but ...
> 
> https://lore.kernel.org/all/20250312050028.1784117-1-mcgrof@kernel.org/
> 
> we decided this is not right and not needed, and if we have a buggy
> block driver we can address that.
> 
> I just can't see how this LTP test actually doing anything funky with block
> devices at all.
> 
> The associated sleeping while atomic warning is triggered during
> compaction though:
> 
> [ 218.143642][ T299] Architecture: x86_64
> [ 218.143659][ T299]
> [ 218.427851][ T51] BUG: sleeping function called from invalid context at mm/util.c:901
> [ 218.435981][ T51] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 51, name: kcompactd0
> [ 218.444773][ T51] preempt_count: 1, expected: 0
> [ 218.449601][ T51] RCU nest depth: 0, expected: 0
> [ 218.454476][ T51] CPU: 2 UID: 0 PID: 51 Comm: kcompactd0 Tainted: G S 6.14.0-rc1-00006-g3c20917120ce #1
> [ 218.454486][ T51] Tainted: [S]=CPU_OUT_OF_SPEC
> [ 218.454488][ T51] Hardware name: Hewlett-Packard HP Pro 3340 MT/17A1, BIOS 8.07 01/24/2013
> [ 218.454492][ T51] Call Trace:
> [ 218.454495][ T51]
> [ 218.454498][ T51] dump_stack_lvl+0x4f/0x70
> [ 218.454508][ T51] __might_resched+0x2c6/0x450
> [ 218.454517][ T51] folio_mc_copy+0xca/0x1f0
> [ 218.454525][ T51] ? _raw_spin_lock+0x81/0xe0
> [ 218.454532][ T51] __migrate_folio+0x11a/0x2d0
> [ 218.454541][ T51] __buffer_migrate_folio+0x558/0x660
> [ 218.454548][ T51] move_to_new_folio+0xf5/0x410
> [ 218.454555][ T51] migrate_folio_move+0x211/0x770
> [ 218.454562][ T51] ? __pfx_compaction_free+0x10/0x10
> [ 218.454572][ T51] ? __pfx_migrate_folio_move+0x10/0x10
> [ 218.454578][ T51] ? compaction_alloc_noprof+0x441/0x720
> [ 218.454587][ T51] ? __pfx_compaction_alloc+0x10/0x10
> [ 218.454594][ T51] ? __pfx_compaction_free+0x10/0x10
> [ 218.454601][ T51] ? __pfx_compaction_free+0x10/0x10
> [ 218.454607][ T51] ? migrate_folio_unmap+0x329/0x890
> [ 218.454614][ T51] migrate_pages_batch+0xddf/0x1810
> [ 218.454621][ T51] ? __pfx_compaction_free+0x10/0x10
> [ 218.454631][ T51] ? __pfx_migrate_pages_batch+0x10/0x10
> [ 218.454638][ T51] ? cgroup_rstat_updated+0xf1/0x860
> [ 218.454648][ T51] migrate_pages_sync+0x10c/0x8e0
> [ 218.454656][ T51] ? __pfx_compaction_alloc+0x10/0x10
> [ 218.454662][ T51] ? __pfx_compaction_free+0x10/0x10
> [ 218.454669][ T51] ? lru_gen_del_folio+0x383/0x820
> [ 218.454677][ T51] ? __pfx_migrate_pages_sync+0x10/0x10
> [ 218.454683][ T51] ? set_pfnblock_flags_mask+0x179/0x220
> [ 218.454691][ T51] ? __pfx_lru_gen_del_folio+0x10/0x10
> [ 218.454699][ T51] ? __pfx_compaction_alloc+0x10/0x10
> [ 218.454705][ T51] ? __pfx_compaction_free+0x10/0x10
> [ 218.454713][ T51] migrate_pages+0x846/0xe30
> [ 218.454720][ T51] ? __pfx_compaction_alloc+0x10/0x10
> [ 218.454726][ T51] ? __pfx_compaction_free+0x10/0x10
> [ 218.454733][ T51] ? __pfx_buffer_migrate_folio_norefs+0x10/0x10
> [ 218.454740][ T51] ? __pfx_migrate_pages+0x10/0x10
> [ 218.454748][ T51] ? isolate_migratepages+0x32d/0xbd0
> [ 218.454757][ T51] compact_zone+0x9e1/0x1680
> [ 218.454767][ T51] ? __pfx_compact_zone+0x10/0x10
> [ 218.454774][ T51] ? _raw_spin_lock_irqsave+0x87/0xe0
> [ 218.454780][ T51] ? __pfx__raw_spin_lock_irqsave+0x10/0x10
> [ 218.454788][ T51] compact_node+0x159/0x250
> [ 218.454795][ T51] ? __pfx_compact_node+0x10/0x10
> [ 218.454807][ T51] ? __pfx_extfrag_for_order+0x10/0x10
> [ 218.454814][ T51] ? __pfx_mutex_unlock+0x10/0x10
> [ 218.454822][ T51] ? finish_wait+0xd1/0x280
> [ 218.454831][ T51] kcompactd+0x582/0x960
> [ 218.454839][ T51] ? __pfx_kcompactd+0x10/0x10
> [ 218.454846][ T51] ? _raw_spin_lock_irqsave+0x87/0xe0
> [ 218.454852][ T51] ? __pfx__raw_spin_lock_irqsave+0x10/0x10
> [ 218.454858][ T51] ? __pfx_autoremove_wake_function+0x10/0x10
> [ 218.454867][ T51] ? __kthread_parkme+0xba/0x1e0
> [ 218.454874][ T51] ? __pfx_kcompactd+0x10/0x10
> [ 218.454880][ T51] kthread+0x3a1/0x770
> [ 218.454887][ T51] ? __pfx_kthread+0x10/0x10
> [ 218.454895][ T51] ? __pfx_kthread+0x10/0x10
> [ 218.454902][ T51] ret_from_fork+0x30/0x70
> [ 218.454910][ T51] ? __pfx_kthread+0x10/0x10
> [ 218.454915][ T51] ret_from_fork_asm+0x1a/0x30
> [ 218.454924][ T51]
> 
> So the only thing I can think of the patch which the patch can do is
> push more large folios to be used and so compaction can be a secondary
> effect which managed to trigger another mm issue. I know there was a
> recent migration fix but I can't see the relationship at all either.
> 
> Luis
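
Since both warnings fire from kcompactd during compaction, forcing compaction
from userspace while the reproducer runs may make the window easier to hit.
A minimal sketch, assuming CONFIG_COMPACTION=y and root in the test VM (the
loop and interval are only illustrative; I have not measured whether this
actually changes the failure rate):

#!/bin/sh
# Illustrative only: keep triggering full memory compaction so that folio
# migration of bdev folios is exercised continuously. The trace shows
# folio_mc_copy() being reached from __buffer_migrate_folio() with
# preempt_count == 1, so a large-folio copy on that path would trip the
# might_resched() check reported at mm/util.c:901.
while true; do
	echo 1 > /proc/sys/vm/compact_memory	# compact all zones on all nodes
	sleep 1
done

Running this alongside ./repro (or the LTP close_range01 loop) in the target
VM is just a guess at widening the race window, not a verified improvement.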