From mboxrd@z Thu Jan 1 00:00:00 1970
From: Zhaoyang Huang <huangzhaoyang@gmail.com>
Date: Mon, 27 May 2024 16:53:18 +0800
Subject: Re: [PATCH 1/1] mm: protect xa split stuff under lruvec->lru_lock during migration
To: Marcin Wanat
Cc: Dave Chinner, Andrew Morton, "zhaoyang.huang", Alex Shi, "Kirill A. Shutemov", Hugh Dickins, Baolin Wang, linux-mm@kvack.org, linux-kernel@vger.kernel.org, steve.kang@unisoc.com
In-Reply-To: <5f989315-e380-46aa-80d1-ce8608889e5f@marcinwanat.pl>
References: <20240412064353.133497-1-zhaoyang.huang@unisoc.com> <20240412143457.5c6c0ae8f6df0f647d7cf0be@linux-foundation.org> <2652f0c1-acc9-4288-8bca-c95ee49aa562@marcinwanat.pl> <5f989315-e380-46aa-80d1-ce8608889e5f@marcinwanat.pl>

On Mon, May 27, 2024 at 4:22 PM Marcin Wanat wrote:
>
> On 22.05.2024 12:13, Marcin Wanat wrote:
> > On 22.05.2024 07:37, Zhaoyang Huang wrote:
> >> On Tue, May 21, 2024 at 11:47 PM Marcin Wanat wrote:
> >>>
> >>> On 21.05.2024 03:00, Zhaoyang Huang wrote:
> >>>> On Tue, May 21, 2024 at 8:58 AM Zhaoyang Huang wrote:
> >>>>>
> >>>>> On Tue, May 21, 2024 at 3:42 AM Marcin Wanat wrote:
> >>>>>>
> >>>>>> On 15.04.2024 03:50, Zhaoyang Huang wrote:
> >>>>>> I have around 50 hosts handling high I/O (each with 20Gbps+ uplinks
> >>>>>> and multiple NVMe drives), running RockyLinux 8/9. The stock RHEL
> >>>>>> kernel 8/9 is NOT affected, and the long-term kernel 5.15.X is NOT
> >>>>>> affected. However, with long-term kernels 6.1.XX and 6.6.XX
> >>>>>> (tested at least 10 different versions), this lockup always appears
> >>>>>> after 2-30 days, similar to the report in the original thread.
> >>>>>> The more load (for example, copying a lot of local files while
> >>>>>> serving 20Gbps traffic), the higher the chance that the bug will
> >>>>>> appear.
> >>>>>>
> >>>>>> I haven't been able to reproduce this during synthetic tests,
> >>>>>> but it always occurs in production on 6.1.X and 6.6.X within 2-30
> >>>>>> days. If anyone can provide a patch, I can test it on multiple
> >>>>>> machines over the next few days.
> >>>>> Could you please try this one, which can be applied on 6.6
> >>>>> directly. Thank you!
> >>>> URL: https://lore.kernel.org/linux-mm/20240412064353.133497-1-zhaoyang.huang@unisoc.com/
> >>>
> >>> Unfortunately, I am unable to cleanly apply this patch against the
> >>> latest 6.6.31.
> >> Please try the one below, which works on my v6.6-based Android. Thank
> >> you for your test in advance :D
> >>
> >>  mm/huge_memory.c | 22 ++++++++++++++--------
> >>  1 file changed, 14 insertions(+), 8 deletions(-)
> >>
> >> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> >
> > I have compiled 6.6.31 with this patch and will test it on multiple
> > machines over the next 30 days. I will provide an update after 30 days
> > if everything is fine, or sooner if any of the hosts experience the
> > same soft lockup again.
> >
> First server with 6.6.31 and this patch hung today.
> Soft lockup changed to hard lockup:
>
> [26887.389623] watchdog: Watchdog detected hard LOCKUP on cpu 21
> [26887.389626] Modules linked in: nft_limit xt_limit xt_hashlimit
> ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_connlimit
> nf_conncount tls xt_set ip_set_hash_net ip_set xt_CT xt_conntrack
> nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nft_compat nf_tables
> nfnetlink rfkill intel_rapl_msr intel_rapl_common intel_uncore_frequency
> intel_uncore_frequency_common isst_if_common skx_edac nfit libnvdimm
> x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass
> rapl intel_cstate ipmi_ssif irdma ext4 mbcache ice iTCO_wdt jbd2 mgag200
> intel_pmc_bxt iTCO_vendor_support ib_uverbs i2c_algo_bit acpi_ipmi
> intel_uncore mei_me drm_shmem_helper pcspkr ib_core i2c_i801 ipmi_si
> drm_kms_helper mei lpc_ich i2c_smbus ioatdma intel_pch_thermal
> ipmi_devintf ipmi_msghandler acpi_pad acpi_power_meter joydev tcp_bbr
> drm fuse xfs libcrc32c sd_mod t10_pi sg crct10dif_pclmul crc32_pclmul
> crc32c_intel ixgbe polyval_clmulni ahci polyval_generic libahci mdio
> i40e libata megaraid_sas dca ghash_clmulni_intel wmi
> [26887.389682] CPU: 21 PID: 264 Comm: kswapd0 Kdump: loaded Tainted: G W 6.6.31.el9 #3
> [26887.389685] Hardware name: FUJITSU PRIMERGY RX2540 M4/D3384-A1, BIOS V5.0.0.12 R1.22.0 for D3384-A1x 06/04/2018
> [26887.389687] RIP: 0010:native_queued_spin_lock_slowpath+0x6e/0x2c0
> [26887.389696] Code: 08 0f 92 c2 8b 45 00 0f b6 d2 c1 e2 08 30 e4 09 d0 a9 00 01 ff ff 0f 85 ea 01 00 00 85 c0 74 12 0f b6 45 00 84 c0 74 0a f3 90 <0f> b6 45 00 84 c0 75 f6 b8 01 00 00 00 66 89 45 00 5b 5d 41 5c 41
> [26887.389698] RSP: 0018:ffffb3e587a87a20 EFLAGS: 00000002
> [26887.389700] RAX: 0000000000000001 RBX: ffff9ad6c6f67050 RCX: 0000000000000000
> [26887.389701] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff9ad6c6f67050
> [26887.389703] RBP: ffff9ad6c6f67050 R08: 0000000000000000 R09: 0000000000000067
> [26887.389704] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000046
> [26887.389705] R13: 0000000000000200 R14: 0000000000000000 R15: ffffe1138aa98000
> [26887.389707] FS: 0000000000000000(0000) GS:ffff9ade20340000(0000) knlGS:0000000000000000
> [26887.389708] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [26887.389710] CR2: 000000002912809b CR3: 000000064401e003 CR4: 00000000007706e0
> [26887.389711] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> [26887.389712] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> [26887.389713] PKRU: 55555554
> [26887.389714] Call Trace:
> [26887.389717]
> [26887.389720] ? watchdog_hardlockup_check+0xac/0x150
> [26887.389725] ? __perf_event_overflow+0x102/0x1d0
> [26887.389729] ? handle_pmi_common+0x189/0x3e0
> [26887.389735] ? set_pte_vaddr_p4d+0x4a/0x60
> [26887.389738] ? flush_tlb_one_kernel+0xa/0x20
> [26887.389742] ? native_set_fixmap+0x65/0x80
> [26887.389745] ? ghes_copy_tofrom_phys+0x75/0x110
> [26887.389751] ? __ghes_peek_estatus.isra.0+0x49/0xb0
> [26887.389755] ? intel_pmu_handle_irq+0x10b/0x230
> [26887.389756] ? perf_event_nmi_handler+0x28/0x50
> [26887.389759] ? nmi_handle+0x58/0x150
> [26887.389764] ? native_queued_spin_lock_slowpath+0x6e/0x2c0
> [26887.389768] ? default_do_nmi+0x6b/0x170
> [26887.389770] ? exc_nmi+0x12c/0x1a0
> [26887.389772] ? end_repeat_nmi+0x16/0x1f
> [26887.389777] ? native_queued_spin_lock_slowpath+0x6e/0x2c0
> [26887.389780] ? native_queued_spin_lock_slowpath+0x6e/0x2c0
> [26887.389784] ? native_queued_spin_lock_slowpath+0x6e/0x2c0
> [26887.389787]
> [26887.389788]
> [26887.389789] __raw_spin_lock_irqsave+0x3d/0x50
> [26887.389793] folio_lruvec_lock_irqsave+0x5e/0x90
> [26887.389798] __page_cache_release+0x68/0x230
> [26887.389801] ? remove_migration_ptes+0x5c/0x80
> [26887.389807] __folio_put+0x24/0x60
> [26887.389808] __split_huge_page+0x368/0x520
> [26887.389812] split_huge_page_to_list+0x4b3/0x570
> [26887.389816] deferred_split_scan+0x1c8/0x290
> [26887.389819] do_shrink_slab+0x12f/0x2d0
> [26887.389824] shrink_slab_memcg+0x133/0x1d0
> [26887.389829] shrink_node_memcgs+0x18e/0x1d0
> [26887.389832] shrink_node+0xa7/0x370
> [26887.389836] balance_pgdat+0x332/0x6f0
> [26887.389842] kswapd+0xf0/0x190
> [26887.389845] ? balance_pgdat+0x6f0/0x6f0
> [26887.389848] kthread+0xee/0x120
> [26887.389851] ? kthread_complete_and_exit+0x20/0x20
> [26887.389853] ret_from_fork+0x2d/0x50
> [26887.389857] ? kthread_complete_and_exit+0x20/0x20
> [26887.389859] ret_from_fork_asm+0x11/0x20
> [26887.389864]
> [26887.389865] Kernel panic - not syncing: Hard LOCKUP
> [26887.389867] CPU: 21 PID: 264 Comm: kswapd0 Kdump: loaded Tainted: G W 6.6.31.el9 #3
> [26887.389869] Hardware name: FUJITSU PRIMERGY RX2540 M4/D3384-A1, BIOS V5.0.0.12 R1.22.0 for D3384-A1x 06/04/2018
> [26887.389870] Call Trace:
> [26887.389871]
> [26887.389872] dump_stack_lvl+0x44/0x60
> [26887.389877] panic+0x241/0x330
> [26887.389881] nmi_panic+0x2f/0x40
> [26887.389883] watchdog_hardlockup_check+0x119/0x150
> [26887.389886] __perf_event_overflow+0x102/0x1d0
> [26887.389889] handle_pmi_common+0x189/0x3e0
> [26887.389893] ? set_pte_vaddr_p4d+0x4a/0x60
> [26887.389896] ? flush_tlb_one_kernel+0xa/0x20
> [26887.389899] ? native_set_fixmap+0x65/0x80
> [26887.389902] ? ghes_copy_tofrom_phys+0x75/0x110
> [26887.389906] ? __ghes_peek_estatus.isra.0+0x49/0xb0
> [26887.389909] intel_pmu_handle_irq+0x10b/0x230
> [26887.389911] perf_event_nmi_handler+0x28/0x50
> [26887.389913] nmi_handle+0x58/0x150
> [26887.389916] ? native_queued_spin_lock_slowpath+0x6e/0x2c0
> [26887.389920] default_do_nmi+0x6b/0x170
> [26887.389922] exc_nmi+0x12c/0x1a0
> [26887.389923] end_repeat_nmi+0x16/0x1f
> [26887.389926] RIP: 0010:native_queued_spin_lock_slowpath+0x6e/0x2c0
> [26887.389930] Code: 08 0f 92 c2 8b 45 00 0f b6 d2 c1 e2 08 30 e4 09 d0 a9 00 01 ff ff 0f 85 ea 01 00 00 85 c0 74 12 0f b6 45 00 84 c0 74 0a f3 90 <0f> b6 45 00 84 c0 75 f6 b8 01 00 00 00 66 89 45 00 5b 5d 41 5c 41
> [26887.389931] RSP: 0018:ffffb3e587a87a20 EFLAGS: 00000002
> [26887.389933] RAX: 0000000000000001 RBX: ffff9ad6c6f67050 RCX: 0000000000000000
> [26887.389934] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff9ad6c6f67050
> [26887.389935] RBP: ffff9ad6c6f67050 R08: 0000000000000000 R09: 0000000000000067
> [26887.389936] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000046
> [26887.389937] R13: 0000000000000200 R14: 0000000000000000 R15: ffffe1138aa98000
> [26887.389940] ? native_queued_spin_lock_slowpath+0x6e/0x2c0
> [26887.389943] ? native_queued_spin_lock_slowpath+0x6e/0x2c0
> [26887.389946]
> [26887.389947]
> [26887.389947] __raw_spin_lock_irqsave+0x3d/0x50
> [26887.389950] folio_lruvec_lock_irqsave+0x5e/0x90
> [26887.389953] __page_cache_release+0x68/0x230
> [26887.389955] ? remove_migration_ptes+0x5c/0x80
> [26887.389958] __folio_put+0x24/0x60
> [26887.389960] __split_huge_page+0x368/0x520
> [26887.389963] split_huge_page_to_list+0x4b3/0x570
> [26887.389967] deferred_split_scan+0x1c8/0x290
> [26887.389971] do_shrink_slab+0x12f/0x2d0
> [26887.389974] shrink_slab_memcg+0x133/0x1d0
> [26887.389978] shrink_node_memcgs+0x18e/0x1d0
> [26887.389982] shrink_node+0xa7/0x370
> [26887.389985] balance_pgdat+0x332/0x6f0
> [26887.389991] kswapd+0xf0/0x190
> [26887.389994] ? balance_pgdat+0x6f0/0x6f0
> [26887.389997] kthread+0xee/0x120
> [26887.389998] ? kthread_complete_and_exit+0x20/0x20
> [26887.390000] ret_from_fork+0x2d/0x50
> [26887.390003] ? kthread_complete_and_exit+0x20/0x20
> [26887.390004] ret_from_fork_asm+0x11/0x20
> [26887.390009]
>
OK, thanks for the information. That should be caused by lock contention. I will check the code and keep you posted.