From: Alan Huang <mmpgouride@gmail.com>
To: Kent Overstreet
Cc: linux-bcachefs@vger.kernel.org, syzbot+fe63f377148a6371a9db@syzkaller.appspotmail.com,
 linux-mm@kvack.org, Tejun Heo, Dennis Zhou, Christoph Lameter
Subject: Re: [PATCH] bcachefs: Use alloc_percpu_gfp to avoid deadlock
Date: Thu, 20 Feb 2025 18:57:32 +0800
Message-Id: <25FBAAE5-8BC6-41F3-9A6D-65911BA5A5D7@gmail.com>
References: <20250212100625.55860-1-mmpgouride@gmail.com>

Ping

(A sketch of the reported lock cycle and of the caller-visible effect of the
change is appended below the quoted report.)

> On Feb 12, 2025, at 22:27, Kent Overstreet wrote:
> 
> Adding pcpu people to the CC
> 
> On Wed, Feb 12, 2025 at 06:06:25PM +0800, Alan Huang wrote:
>> The cycle:
>> 
>> CPU0:               CPU1:
>> bc->lock            pcpu_alloc_mutex
>> pcpu_alloc_mutex    bc->lock
>> 
>> Reported-by: syzbot+fe63f377148a6371a9db@syzkaller.appspotmail.com
>> Tested-by: syzbot+fe63f377148a6371a9db@syzkaller.appspotmail.com
>> Signed-off-by: Alan Huang
> 
> So pcpu_alloc_mutex -> fs_reclaim?
> 
> That's really awkward; seems like something that might invite more
> issues. We can apply your fix if we need to, but I want to hear what the
> percpu people have to say first.
> 
> ======================================================
> WARNING: possible circular locking dependency detected
> 6.14.0-rc2-syzkaller-00039-g09fbf3d50205 #0 Not tainted
> ------------------------------------------------------
> syz.0.21/5625 is trying to acquire lock:
> ffffffff8ea19608 (pcpu_alloc_mutex){+.+.}-{4:4}, at: pcpu_alloc_noprof+0x293/0x1760 mm/percpu.c:1782
> 
> but task is already holding lock:
> ffff888051401c68 (&bc->lock){+.+.}-{4:4}, at: bch2_btree_node_mem_alloc+0x559/0x16f0 fs/bcachefs/btree_cache.c:804
> 
> which lock already depends on the new lock.
> 
> 
> the existing dependency chain (in reverse order) is:
> 
> -> #2 (&bc->lock){+.+.}-{4:4}:
>        lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5851
>        __mutex_lock_common kernel/locking/mutex.c:585 [inline]
>        __mutex_lock+0x19c/0x1010 kernel/locking/mutex.c:730
>        bch2_btree_cache_scan+0x184/0xec0 fs/bcachefs/btree_cache.c:482
>        do_shrink_slab+0x72d/0x1160 mm/shrinker.c:437
>        shrink_slab+0x1093/0x14d0 mm/shrinker.c:664
>        shrink_one+0x43b/0x850 mm/vmscan.c:4868
>        shrink_many mm/vmscan.c:4929 [inline]
>        lru_gen_shrink_node mm/vmscan.c:5007 [inline]
>        shrink_node+0x37c5/0x3e50 mm/vmscan.c:5978
>        kswapd_shrink_node mm/vmscan.c:6807 [inline]
>        balance_pgdat mm/vmscan.c:6999 [inline]
>        kswapd+0x20f3/0x3b10 mm/vmscan.c:7264
>        kthread+0x7a9/0x920 kernel/kthread.c:464
>        ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
>        ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
> 
> -> #1 (fs_reclaim){+.+.}-{0:0}:
>        lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5851
>        __fs_reclaim_acquire mm/page_alloc.c:3853 [inline]
>        fs_reclaim_acquire+0x88/0x130 mm/page_alloc.c:3867
>        might_alloc include/linux/sched/mm.h:318 [inline]
>        slab_pre_alloc_hook mm/slub.c:4066 [inline]
>        slab_alloc_node mm/slub.c:4144 [inline]
>        __do_kmalloc_node mm/slub.c:4293 [inline]
>        __kmalloc_noprof+0xae/0x4c0 mm/slub.c:4306
>        kmalloc_noprof include/linux/slab.h:905 [inline]
>        kzalloc_noprof include/linux/slab.h:1037 [inline]
>        pcpu_mem_zalloc mm/percpu.c:510 [inline]
>        pcpu_alloc_chunk mm/percpu.c:1430 [inline]
>        pcpu_create_chunk+0x57/0xbc0 mm/percpu-vm.c:338
>        pcpu_balance_populated mm/percpu.c:2063 [inline]
>        pcpu_balance_workfn+0xc4d/0xd40 mm/percpu.c:2200
>        process_one_work kernel/workqueue.c:3236 [inline]
>        process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3317
>        worker_thread+0x870/0xd30 kernel/workqueue.c:3398
>        kthread+0x7a9/0x920 kernel/kthread.c:464
>        ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
>        ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
> 
> -> #0 (pcpu_alloc_mutex){+.+.}-{4:4}:
>        check_prev_add kernel/locking/lockdep.c:3163 [inline]
>        check_prevs_add kernel/locking/lockdep.c:3282 [inline]
>        validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3906
>        __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5228
>        lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5851
>        __mutex_lock_common kernel/locking/mutex.c:585 [inline]
>        __mutex_lock+0x19c/0x1010 kernel/locking/mutex.c:730
>        pcpu_alloc_noprof+0x293/0x1760 mm/percpu.c:1782
>        __six_lock_init+0x104/0x150 fs/bcachefs/six.c:876
>        bch2_btree_lock_init+0x38/0x100 fs/bcachefs/btree_locking.c:12
>        bch2_btree_node_mem_alloc+0x565/0x16f0 fs/bcachefs/btree_cache.c:807
>        __bch2_btree_node_alloc fs/bcachefs/btree_update_interior.c:304 [inline]
>        bch2_btree_reserve_get+0x2df/0x1890 fs/bcachefs/btree_update_interior.c:532
>        bch2_btree_update_start+0xe56/0x14e0 fs/bcachefs/btree_update_interior.c:1230
>        bch2_btree_split_leaf+0x121/0x880 fs/bcachefs/btree_update_interior.c:1851
>        bch2_trans_commit_error+0x212/0x1380 fs/bcachefs/btree_trans_commit.c:908
>        __bch2_trans_commit+0x812b/0x97a0 fs/bcachefs/btree_trans_commit.c:1085
>        bch2_trans_commit fs/bcachefs/btree_update.h:183 [inline]
>        bch2_trans_mark_metadata_bucket+0x47a/0x17b0 fs/bcachefs/buckets.c:1043
>        bch2_trans_mark_metadata_sectors fs/bcachefs/buckets.c:1060 [inline]
>        __bch2_trans_mark_dev_sb fs/bcachefs/buckets.c:1100 [inline]
>        bch2_trans_mark_dev_sb+0x3f6/0x820 fs/bcachefs/buckets.c:1128
>        bch2_trans_mark_dev_sbs_flags+0x6be/0x720 fs/bcachefs/buckets.c:1138
>        bch2_fs_initialize+0xba0/0x1610 fs/bcachefs/recovery.c:1149
>        bch2_fs_start+0x36d/0x610 fs/bcachefs/super.c:1042
>        bch2_fs_get_tree+0xd8d/0x1740 fs/bcachefs/fs.c:2203
>        vfs_get_tree+0x90/0x2b0 fs/super.c:1814
>        do_new_mount+0x2be/0xb40 fs/namespace.c:3560
>        do_mount fs/namespace.c:3900 [inline]
>        __do_sys_mount fs/namespace.c:4111 [inline]
>        __se_sys_mount+0x2d6/0x3c0 fs/namespace.c:4088
>        do_syscall_x64 arch/x86/entry/common.c:52 [inline]
>        do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
>        entry_SYSCALL_64_after_hwframe+0x77/0x7f
> 
> other info that might help us debug this:
> 
> Chain exists of:
>   pcpu_alloc_mutex --> fs_reclaim --> &bc->lock
> 
> Possible unsafe locking scenario:
> 
>        CPU0                    CPU1
>        ----                    ----
>   lock(&bc->lock);
>                                lock(fs_reclaim);
>                                lock(&bc->lock);
>   lock(pcpu_alloc_mutex);
> 
> *** DEADLOCK ***
> 
> 4 locks held by syz.0.21/5625:
> #0: ffff888051400278 (&c->state_lock){+.+.}-{4:4}, at: bch2_fs_start+0x45/0x610 fs/bcachefs/super.c:1010
> #1: ffff888051404378 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:164 [inline]
> #1: ffff888051404378 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:256 [inline]
> #1: ffff888051404378 (&c->btree_trans_barrier){.+.+}-{0:0}, at: __bch2_trans_get+0x7e4/0xd30 fs/bcachefs/btree_iter.c:3377
> #2: ffff8880514266d0 (&c->gc_lock){.+.+}-{4:4}, at: bch2_btree_update_start+0x682/0x14e0 fs/bcachefs/btree_update_interior.c:1180
> #3: ffff888051401c68 (&bc->lock){+.+.}-{4:4}, at: bch2_btree_node_mem_alloc+0x559/0x16f0 fs/bcachefs/btree_cache.c:804
> 
> stack backtrace:
> CPU: 0 UID: 0 PID: 5625 Comm: syz.0.21 Not tainted 6.14.0-rc2-syzkaller-00039-g09fbf3d50205 #0
> Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
> Call Trace:
> 
> __dump_stack lib/dump_stack.c:94 [inline]
> dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
> print_circular_bug+0x13a/0x1b0 kernel/locking/lockdep.c:2076
> check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2208
> check_prev_add kernel/locking/lockdep.c:3163 [inline]
> check_prevs_add kernel/locking/lockdep.c:3282 [inline]
> validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3906
> __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5228
> lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5851
> __mutex_lock_common kernel/locking/mutex.c:585 [inline]
> __mutex_lock+0x19c/0x1010 kernel/locking/mutex.c:730
> pcpu_alloc_noprof+0x293/0x1760 mm/percpu.c:1782
> __six_lock_init+0x104/0x150 fs/bcachefs/six.c:876
> bch2_btree_lock_init+0x38/0x100 fs/bcachefs/btree_locking.c:12
> bch2_btree_node_mem_alloc+0x565/0x16f0 fs/bcachefs/btree_cache.c:807
> __bch2_btree_node_alloc fs/bcachefs/btree_update_interior.c:304 [inline]
> bch2_btree_reserve_get+0x2df/0x1890 fs/bcachefs/btree_update_interior.c:532
> bch2_btree_update_start+0xe56/0x14e0 fs/bcachefs/btree_update_interior.c:1230
> bch2_btree_split_leaf+0x121/0x880 fs/bcachefs/btree_update_interior.c:1851
> bch2_trans_commit_error+0x212/0x1380 fs/bcachefs/btree_trans_commit.c:908
> __bch2_trans_commit+0x812b/0x97a0 fs/bcachefs/btree_trans_commit.c:1085
> bch2_trans_commit fs/bcachefs/btree_update.h:183 [inline]
> bch2_trans_mark_metadata_bucket+0x47a/0x17b0 fs/bcachefs/buckets.c:1043
> bch2_trans_mark_metadata_sectors fs/bcachefs/buckets.c:1060 [inline]
> __bch2_trans_mark_dev_sb fs/bcachefs/buckets.c:1100 [inline]
> bch2_trans_mark_dev_sb+0x3f6/0x820 fs/bcachefs/buckets.c:1128
> bch2_trans_mark_dev_sbs_flags+0x6be/0x720 fs/bcachefs/buckets.c:1138
> bch2_fs_initialize+0xba0/0x1610 fs/bcachefs/recovery.c:1149
> bch2_fs_start+0x36d/0x610 fs/bcachefs/super.c:1042
> bch2_fs_get_tree+0xd8d/0x1740 fs/bcachefs/fs.c:2203
> vfs_get_tree+0x90/0x2b0 fs/super.c:1814
> do_new_mount+0x2be/0xb40 fs/namespace.c:3560
> do_mount fs/namespace.c:3900 [inline]
> __do_sys_mount fs/namespace.c:4111 [inline]
> __se_sys_mount+0x2d6/0x3c0 fs/namespace.c:4088
> do_syscall_x64 arch/x86/entry/common.c:52 [inline]
> do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
> RIP: 0033:0x7fcaed38e58a
> Code: d8 64 89 02 48 c7 c0 ff ff ff ff eb a6 e8 de 1a 00 00 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
> RSP: 002b:00007fcaec5fde68 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
> RAX: ffffffffffffffda RBX: 00007fcaec5fdef0 RCX: 00007fcaed38e58a
> RDX: 00004000000000c0 RSI: 0000400000000180 RDI: 00007fcaec5fdeb0
> RBP: 00004000000000c0 R08: 00007fcaec5fdef0 R09: 0000000000000000
> 
>> ---
>> fs/bcachefs/six.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>> 
>> diff --git a/fs/bcachefs/six.c b/fs/bcachefs/six.c
>> index 7e7c66a1e1a6..ccdc6d496910 100644
>> --- a/fs/bcachefs/six.c
>> +++ b/fs/bcachefs/six.c
>> @@ -873,7 +873,7 @@ void __six_lock_init(struct six_lock *lock, const char *name,
>>  	 * failure if they wish by checking lock->readers, but generally
>>  	 * will not want to treat it as an error.
>>  	 */
>> -	lock->readers = alloc_percpu(unsigned);
>> +	lock->readers = alloc_percpu_gfp(unsigned, GFP_NOWAIT|__GFP_NOWARN);
>>  	}
>> #endif
>>  }
>> -- 
>> 2.47.0
>> 
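
To make the quoted report easier to follow: the three edges lockdep found are
bc->lock -> pcpu_alloc_mutex (btree node allocation calls alloc_percpu() while
holding the btree cache lock), pcpu_alloc_mutex -> fs_reclaim (the percpu
balance work does GFP_KERNEL allocations under that mutex), and
fs_reclaim -> bc->lock (the btree cache shrinker runs from reclaim). The
userspace sketch below is only an analogy -- fs_reclaim is a lockdep
annotation rather than a real mutex, and the thread bodies are stand-ins for
the call chains above -- but with this timing it hangs in the same circular
wait:

/* Build and run with: cc -pthread cycle_demo.c -o cycle_demo && ./cycle_demo
 * (the program deadlocks instead of printing). */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Stand-ins for the three locks in the report. */
static pthread_mutex_t bc_lock        = PTHREAD_MUTEX_INITIALIZER; /* &bc->lock */
static pthread_mutex_t fs_reclaim     = PTHREAD_MUTEX_INITIALIZER; /* fs_reclaim */
static pthread_mutex_t pcpu_alloc_mtx = PTHREAD_MUTEX_INITIALIZER; /* pcpu_alloc_mutex */

/* Mount path: holds bc->lock, then calls into the percpu allocator. */
static void *mount_path(void *arg)
{
	pthread_mutex_lock(&bc_lock);          /* bch2_btree_node_mem_alloc() */
	sleep(1);                              /* widen the race window */
	pthread_mutex_lock(&pcpu_alloc_mtx);   /* alloc_percpu() in __six_lock_init() */
	pthread_mutex_unlock(&pcpu_alloc_mtx);
	pthread_mutex_unlock(&bc_lock);
	return NULL;
}

/* Percpu balance work: holds pcpu_alloc_mutex, then a blocking allocation
 * may enter reclaim. */
static void *pcpu_balance(void *arg)
{
	pthread_mutex_lock(&pcpu_alloc_mtx);   /* pcpu_balance_workfn() */
	sleep(1);
	pthread_mutex_lock(&fs_reclaim);       /* kzalloc(GFP_KERNEL) */
	pthread_mutex_unlock(&fs_reclaim);
	pthread_mutex_unlock(&pcpu_alloc_mtx);
	return NULL;
}

/* Reclaim: holds fs_reclaim, then the shrinker walks the btree cache. */
static void *reclaim(void *arg)
{
	pthread_mutex_lock(&fs_reclaim);       /* kswapd / direct reclaim */
	sleep(1);
	pthread_mutex_lock(&bc_lock);          /* bch2_btree_cache_scan() */
	pthread_mutex_unlock(&bc_lock);
	pthread_mutex_unlock(&fs_reclaim);
	return NULL;
}

int main(void)
{
	pthread_t a, b, c;

	pthread_create(&a, NULL, mount_path, NULL);
	pthread_create(&b, NULL, pcpu_balance, NULL);
	pthread_create(&c, NULL, reclaim, NULL);

	pthread_join(a, NULL);   /* with this timing, never returns */
	pthread_join(b, NULL);
	pthread_join(c, NULL);
	puts("no deadlock this run");
	return 0;
}

Breaking any one of the three edges is enough to break the cycle; the patch
targets the first one at its bcachefs call site.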
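On the fix itself: with GFP_NOWAIT|__GFP_NOWARN the percpu allocation can
fail, and, per the comment in the quoted hunk, lock->readers then stays NULL
and the six lock falls back to its non-percpu reader path. If I read
mm/percpu.c correctly, a non-blocking request is also served from
already-populated chunks under the pcpu_lock spinlock, without taking
pcpu_alloc_mutex or entering reclaim, which is what removes the
bc->lock -> pcpu_alloc_mutex edge. A rough sketch of that caller-visible
pattern, using a made-up struct foo rather than the real struct six_lock:

#include <linux/percpu.h>
#include <linux/gfp.h>
#include <linux/atomic.h>

/* Hypothetical example, not bcachefs code: a per-CPU fast-path counter that
 * is treated as optional because it is allocated with GFP_NOWAIT. */
struct foo {
	unsigned int __percpu	*readers;	/* NULL => percpu allocation failed */
	atomic_t		fallback;	/* shared slow-path counter */
};

static void foo_init(struct foo *f)
{
	/* May run with locks held that the shrinkers also take, so the
	 * allocation must not block or recurse into reclaim. */
	f->readers = alloc_percpu_gfp(unsigned int, GFP_NOWAIT | __GFP_NOWARN);
	atomic_set(&f->fallback, 0);
}

static void foo_get(struct foo *f)
{
	if (f->readers)
		this_cpu_inc(*f->readers);	/* per-CPU fast path */
	else
		atomic_inc(&f->fallback);	/* degraded but still correct */
}

static void foo_exit(struct foo *f)
{
	free_percpu(f->readers);		/* free_percpu(NULL) is a no-op */
}

The trade-off is that a six lock initialized under memory pressure will run
without the per-CPU reader optimization for its whole lifetime, which the
existing comment in six.c says callers generally should not treat as an error.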