From: Vlastimil Babka
To: linux-mm@kvack.org
Cc: Vlastimil Babka, Sachin Sant, Bharata B Rao, Srikar Dronamraju,
    Mel Gorman, Michael Ellerman, Michal Hocko, Christopher Lameter,
    linuxppc-dev@lists.ozlabs.org, Joonsoo Kim, Pekka Enberg,
    David Rientjes, Kirill Tkhai, Nathan Lynch
Subject: [RFC 1/2] mm, slub: prevent kmalloc_node crashes and memory leaks
Date: Wed, 18 Mar 2020 15:42:19 +0100
Message-Id: <20200318144220.18083-1-vbabka@suse.cz>

Sachin reports [1] a crash in SLUB __slab_alloc():

BUG: Kernel NULL pointer dereference on read at 0x000073b0
Faulting instruction address: 0xc0000000003d55f4
Oops: Kernel access of bad area, sig: 11 [#1]
LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
Modules linked in:
CPU: 19 PID: 1 Comm: systemd Not tainted 5.6.0-rc2-next-20200218-autotest #1
NIP: c0000000003d55f4 LR: c0000000003d5b94 CTR: 0000000000000000
REGS: c0000008b37836d0 TRAP: 0300  Not tainted  (5.6.0-rc2-next-20200218-autotest)
MSR: 8000000000009033  CR: 24004844  XER: 00000000
CFAR: c00000000000dec4 DAR: 00000000000073b0 DSISR: 40000000 IRQMASK: 1
GPR00: c0000000003d5b94 c0000008b3783960 c00000000155d400 c0000008b301f500
GPR04: 0000000000000dc0 0000000000000002 c0000000003443d8 c0000008bb398620
GPR08: 00000008ba2f0000 0000000000000001 0000000000000000 0000000000000000
GPR12: 0000000024004844 c00000001ec52a00 0000000000000000 0000000000000000
GPR16: c0000008a1b20048 c000000001595898 c000000001750c18 0000000000000002
GPR20: c000000001750c28 c000000001624470 0000000fffffffe0 5deadbeef0000122
GPR24: 0000000000000001 0000000000000dc0 0000000000000002 c0000000003443d8
GPR28: c0000008b301f500 c0000008bb398620 0000000000000000 c00c000002287180
NIP [c0000000003d55f4] ___slab_alloc+0x1f4/0x760
LR [c0000000003d5b94] __slab_alloc+0x34/0x60
Call Trace:
[c0000008b3783960] [c0000000003d5734] ___slab_alloc+0x334/0x760 (unreliable)
[c0000008b3783a40] [c0000000003d5b94] __slab_alloc+0x34/0x60
[c0000008b3783a70] [c0000000003d6fa0] __kmalloc_node+0x110/0x490
[c0000008b3783af0] [c0000000003443d8] kvmalloc_node+0x58/0x110
[c0000008b3783b30] [c0000000003fee38] mem_cgroup_css_online+0x108/0x270
[c0000008b3783b90] [c000000000235aa8] online_css+0x48/0xd0
[c0000008b3783bc0] [c00000000023eaec] cgroup_apply_control_enable+0x2ec/0x4d0
[c0000008b3783ca0] [c000000000242318] cgroup_mkdir+0x228/0x5f0
[c0000008b3783d10] [c00000000051e170] kernfs_iop_mkdir+0x90/0xf0
[c0000008b3783d50] [c00000000043dc00] vfs_mkdir+0x110/0x230
[c0000008b3783da0] [c000000000441c90] do_mkdirat+0xb0/0x1a0
[c0000008b3783e20] [c00000000000b278] system_call+0x5c/0x68

This is a PowerPC platform with the following NUMA topology:

available: 2 nodes (0-1)
node 0 cpus:
node 0 size: 0 MB
node 0 free: 0 MB
node 1 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
node 1 size: 35247 MB
node 1 free: 30907 MB
node distances:
node   0   1
  0:  10  40
  1:  40  10

possible numa nodes: 0-31

This only happens with an mmotm patch "mm/memcontrol.c: allocate
shrinker_map on appropriate NUMA node" [2], which effectively calls
kmalloc_node() for each possible node. SLUB, however, allocates
kmem_cache_node only on online nodes with present memory, and relies on
node_to_mem_node() to return such a valid node for the other nodes, since
commit a561ce00b09e ("slub: fall back to node_to_mem_node() node if
allocating on memoryless node"). That does not hold in this configuration,
where the _node_numa_mem_ array is not initialized for nodes 0 and 2-31;
it therefore contains zeroes, and get_partial() ends up accessing a
non-allocated kmem_cache_node.

A related issue was reported by Bharata [3], where a similar PowerPC
configuration, but without patch [2], ends up allocating large amounts of
pages from kmalloc-1k and kmalloc-512. This seems to have the same
underlying cause, node_to_mem_node() not behaving as expected, and might
also lead to an infinite loop with CONFIG_SLUB_CPU_PARTIAL [4].

This patch should fix both issues by no longer relying on
node_to_mem_node() and instead simply falling back to NUMA_NO_NODE when
kmalloc_node(node) is attempted for a node that is not online or has no
present pages. Also, in case alloc_slab_page() is reached with a
non-online node, fall back there as well, until we have a guarantee that
all possible nodes have valid NODE_DATA with a proper zonelist for
fallback.
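For illustration, the fallback policy described above boils down to the
following check. This is a minimal userspace sketch, not kernel code:
node_online() and node_present_pages() are stand-in stubs modelling the
reported topology (only node 1 online with memory), and effective_node()
is a hypothetical helper name, not a function in mm/slub.c.

#include <stdio.h>

#define NUMA_NO_NODE	(-1)
#define MAX_NUMNODES	32

/* Stubs modelling the reported topology: only node 1 is online with memory. */
static int node_online(int node)		{ return node == 1; }
static int node_present_pages(int node)		{ return node == 1; }

/*
 * The policy this patch adopts: a node-constrained allocation request for
 * a node that is not online or has no present pages is downgraded to
 * NUMA_NO_NODE, i.e. the node constraint is simply ignored.
 */
static int effective_node(int node)
{
	if (node != NUMA_NO_NODE &&
	    (!node_online(node) || !node_present_pages(node)))
		return NUMA_NO_NODE;
	return node;
}

int main(void)
{
	int node;

	for (node = 0; node < MAX_NUMNODES; node++)
		printf("kmalloc_node(..., %2d) -> %s\n", node,
		       effective_node(node) == node ?
		       "node kept" : "NUMA_NO_NODE fallback");
	return 0;
}

On the topology above, this keeps the constraint only for node 1 and
ignores it for nodes 0 and 2-31, which is exactly what the hunks below do
in get_partial(), ___slab_alloc() and alloc_slab_page().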
[1] https://lore.kernel.org/linux-next/3381CD91-AB3D-4773-BA04-E7A072A63968@linux.vnet.ibm.com/
[2] https://lore.kernel.org/linux-mm/fff0e636-4c36-ed10-281c-8cdb0687c839@virtuozzo.com/
[3] https://lore.kernel.org/linux-mm/20200317092624.GB22538@in.ibm.com/
[4] https://lore.kernel.org/linux-mm/088b5996-faae-8a56-ef9c-5b567125ae54@suse.cz/

Reported-by: Sachin Sant
Reported-by: Bharata B Rao
Debugged-by: Srikar Dronamraju
Signed-off-by: Vlastimil Babka
Cc: Mel Gorman
Cc: Michael Ellerman
Cc: Michal Hocko
Cc: Christopher Lameter
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Joonsoo Kim
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Kirill Tkhai
Cc: Vlastimil Babka
Cc: Nathan Lynch
---
Hi, this is my alternative to the series [1] for solving the SLUB issues.
Could Sachin and Bharata please test whether it fixes them?

1) on plain mainline (Bharata) or -next (Sachin)
2) the same, but with [PATCH 0/3] "Offline memoryless cpuless node 0" [2]
   applied, as I assume that series was not related to the SLUB issues
   and will be kept?

Thanks!

[1] https://lore.kernel.org/linux-mm/20200318072810.9735-1-srikar@linux.vnet.ibm.com/
[2] https://lore.kernel.org/linux-mm/20200311110237.5731-1-srikar@linux.vnet.ibm.com/

 mm/slub.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 17dc00e33115..4d798cacdae1 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1511,7 +1511,7 @@ static inline struct page *alloc_slab_page(struct kmem_cache *s,
 	struct page *page;
 	unsigned int order = oo_order(oo);
 
-	if (node == NUMA_NO_NODE)
+	if (node == NUMA_NO_NODE || !node_online(node))
 		page = alloc_pages(flags, order);
 	else
 		page = __alloc_pages_node(node, flags, order);
@@ -1973,8 +1973,6 @@ static void *get_partial(struct kmem_cache *s, gfp_t flags, int node,
 
 	if (node == NUMA_NO_NODE)
 		searchnode = numa_mem_id();
-	else if (!node_present_pages(node))
-		searchnode = node_to_mem_node(node);
 
 	object = get_partial_node(s, get_node(s, searchnode), c, flags);
 	if (object || node != NUMA_NO_NODE)
@@ -2568,12 +2566,15 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 redo:
 
 	if (unlikely(!node_match(page, node))) {
-		int searchnode = node;
-
-		if (node != NUMA_NO_NODE && !node_present_pages(node))
-			searchnode = node_to_mem_node(node);
-
-		if (unlikely(!node_match(page, searchnode))) {
+		/*
+		 * node_match() false implies node != NUMA_NO_NODE
+		 * but if the node is not online or has no pages, just
+		 * ignore the constraint
+		 */
+		if ((!node_online(node) || !node_present_pages(node))) {
+			node = NUMA_NO_NODE;
+			goto redo;
+		} else {
 			stat(s, ALLOC_NODE_MISMATCH);
 			deactivate_slab(s, page, c->freelist, c);
 			goto new_slab;
-- 
2.25.1
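P.S. For reviewers who want to see the pre-patch failure mode in
isolation, here is a minimal userspace model of it. All names are
stand-ins for the kernel structures, not the real SLUB code: get_node()
returns NULL for a node whose kmem_cache_node was never allocated, and
the zero-filled _node_numa_mem_ stand-in steers the lookup to exactly
such a node, so the final dereference segfaults, deliberately, mirroring
the oops at the top of this mail.

#include <stdio.h>
#include <stdlib.h>

#define MAX_NUMNODES	32

/* Stand-in for struct kmem_cache_node; SLUB allocates one only for
 * online nodes with memory, so here only node 1 gets one. */
struct kmem_cache_node {
	unsigned long nr_partial;
};

static struct kmem_cache_node *node_caches[MAX_NUMNODES];

/* Stand-in for _node_numa_mem_: initialized only for node 1, so it
 * reads as 0 for nodes 0 and 2-31, the broken state described above. */
static int node_numa_mem[MAX_NUMNODES];

static struct kmem_cache_node *get_node(int node)
{
	return node_caches[node];
}

int main(void)
{
	node_caches[1] = calloc(1, sizeof(struct kmem_cache_node));
	node_numa_mem[1] = 1;

	/* kmalloc_node(..., 2): pre-patch get_partial() trusted this. */
	int searchnode = node_numa_mem[2];	/* node_to_mem_node(2) == 0 */
	struct kmem_cache_node *n = get_node(searchnode);

	printf("searchnode=%d, kmem_cache_node=%p\n", searchnode, (void *)n);
	/* n is NULL here; dereferencing it reproduces the crash pattern: */
	printf("nr_partial=%lu\n", n->nr_partial);
	return 0;
}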