Subject: Re: [PATCH 2/4] mm/slub: Use mem_node to allocate a new slab
To: Srikar Dronamraju, Andrew Morton
Cc: linux-mm@kvack.org, Mel Gorman, Michael Ellerman, Sachin Sant,
 Michal Hocko, Christopher Lameter, linuxppc-dev@lists.ozlabs.org,
 Joonsoo Kim, Kirill Tkhai, Bharata B Rao
References: <3381CD91-AB3D-4773-BA04-E7A072A63968@linux.vnet.ibm.com>
 <20200317131753.4074-1-srikar@linux.vnet.ibm.com>
 <20200317131753.4074-3-srikar@linux.vnet.ibm.com>
From: Vlastimil Babka
Date: Tue, 17 Mar 2020 14:34:25 +0100
In-Reply-To: <20200317131753.4074-3-srikar@linux.vnet.ibm.com>

On 3/17/20 2:17 PM, Srikar Dronamraju wrote:
> Currently while allocating a slab for a offline node, we use its
> associated node_numa_mem to search for a partial slab. If we don't find
> a partial slab, we try allocating a slab from the offline node using
> __alloc_pages_node. However this is bound to fail.
> 
> NIP [c00000000039a300] __alloc_pages_nodemask+0x130/0x3b0
> LR [c00000000039a3c4] __alloc_pages_nodemask+0x1f4/0x3b0
> Call Trace:
> [c0000008b36837f0] [c00000000039a3b4] __alloc_pages_nodemask+0x1e4/0x3b0 (unreliable)
> [c0000008b3683870] [c0000000003d1ff8] new_slab+0x128/0xcf0
> [c0000008b3683950] [c0000000003d6060] ___slab_alloc+0x410/0x820
> [c0000008b3683a40] [c0000000003d64a4] __slab_alloc+0x34/0x60
> [c0000008b3683a70] [c0000000003d78b0] __kmalloc_node+0x110/0x490
> [c0000008b3683af0] [c000000000343a08] kvmalloc_node+0x58/0x110
> [c0000008b3683b30] [c0000000003ffd44] mem_cgroup_css_online+0x104/0x270
> [c0000008b3683b90] [c000000000234e08] online_css+0x48/0xd0
> [c0000008b3683bc0] [c00000000023dedc] cgroup_apply_control_enable+0x2ec/0x4d0
> [c0000008b3683ca0] [c0000000002416f8] cgroup_mkdir+0x228/0x5f0
> [c0000008b3683d10] [c000000000520360] kernfs_iop_mkdir+0x90/0xf0
> [c0000008b3683d50] [c00000000043e400] vfs_mkdir+0x110/0x230
> [c0000008b3683da0] [c000000000441ee0] do_mkdirat+0xb0/0x1a0
> [c0000008b3683e20] [c00000000000b278] system_call+0x5c/0x68
> 
> Mitigate this by allocating the new slab from the node_numa_mem.

Are you sure this is really needed and the other 3 patches are not enough
for the current SLUB code to work as needed? It seems you are changing the
semantics here...

> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1970,14 +1970,8 @@ static void *get_partial(struct kmem_cache *s, gfp_t flags, int node,
>  		struct kmem_cache_cpu *c)
>  {
>  	void *object;
> -	int searchnode = node;
> 
> -	if (node == NUMA_NO_NODE)
> -		searchnode = numa_mem_id();
> -	else if (!node_present_pages(node))
> -		searchnode = node_to_mem_node(node);
> -
> -	object = get_partial_node(s, get_node(s, searchnode), c, flags);
> +	object = get_partial_node(s, get_node(s, node), c, flags);
>  	if (object || node != NUMA_NO_NODE)
>  		return object;
> 
>  	return get_any_partial(s, flags, c);

I.e. here in this if(), now node will never equal NUMA_NO_NODE (thanks to
the hunk below), thus the get_any_partial() call becomes dead code? (A
standalone sketch of the resulting flow is appended at the end of this
mail.)

> @@ -2470,6 +2464,11 @@ static inline void *new_slab_objects(struct kmem_cache *s, gfp_t flags,
> 
>  	WARN_ON_ONCE(s->ctor && (flags & __GFP_ZERO));
> 
> +	if (node == NUMA_NO_NODE)
> +		node = numa_mem_id();
> +	else if (!node_present_pages(node))
> +		node = node_to_mem_node(node);
> +
>  	freelist = get_partial(s, flags, node, c);
> 
>  	if (freelist)
> @@ -2569,12 +2568,10 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>  redo:
> 
>  	if (unlikely(!node_match(page, node))) {
> -		int searchnode = node;
> -
>  		if (node != NUMA_NO_NODE && !node_present_pages(node))
> -			node = node_to_mem_node(node);
> +			node = node_to_mem_node(node);
> 
> -		if (unlikely(!node_match(page, searchnode))) {
> +		if (unlikely(!node_match(page, node))) {
>  			stat(s, ALLOC_NODE_MISMATCH);
>  			deactivate_slab(s, page, c->freelist, c);
>  			goto new_slab;
> 
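
To make the dead-code concern concrete, below is a minimal standalone
userspace model of the flow after this patch. It is not the real SLUB
code: every function here is a simplified stand-in, the node numbers are
made up, and it only mirrors the control flow of the two patched paths.
The point it illustrates is that once new_slab_objects() normalizes the
node up front, get_partial() never sees NUMA_NO_NODE, so its
get_any_partial() fallback cannot be reached from this path.

/* Simplified model, not kernel code. Build with: gcc -Wall sketch.c */
#include <stdbool.h>
#include <stdio.h>

#define NUMA_NO_NODE (-1)

/* Stand-ins: pretend the CPU sits on node 0 and only node 0 has memory. */
static int numa_mem_id(void)             { return 0; }
static int node_to_mem_node(int node)    { return 0; }
static bool node_present_pages(int node) { return node == 0; }

/* Assume no partial slab is found, to exercise the fallback path. */
static void *get_partial_node(int node)  { return NULL; }

static void *get_any_partial(void)
{
	printf("get_any_partial() reached\n");   /* never printed */
	return NULL;
}

/* Mirrors the patched get_partial(): the NUMA_NO_NODE fixup is gone. */
static void *get_partial(int node)
{
	void *object = get_partial_node(node);

	if (object || node != NUMA_NO_NODE)
		return object;

	return get_any_partial();   /* only reachable for NUMA_NO_NODE */
}

/* Mirrors the patched new_slab_objects(): node is normalized up front. */
static void *new_slab_objects(int node)
{
	if (node == NUMA_NO_NODE)
		node = numa_mem_id();
	else if (!node_present_pages(node))
		node = node_to_mem_node(node);

	/* node can no longer be NUMA_NO_NODE at this point */
	return get_partial(node);
}

int main(void)
{
	new_slab_objects(NUMA_NO_NODE);   /* normalized to numa_mem_id() */
	new_slab_objects(1);              /* "offline" node, remapped to 0 */
	printf("done: get_any_partial() was never called\n");
	return 0;
}

Running it prints only the final "done" line for both calls, i.e. the
get_any_partial() fallback in get_partial() is never exercised, which is
what the question about dead code refers to.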