Date: Wed, 8 Mar 2023 13:37:10 +0000
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: "chenjun (AM)"
Cc: "linux-kernel@vger.kernel.org", "linux-mm@kvack.org", "cl@linux.com",
	"penberg@kernel.org", "rientjes@google.com", "iamjoonsoo.kim@lge.com",
	"akpm@linux-foundation.org", "vbabka@suse.cz", "xuqiang (M)"
Subject: Re: [RFC] mm/slub: Reduce memory consumption in extreme scenarios
References: <20230307082811.120774-1-chenjun102@huawei.com>
	<4ad448c565134d76bea0ac8afffe4f37@huawei.com>
In-Reply-To: <4ad448c565134d76bea0ac8afffe4f37@huawei.com>

On Wed, Mar 08, 2023 at 07:16:49AM +0000, chenjun (AM) wrote:
> Hi,
> 
> Thanks for the reply.
> 
> On 2023/3/7 22:20, Hyeonggon Yoo wrote:
> > On Tue, Mar 07, 2023 at 08:28:11AM +0000, Chen Jun wrote:
> >> If kmalloc_node() is called without __GFP_THISNODE and node[A] has
> >> no memory, SLUB will allocate a slab page that does not belong to A
> >> and put the page on kmem_cache_node[page_to_nid(page)]. The page
> >> cannot be reused on the next call, because get_partial() will
> >> return NULL. That makes kmalloc_node() consume more memory.
> > 
> > Hello,
> > 
> > elaborating a little bit:
> > 
> > "When kmalloc_node() is called without __GFP_THISNODE and the target
> > node lacks sufficient memory, SLUB allocates a folio from a different
> > node other than the requested node, instead of taking a partial slab
> > from it.
> > 
> > However, since the allocated folio does not belong to the requested
> > node, it is deactivated and added to the partial slab list of the
> > node it belongs to.
> > 
> > This behavior can result in excessive memory usage when the requested
> > node has insufficient memory, as SLUB will repeatedly allocate folios
> > from other nodes without reusing the previously allocated ones.
> > 
> > To prevent memory wastage, take a partial slab from a different node
> > when the requested node has no partial slab and __GFP_THISNODE is not
> > explicitly specified."
> 
> Thanks, this is clearer than what I described.
> 
> >> On qemu with 4 NUMA nodes, each with 1G of memory, a test module
> >> calls kmalloc_node(196, 0xd20c0, 3) 5 * 1024 * 1024 times.
> >>
> >> cat /proc/slabinfo shows:
> >> kmalloc-256 4302317 15151808 256 32 2 : tunables..
> >>
> >> The number of total objects is much larger than the number of
> >> active objects.
> >>
> >> After this patch, cat /proc/slabinfo shows:
> >> kmalloc-256 5244950 5245088 256 32 2 : tunables..
> >>
> >> Signed-off-by: Chen Jun
> >> ---
> >>  mm/slub.c | 17 ++++++++++++++---
> >>  1 file changed, 14 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/mm/slub.c b/mm/slub.c
> >> index 39327e98fce3..c0090a5de54e 100644
> >> --- a/mm/slub.c
> >> +++ b/mm/slub.c
> >> @@ -2384,7 +2384,7 @@ static void *get_partial(struct kmem_cache *s, int node, struct partial_context
> >>  	searchnode = numa_mem_id();
> >>  
> >>  	object = get_partial_node(s, get_node(s, searchnode), pc);
> >> -	if (object || node != NUMA_NO_NODE)
> >> +	if (object || (node != NUMA_NO_NODE && (pc->flags & __GFP_THISNODE)))
> >>  		return object;
> > 
> > I think the problem here is to avoid taking a partial slab from a
> > different node than the requested node even if __GFP_THISNODE is not
> > set (and then allocating a new slab instead).
> > 
> > Thus this hunk makes sense to me,
> > even if SLUB currently does not implement __GFP_THISNODE semantics.
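
To make the effect of the new check concrete, here is a minimal
userspace model of the fallback decision. This is an illustration
only; the __GFP_THISNODE bit value and the helper names below are
placeholders, not the real mm/slub.c code:

/* Minimal model of get_partial()'s cross-node fallback decision. */
#include <stdbool.h>
#include <stdio.h>

#define NUMA_NO_NODE	(-1)
#define __GFP_THISNODE	(1U << 0)	/* placeholder bit for the example */

/* Before the patch: any explicit node request skips the cross-node
 * fallback, even when the caller did not pass __GFP_THISNODE. */
static bool falls_back_to_other_nodes_old(int node, unsigned int flags)
{
	(void)flags;
	return node == NUMA_NO_NODE;
}

/* After the patch: only an explicit node request *with* __GFP_THISNODE
 * skips the fallback, so a plain kmalloc_node() preference can reuse
 * remote partial slabs instead of allocating fresh folios. */
static bool falls_back_to_other_nodes_new(int node, unsigned int flags)
{
	return !(node != NUMA_NO_NODE && (flags & __GFP_THISNODE));
}

int main(void)
{
	printf("node=3, no THISNODE: old=%d new=%d\n",
	       falls_back_to_other_nodes_old(3, 0),
	       falls_back_to_other_nodes_new(3, 0));	/* old=0 new=1 */
	printf("node=3,    THISNODE: old=%d new=%d\n",
	       falls_back_to_other_nodes_old(3, __GFP_THISNODE),
	       falls_back_to_other_nodes_new(3, __GFP_THISNODE)); /* 0 0 */
	return 0;
}

Only the "explicit node, no __GFP_THISNODE" case changes: it now falls
back to get_any_partial() instead of always allocating a new slab.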
> > 
> >>  	return get_any_partial(s, pc);
> >> @@ -3069,6 +3069,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> >>  	struct slab *slab;
> >>  	unsigned long flags;
> >>  	struct partial_context pc;
> >> +	int try_thisndoe = 0;
> >>
> >>  	stat(s, ALLOC_SLOWPATH);
> >>
> >> @@ -3181,8 +3182,12 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> >>  	}
> >>
> >>  new_objects:
> >> -
> >>  	pc.flags = gfpflags;
> >> +
> >> +	/* Try to get page from specific node even if __GFP_THISNODE is not set */
> >> +	if (node != NUMA_NO_NODE && !(gfpflags & __GFP_THISNODE) && try_thisnode)
> >> +		pc.flags |= __GFP_THISNODE;
> >> +
> >>  	pc.slab = &slab;
> >>  	pc.orig_size = orig_size;
> >>  	freelist = get_partial(s, node, &pc);
> >> @@ -3190,10 +3195,16 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> >>  		goto check_new_slab;
> >>
> >>  	slub_put_cpu_ptr(s->cpu_slab);
> >> -	slab = new_slab(s, gfpflags, node);
> >> +	slab = new_slab(s, pc.flags, node);
> >>  	c = slub_get_cpu_ptr(s->cpu_slab);
> >>
> >>  	if (unlikely(!slab)) {
> >> +		/* Try to get page from any other node */
> >> +		if (node != NUMA_NO_NODE && !(gfpflags & __GFP_THISNODE) && try_thisnode) {
> >> +			try_thisnode = 0;
> >> +			goto new_objects;
> >> +		}
> >> +
> >>  		slab_out_of_memory(s, gfpflags, node);
> >>  		return NULL;
> > 
> > But these hunks do not make sense to me.
> > Why force __GFP_THISNODE even when the caller did not specify it?
> > 
> > (Apart from the fact that try_thisnode is defined as try_thisndoe,
> > and try_thisnode is never set to a nonzero value.)
> 
> My mistake, it should be:
> 	int try_thisnode = 0;

I think it should be try_thisnode = 1?
Otherwise the new code path won't be executed at all.
Also, a bool would be more readable than an int.

> > 
> > IMHO the first hunk is enough to solve the problem.
> 
> I think we should try to allocate a page on the target node before
> taking one from another node's partial list.

You are right. Hmm, then the new behavior when (node != NUMA_NO_NODE)
&& !(gfpflags & __GFP_THISNODE) is:

	1) try to get a partial slab from the target node, with
	   __GFP_THISNODE set
	2) if 1) failed, try to allocate a new slab from the target
	   node, with __GFP_THISNODE set
	3) if 2) failed, retry 1) and 2) without the __GFP_THISNODE
	   constraint

When node == NUMA_NO_NODE or the caller passed __GFP_THISNODE, the
behavior remains unchanged. (A rough userspace model of this retry
flow is sketched at the end of this mail.)

It does not look that crazy to me, although it complicates the code a
little bit. (Vlastimil may have some opinions?)

Now that I understand your intention, I think this behavior change
also needs to be described in the commit log.

Thanks,
Hyeonggon

> If the caller does not specify __GFP_THISNODE, we add __GFP_THISNODE
> to try to get the slab only on the target node. If that fails, fall
> back to the original GFP flags.
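
P.S. For reference, a rough, self-contained userspace model of the
retry flow in steps 1)-3) above. Everything here is a hypothetical
stand-in (the *_model() helpers pretend node 3 is exhausted); it is
not the real ___slab_alloc():

#include <stdbool.h>
#include <stdio.h>

#define NUMA_NO_NODE	(-1)
#define __GFP_THISNODE	(1U << 0)	/* placeholder bit */

/* Pretend node 3 has no memory: any attempt pinned there fails. */
static void *get_partial_model(int node, unsigned int flags)
{
	return ((flags & __GFP_THISNODE) && node == 3) ? NULL : "partial-slab";
}

static void *new_slab_model(int node, unsigned int flags)
{
	return ((flags & __GFP_THISNODE) && node == 3) ? NULL : "new-slab";
}

static void *slab_alloc_model(int node, unsigned int gfpflags)
{
	bool try_thisnode = true;	/* bool, initialized to true */
	unsigned int flags;
	void *obj;

new_objects:
	flags = gfpflags;
	/* Steps 1) and 2): pin to the requested node on the first pass,
	 * even though the caller did not ask for __GFP_THISNODE. */
	if (node != NUMA_NO_NODE && !(gfpflags & __GFP_THISNODE) && try_thisnode)
		flags |= __GFP_THISNODE;

	obj = get_partial_model(node, flags);		/* step 1) */
	if (obj)
		return obj;

	obj = new_slab_model(node, flags);		/* step 2) */
	if (obj)
		return obj;

	/* Step 3): both failed under the constraint; drop it and retry. */
	if (node != NUMA_NO_NODE && !(gfpflags & __GFP_THISNODE) && try_thisnode) {
		try_thisnode = false;
		goto new_objects;
	}
	return NULL;	/* genuine OOM */
}

int main(void)
{
	/* A preference for exhausted node 3 falls back instead of failing. */
	printf("%s\n", (char *)slab_alloc_model(3, 0));		/* partial-slab */
	/* A hard __GFP_THISNODE binding still fails. */
	printf("%p\n", slab_alloc_model(3, __GFP_THISNODE));	/* (nil) */
	return 0;
}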