From: Vlastimil Babka
To: Waiman Long, Johannes Weiner, Michal Hocko, Vladimir Davydov,
 Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes,
 Joonsoo Kim, Roman Gushchin, Shakeel Butt
Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v3 2/2] mm: memcg/slab: Create a new set of kmalloc-cg- caches
Date: Wed, 5 May 2021 18:06:07 +0200
Message-ID: <4c1a0436-2d46-d23a-2eef-d558e37373bf@suse.cz>
In-Reply-To: <20210505154613.17214-3-longman@redhat.com>
References: <20210505154613.17214-1-longman@redhat.com>
 <20210505154613.17214-3-longman@redhat.com>

On 5/5/21 5:46 PM, Waiman Long wrote:
> There are currently two problems in the way the objcg pointer array
> (memcg_data) in the page structure is being allocated and freed.
>
> On its allocation, it is possible that the allocated objcg pointer
> array comes from the same slab that requires memory accounting.
> If this happens, the slab will never become empty again as there is at
> least one object left (the obj_cgroup array) in the slab.
>
> When it is freed, the objcg pointer array object may be the last one
> in its slab and hence causes kfree() to be called again. With the
> right workload, the slab cache may be set up in a way that allows the
> recursive kfree() calling loop to nest deep enough to cause a kernel
> stack overflow and panic the system.
>
> One way to solve this problem is to split the kmalloc- caches
> (KMALLOC_NORMAL) into two separate sets - a new set of kmalloc-
> (KMALLOC_NORMAL) caches for non-accounted objects only and a new set of
> kmalloc-cg- (KMALLOC_CGROUP) caches for accounted objects only. All
> the other caches can still allow a mix of accounted and non-accounted
> objects.
>
> With this change, all the objcg pointer array objects will come from
> KMALLOC_NORMAL caches which won't have their objcg pointer arrays. So
> both the recursive kfree() problem and non-freeable slab problem are
> gone. Since both the KMALLOC_NORMAL and KMALLOC_CGROUP caches no longer
> have mixed accounted and unaccounted objects, this will slightly reduce
> the number of objcg pointer arrays that need to be allocated and save
> a bit of memory.
>
> The new KMALLOC_CGROUP is added between KMALLOC_NORMAL and
> KMALLOC_RECLAIM so that the first for loop in create_kmalloc_caches()
> will include the newly added caches without change.
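The ordering trick in the last paragraph can be sanity-checked in isolation. A minimal userspace sketch, with mocked names and values rather than the kernel's real definitions, and both CONFIG_MEMCG_KMEM and CONFIG_ZONE_DMA assumed enabled:

```c
/* Standalone mock (not the kernel's actual definitions) of the enum
 * layout with both configs enabled. Placing KMALLOC_CGROUP between
 * KMALLOC_NORMAL and KMALLOC_RECLAIM means any loop that runs from
 * NORMAL through RECLAIM picks up the new type without modification. */
enum kmalloc_cache_type {
	KMALLOC_NORMAL = 0,
	KMALLOC_CGROUP,		/* new: accounted objects only */
	KMALLOC_RECLAIM,
	KMALLOC_DMA,
	NR_KMALLOC_TYPES
};

/* Mimics the bound of the first loop in create_kmalloc_caches():
 * count how many types a NORMAL..RECLAIM iteration visits. */
int types_in_first_loop(void)
{
	int n = 0;

	for (int t = KMALLOC_NORMAL; t <= KMALLOC_RECLAIM; t++)
		n++;
	return n;
}
```

Keeping KMALLOC_CGROUP at the value between NORMAL and RECLAIM is exactly what lets the existing loop bound cover it for free.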
>
> Suggested-by: Vlastimil Babka
> Signed-off-by: Waiman Long
> ---
>  include/linux/slab.h | 42 ++++++++++++++++++++++++++++++++++--------
>  mm/slab_common.c     | 23 +++++++++++++++--------
>  2 files changed, 49 insertions(+), 16 deletions(-)
>
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 0c97d788762c..f2d9ebc34f5c 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -305,9 +305,16 @@ static inline void __check_heap_object(const void *ptr, unsigned long n,
>  /*
>   * Whenever changing this, take care of that kmalloc_type() and
>   * create_kmalloc_caches() still work as intended.
> + *
> + * KMALLOC_NORMAL is for non-accounted objects only whereas KMALLOC_CGROUP
> + * is for accounted objects only. All the other kmem caches can have both
> + * accounted and non-accounted objects.
>   */
>  enum kmalloc_cache_type {
>  	KMALLOC_NORMAL = 0,
> +#ifdef CONFIG_MEMCG_KMEM
> +	KMALLOC_CGROUP,
> +#endif
>  	KMALLOC_RECLAIM,
>  #ifdef CONFIG_ZONE_DMA
>  	KMALLOC_DMA,
> @@ -315,28 +322,47 @@ enum kmalloc_cache_type {
>  	NR_KMALLOC_TYPES
>  };
>
> +#ifndef CONFIG_MEMCG_KMEM
> +#define KMALLOC_CGROUP KMALLOC_NORMAL
> +#endif
> +#ifndef CONFIG_ZONE_DMA
> +#define KMALLOC_DMA KMALLOC_NORMAL
> +#endif

You could move this to the enum definition itself? E.g.:

#ifdef CONFIG_MEMCG_KMEM
	KMALLOC_CGROUP,
#else
	KMALLOC_CGROUP = KMALLOC_NORMAL,
#endif

> +
>  #ifndef CONFIG_SLOB
>  extern struct kmem_cache *
>  kmalloc_caches[NR_KMALLOC_TYPES][KMALLOC_SHIFT_HIGH + 1];
>
> +/*
> + * Define gfp bits that should not be set for KMALLOC_NORMAL.
> + */
> +#define KMALLOC_NOT_NORMAL_BITS					\
> +	(__GFP_RECLAIMABLE |					\
> +	(IS_ENABLED(CONFIG_ZONE_DMA)   ? __GFP_DMA : 0) |	\
> +	(IS_ENABLED(CONFIG_MEMCG_KMEM) ? __GFP_ACCOUNT : 0))
> +
>  static __always_inline enum kmalloc_cache_type kmalloc_type(gfp_t flags)
>  {
> -#ifdef CONFIG_ZONE_DMA
>  	/*
>  	 * The most common case is KMALLOC_NORMAL, so test for it
>  	 * with a single branch for both flags.
Not "both flags" anymore. Something like "so test with a single branch that
there are none of the flags that would select a different type"

>  	 */
> -	if (likely((flags & (__GFP_DMA | __GFP_RECLAIMABLE)) == 0))
> +	if (likely((flags & KMALLOC_NOT_NORMAL_BITS) == 0))
>  		return KMALLOC_NORMAL;
>
>  	/*
> -	 * At least one of the flags has to be set. If both are, __GFP_DMA
> -	 * is more important.
> +	 * At least one of the flags has to be set. Their priorities in
> +	 * decreasing order are:
> +	 * 1) __GFP_DMA
> +	 * 2) __GFP_RECLAIMABLE
> +	 * 3) __GFP_ACCOUNT
>  	 */
> -	return flags & __GFP_DMA ? KMALLOC_DMA : KMALLOC_RECLAIM;
> -#else
> -	return flags & __GFP_RECLAIMABLE ? KMALLOC_RECLAIM : KMALLOC_NORMAL;
> -#endif
> +	if (IS_ENABLED(CONFIG_ZONE_DMA) && (flags & __GFP_DMA))
> +		return KMALLOC_DMA;
> +	if (!IS_ENABLED(CONFIG_MEMCG_KMEM) || (flags & __GFP_RECLAIMABLE))
> +		return KMALLOC_RECLAIM;
> +	else
> +		return KMALLOC_CGROUP;
>  }

Works for me this way, thanks.

>
>  /*
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index f8833d3e5d47..d750e3ba7af5 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -727,21 +727,25 @@ struct kmem_cache *kmalloc_slab(size_t size, gfp_t flags)
>  }
>
>  #ifdef CONFIG_ZONE_DMA
> -#define INIT_KMALLOC_INFO(__size, __short_size)			\
> -{								\
> -	.name[KMALLOC_NORMAL]  = "kmalloc-" #__short_size,	\
> -	.name[KMALLOC_RECLAIM] = "kmalloc-rcl-" #__short_size,	\
> -	.name[KMALLOC_DMA]     = "dma-kmalloc-" #__short_size,	\
> -	.size = __size,						\
> -}
> +#define KMALLOC_DMA_NAME(sz)	.name[KMALLOC_DMA] = "dma-kmalloc-" #sz,
> +#else
> +#define KMALLOC_DMA_NAME(sz)
> +#endif
> +
> +#ifdef CONFIG_MEMCG_KMEM
> +#define KMALLOC_CGROUP_NAME(sz)	.name[KMALLOC_CGROUP] = "kmalloc-cg-" #sz,
>  #else
> +#define KMALLOC_CGROUP_NAME(sz)
> +#endif
> +
>  #define INIT_KMALLOC_INFO(__size, __short_size)			\
>  {								\
>  	.name[KMALLOC_NORMAL]  = "kmalloc-" #__short_size,	\
>  	.name[KMALLOC_RECLAIM] = "kmalloc-rcl-" #__short_size,	\
> +	KMALLOC_CGROUP_NAME(__short_size)			\
> +	KMALLOC_DMA_NAME(__short_size)				\
>  	.size = __size,						\
>  }
> -#endif
>
>  /*
>   * kmalloc_info[] is to make slub_debug=,kmalloc-xx option work at boot time.
> @@ -847,6 +851,9 @@ void __init create_kmalloc_caches(slab_flags_t flags)
>  	int i;
>  	enum kmalloc_cache_type type;
>
> +	/*
> +	 * Including KMALLOC_CGROUP if CONFIG_MEMCG_KMEM defined
> +	 */
>  	for (type = KMALLOC_NORMAL; type <= KMALLOC_RECLAIM; type++) {
>  		for (i = KMALLOC_SHIFT_LOW; i <= KMALLOC_SHIFT_HIGH; i++) {
>  			if (!kmalloc_caches[type][i])
>
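As a footnote, the flag-priority selection in kmalloc_type() can be exercised outside the kernel too. A small userspace sketch with made-up flag bit values (not the real GFP encoding), assuming both CONFIG_ZONE_DMA and CONFIG_MEMCG_KMEM are enabled:

```c
/* Standalone mock of the patched kmalloc_type() decision. The flag
 * values below are invented for illustration only. */
typedef unsigned int gfp_t;

#define __GFP_DMA         0x1u
#define __GFP_RECLAIMABLE 0x2u
#define __GFP_ACCOUNT     0x4u

enum kmalloc_cache_type {
	KMALLOC_NORMAL = 0,
	KMALLOC_CGROUP,
	KMALLOC_RECLAIM,
	KMALLOC_DMA
};

/* Bits that rule out the common KMALLOC_NORMAL case. */
#define KMALLOC_NOT_NORMAL_BITS \
	(__GFP_RECLAIMABLE | __GFP_DMA | __GFP_ACCOUNT)

enum kmalloc_cache_type mock_kmalloc_type(gfp_t flags)
{
	/* Single branch for the common case: no special bit set. */
	if ((flags & KMALLOC_NOT_NORMAL_BITS) == 0)
		return KMALLOC_NORMAL;

	/* Priority, decreasing: __GFP_DMA, __GFP_RECLAIMABLE,
	 * then __GFP_ACCOUNT. */
	if (flags & __GFP_DMA)
		return KMALLOC_DMA;
	if (flags & __GFP_RECLAIMABLE)
		return KMALLOC_RECLAIM;
	return KMALLOC_CGROUP;
}
```

Note that __GFP_ACCOUNT only steers the allocation to a kmalloc-cg- cache when no higher-priority bit is set, matching the priority comment in the patch.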