From: Waiman Long <llong@redhat.com>
Subject: Re: [PATCH v3 2/2] mm: memcg/slab: Create a new set of kmalloc-cg- caches
To: Roman Gushchin
Cc: Johannes Weiner, Michal Hocko, Vladimir Davydov, Andrew Morton,
 Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Vlastimil Babka, Shakeel Butt, linux-kernel@vger.kernel.org,
 cgroups@vger.kernel.org, linux-mm@kvack.org
References: <20210505154613.17214-1-longman@redhat.com>
 <20210505154613.17214-3-longman@redhat.com>
Date: Wed, 5 May 2021 14:11:52 -0400
On 5/5/21 1:30 PM, Roman Gushchin wrote:
> On Wed, May 05, 2021 at 11:46:13AM -0400, Waiman Long wrote:
>> There are currently two problems in the way the objcg pointer array
>> (memcg_data) in the page structure is being allocated and freed.
>>
>> On its allocation, it is possible that the allocated objcg pointer
>> array comes from the same slab that requires memory accounting. If this
>> happens, the slab will never become empty again as there is at least
>> one object left (the obj_cgroup array) in the slab.
>>
>> When it is freed, the objcg pointer array object may be the last one
>> in its slab and hence causes kfree() to be called again. With the
>> right workload, the slab cache may be set up in a way that allows the
>> recursive kfree() calling loop to nest deep enough to cause a kernel
>> stack overflow and panic the system.
>>
>> One way to solve this problem is to split the kmalloc- caches
>> (KMALLOC_NORMAL) into two separate sets - a new set of kmalloc-
>> (KMALLOC_NORMAL) caches for non-accounted objects only and a new set of
>> kmalloc-cg- (KMALLOC_CGROUP) caches for accounted objects only. All
>> the other caches can still allow a mix of accounted and non-accounted
>> objects.
> I agree that it's likely the best approach here. Thanks for discovering
> and fixing the problem!
>
>> With this change, all the objcg pointer array objects will come from
>> KMALLOC_NORMAL caches which won't have their objcg pointer arrays. So
>> both the recursive kfree() problem and non-freeable slab problem are
>> gone. Since both the KMALLOC_NORMAL and KMALLOC_CGROUP caches no longer
>> have mixed accounted and unaccounted objects, this will slightly reduce
>> the number of objcg pointer arrays that need to be allocated and save
>> a bit of memory.
> Unfortunately the positive effect of this change will be likely
> reversed by a lower utilization due to a larger number of caches.

That is also true; I will mention that.

> Btw, I wonder if we also need a change in the slab caches merging procedure?
> KMALLOC_NORMAL caches should not be merged with caches which can potentially
> include accounted objects.

Thanks for catching this omission. I will take a look and modify the
merging procedure in a new patch. Accounting is usually specified at
kmem_cache_create() time. Though I did find one instance of the ACCOUNT
flag being set at kmem_cache_alloc() time; I will ignore that case and
merge accounted but unreclaimable caches into KMALLOC_CGROUP.

>
>> The new KMALLOC_CGROUP is added between KMALLOC_NORMAL and
>> KMALLOC_RECLAIM so that the first for loop in create_kmalloc_caches()
>> will include the newly added caches without change.
>>
>> Suggested-by: Vlastimil Babka
>> Signed-off-by: Waiman Long
>> ---
>>  include/linux/slab.h | 42 ++++++++++++++++++++++++++++++++++--------
>>  mm/slab_common.c     | 23 +++++++++++++++--------
>>  2 files changed, 49 insertions(+), 16 deletions(-)
>>
>> diff --git a/include/linux/slab.h b/include/linux/slab.h
>> index 0c97d788762c..f2d9ebc34f5c 100644
>> --- a/include/linux/slab.h
>> +++ b/include/linux/slab.h
>> @@ -305,9 +305,16 @@ static inline void __check_heap_object(const void *ptr, unsigned long n,
>>  /*
>>   * Whenever changing this, take care of that kmalloc_type() and
>>   * create_kmalloc_caches() still work as intended.
>> + *
>> + * KMALLOC_NORMAL is for non-accounted objects only whereas KMALLOC_CGROUP
>> + * is for accounted objects only. All the other kmem caches can have both
>> + * accounted and non-accounted objects.
>>   */
>>  enum kmalloc_cache_type {
>>  	KMALLOC_NORMAL = 0,
>> +#ifdef CONFIG_MEMCG_KMEM
>> +	KMALLOC_CGROUP,
>> +#endif
>>  	KMALLOC_RECLAIM,
>>  #ifdef CONFIG_ZONE_DMA
>>  	KMALLOC_DMA,
>> @@ -315,28 +322,47 @@ enum kmalloc_cache_type {
>>  	NR_KMALLOC_TYPES
>>  };
>>
>> +#ifndef CONFIG_MEMCG_KMEM
>> +#define KMALLOC_CGROUP KMALLOC_NORMAL
>> +#endif
>> +#ifndef CONFIG_ZONE_DMA
>> +#define KMALLOC_DMA KMALLOC_NORMAL
>> +#endif
>> +
>>  #ifndef CONFIG_SLOB
>>  extern struct kmem_cache *
>>  kmalloc_caches[NR_KMALLOC_TYPES][KMALLOC_SHIFT_HIGH + 1];
>>
>> +/*
>> + * Define gfp bits that should not be set for KMALLOC_NORMAL.
>> + */
>> +#define KMALLOC_NOT_NORMAL_BITS					\
>> +	(__GFP_RECLAIMABLE |					\
>> +	(IS_ENABLED(CONFIG_ZONE_DMA)   ? __GFP_DMA : 0) |	\
>> +	(IS_ENABLED(CONFIG_MEMCG_KMEM) ? __GFP_ACCOUNT : 0))
>> +
>>  static __always_inline enum kmalloc_cache_type kmalloc_type(gfp_t flags)
>>  {
>> -#ifdef CONFIG_ZONE_DMA
>>  	/*
>>  	 * The most common case is KMALLOC_NORMAL, so test for it
>>  	 * with a single branch for both flags.
>>  	 */
>> -	if (likely((flags & (__GFP_DMA | __GFP_RECLAIMABLE)) == 0))
>> +	if (likely((flags & KMALLOC_NOT_NORMAL_BITS) == 0))
>>  		return KMALLOC_NORMAL;
> Likely KMALLOC_CGROUP is also very popular, so maybe we want to change the
> optimization here a bit.

I doubt this optimization is really noticeable, and whether
KMALLOC_CGROUP is popular will depend on the workload. I am not
planning to spend additional time micro-optimizing this part of the
code.

Cheers,
Longman