From: Waiman Long
Subject: Re: [PATCH v3 2/2] mm: memcg/slab: Create a new set of kmalloc-cg- caches
To: Roman Gushchin, Waiman Long
Cc: Vlastimil Babka, Johannes Weiner, Michal Hocko, Vladimir Davydov, Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Shakeel Butt, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-mm@kvack.org
References: <20210505154613.17214-1-longman@redhat.com> <20210505154613.17214-3-longman@redhat.com> <235f45b4-2d99-f32d-ac2b-18b59fea5a25@suse.cz> <4e4b6903-2444-f4ed-f589-26d5beae3120@redhat.com>
Message-ID: <1b235531-e165-954a-74b1-d3477c2a4b87@redhat.com>
Date: Wed, 5 May 2021 14:56:55 -0400
On 5/5/21 2:38 PM, Roman Gushchin wrote:
> On Wed, May 05, 2021 at 02:31:28PM -0400, Waiman Long wrote:
>> On 5/5/21 2:02 PM, Vlastimil Babka wrote:
>>> On 5/5/21 7:30 PM, Roman Gushchin wrote:
>>>> On Wed, May 05, 2021 at 11:46:13AM -0400, Waiman Long wrote:
>>>>> With this change, all the objcg pointer array objects will come from
>>>>> KMALLOC_NORMAL caches which won't have their objcg pointer arrays. So
>>>>> both the recursive kfree() problem and non-freeable slab problem are
>>>>> gone. Since both the KMALLOC_NORMAL and KMALLOC_CGROUP caches no longer
>>>>> have mixed accounted and unaccounted objects, this will slightly reduce
>>>>> the number of objcg pointer arrays that need to be allocated and save
>>>>> a bit of memory.
>>>> Unfortunately the positive effect of this change will likely be
>>>> reversed by a lower utilization due to a larger number of caches.
>>>>
>>>> Btw, I wonder if we also need a change in the slab cache merging procedure?
>>>> KMALLOC_NORMAL caches should not be merged with caches which can potentially
>>>> include accounted objects.
>>> Good point. But it looks like kmalloc* caches are exempt from all merging in
>>> create_boot_cache() via
>>>
>>> s->refcount = -1;	/* Exempt from merging for now */
>>>
>>> It wouldn't hurt though to create the kmalloc-cg-* caches with the SLAB_ACCOUNT
>>> flag to prevent accidental merging in case the above is ever removed. It would
>>> also better reflect reality, and ensure that the array is allocated immediately
>>> with the page, AFAICS.
>>>
>> I am not sure if this is really true.
>>
>> struct kmem_cache *__init create_kmalloc_cache(const char *name,
>> 		unsigned int size, slab_flags_t flags,
>> 		unsigned int useroffset, unsigned int usersize)
>> {
>> 	struct kmem_cache *s = kmem_cache_zalloc(kmem_cache, GFP_NOWAIT);
>>
>> 	if (!s)
>> 		panic("Out of memory when creating slab %s\n", name);
>>
>> 	create_boot_cache(s, name, size, flags, useroffset, usersize);
>> 	kasan_cache_create_kmalloc(s);
>> 	list_add(&s->list, &slab_caches);
>> 	s->refcount = 1;
>> 	return s;
>> }
>>
>> Even though refcount is set to -1 initially, it is set back to 1 afterward.
>> So merging can still happen AFAICS.
> Right, thanks, I already noticed it. Then yeah, we should make sure we're not
> merging KMALLOC_NORMAL caches with any others.
>
That should be easy. We just set the refcount to -1 for the
KMALLOC_NORMAL caches right after their creation then.

Cheers,
Longman