From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 10 Aug 2022 14:45:53 -0400
Subject: Re: [PATCH v2] mm/slab_common: Deleting kobject in kmem_cache_destroy()
 without holding slab_mutex/cpu_hotplug_lock
From: Waiman Long
To: Roman Gushchin
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton, Vlastimil Babka, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20220810164946.148634-1-longman@redhat.com>
 <9b95dc38-9a3f-b9f1-80cc-c834621bd81c@redhat.com>
In-Reply-To: <9b95dc38-9a3f-b9f1-80cc-c834621bd81c@redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed

On 8/10/22 14:27, Waiman Long wrote:
> On 8/10/22 14:10, Roman Gushchin wrote:
>> On Wed, Aug 10, 2022 at 12:49:46PM -0400, Waiman Long wrote:
>>> A circular locking problem is reported by lockdep due to the following
>>> circular locking dependency.
>>>
>>>    +--> cpu_hotplug_lock --> slab_mutex --> kn->active --+
>>>    |                                                     |
>>>    +-----------------------------------------------------+
>>>
>>> The forward cpu_hotplug_lock ==> slab_mutex ==> kn->active dependency
>>> happens in
>>>
>>>    kmem_cache_destroy():    cpus_read_lock(); mutex_lock(&slab_mutex);
>>>    ==> sysfs_slab_unlink()
>>>        ==> kobject_del()
>>>            ==> kernfs_remove()
>>>                ==> __kernfs_remove()
>>>                    ==> kernfs_drain(): rwsem_acquire(&kn->dep_map, ...);
>>>
>>> The backward kn->active ==> cpu_hotplug_lock dependency happens in
>>>
>>>    kernfs_fop_write_iter(): kernfs_get_active();
>>>    ==> slab_attr_store()
>>>        ==> cpu_partial_store()
>>>            ==> flush_all(): cpus_read_lock()
>>>
>>> One way to break this circular locking chain is to avoid holding
>>> cpu_hotplug_lock and slab_mutex while deleting the kobject in
>>> sysfs_slab_unlink(), which should be equivalent to doing a write_lock
>>> and write_unlock pair on the kn->active virtual lock.
>>>
>>> Since the kobject structures are not protected by slab_mutex or the
>>> cpu_hotplug_lock, we can certainly release those locks before doing
>>> the delete operation.
>>>
>>> Move sysfs_slab_unlink() and sysfs_slab_release() to the newly
>>> created kmem_cache_release() and call it outside the slab_mutex &
>>> cpu_hotplug_lock critical sections.
>>>
>>> Signed-off-by: Waiman Long
>>> ---
>>>   [v2] Break kmem_cache_release() helper into 2 separate ones.
>>>
>>>   mm/slab_common.c | 54 +++++++++++++++++++++++++++++++++---------------
>>>   1 file changed, 37 insertions(+), 17 deletions(-)
>>>
>>> diff --git a/mm/slab_common.c b/mm/slab_common.c
>>> index 17996649cfe3..7742d0446d8b 100644
>>> --- a/mm/slab_common.c
>>> +++ b/mm/slab_common.c
>>> @@ -392,6 +392,36 @@ kmem_cache_create(const char *name, unsigned int size, unsigned int align,
>>>   }
>>>   EXPORT_SYMBOL(kmem_cache_create);
>>>
>>> +#ifdef SLAB_SUPPORTS_SYSFS
>>> +static void kmem_cache_workfn_release(struct kmem_cache *s)
>>> +{
>>> +    sysfs_slab_release(s);
>>> +}
>>> +#else
>>> +static void kmem_cache_workfn_release(struct kmem_cache *s)
>>> +{
>>> +    slab_kmem_cache_release(s);
>>> +}
>>> +#endif
>>> +
>>> +/*
>>> + * For a given kmem_cache, kmem_cache_destroy() should only be called
>>> + * once or there will be a use-after-free problem. The actual deletion
>>> + * and release of the kobject does not need slab_mutex or cpu_hotplug_lock
>>> + * protection. So they are now done without holding those locks.
>>> + */
>>> +static void kmem_cache_release(struct kmem_cache *s)
>>> +{
>>> +#ifdef SLAB_SUPPORTS_SYSFS
>>> +    sysfs_slab_unlink(s);
>>> +#endif
>>> +
>>> +    if (s->flags & SLAB_TYPESAFE_BY_RCU)
>>> +        schedule_work(&slab_caches_to_rcu_destroy_work);
>>> +    else
>>> +        kmem_cache_workfn_release(s);
>>> +}
>>> +
>>>   static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work)
>>>   {
>>>       LIST_HEAD(to_destroy);
>>> @@ -418,11 +448,7 @@ static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work)
>>>       list_for_each_entry_safe(s, s2, &to_destroy, list) {
>>>           debugfs_slab_release(s);
>>>           kfence_shutdown_cache(s);
>>> -#ifdef SLAB_SUPPORTS_SYSFS
>>> -        sysfs_slab_release(s);
>>> -#else
>>> -        slab_kmem_cache_release(s);
>>> -#endif
>>> +        kmem_cache_workfn_release(s);
>>>       }
>>>   }
>>>
>>> @@ -437,20 +463,10 @@ static int shutdown_cache(struct kmem_cache *s)
>>>       list_del(&s->list);
>>>
>>>       if (s->flags & SLAB_TYPESAFE_BY_RCU) {
>>> -#ifdef SLAB_SUPPORTS_SYSFS
>>> -        sysfs_slab_unlink(s);
>>> -#endif
>>>           list_add_tail(&s->list, &slab_caches_to_rcu_destroy);
>>> -        schedule_work(&slab_caches_to_rcu_destroy_work);
>>
>> Hi Waiman!
>>
>> This version is much more readable, thank you!
>>
>> But can we, please, leave this schedule_work(&slab_caches_to_rcu_destroy_work)
>> call here? I don't see a good reason to move it, or am I missing
>> something? It's nice to have the list_add_tail() and schedule_work()
>> calls nearby, so it's obvious we can't miss the latter.
>
> The reason that I need to move schedule_work() out as well is to make
> sure that sysfs_slab_unlink() is called before sysfs_slab_release(). I
> can't guarantee that if I do schedule_work() first. On the other hand,
> moving sysfs_slab_unlink() into kmem_cache_workfn_release() introduces
> an unknown delay before the sysfs file is removed. I can add a comment
> to make this clearer.

OK, I just realized that the current patch doesn't have the ordering
guarantee either if another kmem_cache_destroy() is happening in
parallel. I will have to push sysfs_slab_unlink() into
kmem_cache_workfn_release() and tolerate some delay in the
disappearance of the sysfs files. Now I can move schedule_work() back
to after list_add_tail().

Cheers,
Longman
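
[For reference, below is a minimal sketch of the follow-up shape described
in the last paragraph above. It is hypothetical, not the posted v3 patch:
it reuses the v2 helper names, folds sysfs_slab_unlink() into
kmem_cache_workfn_release() so unlink always precedes release on both the
direct and workqueue paths, and puts schedule_work() back beside
list_add_tail() as requested in the review.]

/*
 * Sketch only; illustrates the plan described above, not a posted patch.
 * Helper names are taken from v2, so the real code may differ in detail.
 */
#ifdef SLAB_SUPPORTS_SYSFS
static void kmem_cache_workfn_release(struct kmem_cache *s)
{
	/* Unlink first so the sysfs entry is gone before the kobject is freed. */
	sysfs_slab_unlink(s);
	sysfs_slab_release(s);
}
#else
static void kmem_cache_workfn_release(struct kmem_cache *s)
{
	slab_kmem_cache_release(s);
}
#endif

static int shutdown_cache(struct kmem_cache *s)
{
	/* ... shutdown checks and list_del(&s->list) as in v2 ... */
	if (s->flags & SLAB_TYPESAFE_BY_RCU) {
		list_add_tail(&s->list, &slab_caches_to_rcu_destroy);
		/* schedule_work() stays next to list_add_tail(), per the review. */
		schedule_work(&slab_caches_to_rcu_destroy_work);
	}
	/*
	 * Non-RCU caches go through kmem_cache_workfn_release() directly,
	 * called outside slab_mutex and cpu_hotplug_lock. Unlink and
	 * release now happen in order on both paths, at the cost of a
	 * delayed sysfs removal for RCU-destroyed caches.
	 */
	return 0;
}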