Subject: Re: [PATCH v4 29/35] mm: slub: Move flush_cpu_slab() invocations __free_slab() invocations out of IRQ context
From: Qian Cai
To: Vlastimil Babka, Andrew Morton, Christoph Lameter, David Rientjes, Pekka Enberg, Joonsoo Kim
Cc: Mike Galbraith, Sebastian Andrzej Siewior, Thomas Gleixner, Mel Gorman, Jesper Dangaard Brouer, Jann Horn
Date: Tue, 10 Aug 2021 21:42:56 -0400
Message-ID: <13a3f616-19b5-ce25-87ad-bb241d0b0c18@quicinc.com>
In-Reply-To: <50fe26ba-450b-af57-506d-438f67cfbce3@suse.cz>
References: <20210805152000.12817-1-vbabka@suse.cz> <20210805152000.12817-30-vbabka@suse.cz> <0b36128c-3e12-77df-85fe-a153a714569b@quicinc.com> <50fe26ba-450b-af57-506d-438f67cfbce3@suse.cz>
On 8/10/2021 10:33 AM, Vlastimil Babka wrote:
> On 8/9/21 3:41 PM, Qian Cai wrote:
>
>>>  static void flush_all(struct kmem_cache *s)
>>>  {
>>> -	on_each_cpu_cond(has_cpu_slab, flush_cpu_slab, s, 1);
>>> +	struct slub_flush_work *sfw;
>>> +	unsigned int cpu;
>>> +
>>> +	mutex_lock(&flush_lock);
>>
>> Vlastimil, taking the lock here could trigger a warning during memory
>> offline/online due to the locking order:
>>
>> slab_mutex -> flush_lock
>
> Here's the full fixup, also incorporating Mike's fix. Thanks.
>
> ----8<----
> From c2df67d5116d4615c322e262556e34117e268104 Mon Sep 17 00:00:00 2001
> From: Vlastimil Babka
> Date: Tue, 10 Aug 2021 10:58:07 +0200
> Subject: [PATCH] mm, slub: fix memory and cpu hotplug related lock ordering
>  issues
>
> Qian Cai reported [1] a lockdep splat on memory offline.
>
> [   91.374541] WARNING: possible circular locking dependency detected
> [   91.381411] 5.14.0-rc5-next-20210809+ #84 Not tainted
> [   91.387149] ------------------------------------------------------
> [   91.394016] lsbug/1523 is trying to acquire lock:
> [   91.399406] ffff800018e76530 (flush_lock){+.+.}-{3:3}, at: flush_all+0x50/0x1c8
> [   91.407425] but task is already holding lock:
> [   91.414638] ffff800018e48468 (slab_mutex){+.+.}-{3:3}, at: slab_memory_callback+0x44/0x280
> [   91.423603] which lock already depends on the new lock.
>
> To fix it, we need to change the order in flush_all() so that
> cpus_read_lock() is taken first and mutex_lock(&flush_lock) second.
>
> Also, when called from slab_mem_going_offline_callback() we are already
> under cpus_read_lock() and cannot take it again, so create a
> flush_all_cpus_locked() variant and decouple flushing from the actual
> shrinking for this call path.
>
> Additionally, Mike Galbraith reported [2] a wrong order of cpus_read_lock()
> and slab_mutex in the kmem_cache_destroy() path and proposed a fix to
> reverse it.
>
> This patch is a fixup for the mmotm patch
> mm-slub-move-flush_cpu_slab-invocations-__free_slab-invocations-out-of-irq-context.patch
>
> [1] https://lore.kernel.org/lkml/0b36128c-3e12-77df-85fe-a153a714569b@quicinc.com/
> [2] https://lore.kernel.org/lkml/2eb3cf340716c40f03a0a342ab40219b3d1de195.camel@gmx.de/
>
> Reported-by: Qian Cai
> Reported-by: Mike Galbraith
> Signed-off-by: Vlastimil Babka

This is running fine for me. There is a separate hugetlb crash while
fuzzing; I will report it where it belongs.