Date: Tue, 14 Jun 2022 18:15:05 +0100
From: Catalin Marinas
To: Waiman Long
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 3/3] mm/kmemleak: Prevent soft lockup in first object iteration loop of kmemleak_scan()
References: <20220612183301.981616-1-longman@redhat.com> <20220612183301.981616-4-longman@redhat.com>
In-Reply-To: <20220612183301.981616-4-longman@redhat.com>
On Sun, Jun 12, 2022 at 02:33:01PM -0400, Waiman Long wrote:
> @@ -1437,10 +1440,25 @@ static void kmemleak_scan(void)
>  #endif
>  		/* reset the reference count (whiten the object) */
>  		object->count = 0;
> -		if (color_gray(object) && get_object(object))
> +		if (color_gray(object) && get_object(object)) {
>  			list_add_tail(&object->gray_list, &gray_list);
> +			gray_list_cnt++;
> +			object_pinned = true;
> +		}
>  
>  		raw_spin_unlock_irq(&object->lock);
> +
> +		/*
> +		 * With object pinned by a positive reference count, it
> +		 * won't go away and we can safely release the RCU read
> +		 * lock and do a cond_resched() to avoid soft lockup every
> +		 * 64k objects.
> +		 */
> +		if (object_pinned && !(gray_list_cnt & 0xffff)) {
> +			rcu_read_unlock();
> +			cond_resched();
> +			rcu_read_lock();
> +		}

I'm not sure this gains much. There should be very few gray objects
initially (those passed to kmemleak_not_leak() for example). The
majority should be white objects.

If we drop the fine-grained object->lock, we could instead take
kmemleak_lock outside the loop with a cond_resched_lock(&kmemleak_lock)
within the loop. I think we can get away with not having an
rcu_read_lock() at all for list traversal with the big lock outside the
loop. The reason I added it in the first kmemleak incarnation was to
defer kmemleak_object freeing as it was causing a re-entrant call into
the slab allocator. I later went for fine-grained locking and RCU list
traversal but I may have overdone it ;).
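Roughly what I have in mind, as an untested sketch only: it glosses
over the interrupt disabling the current code does around object->lock,
and cond_resched_lock() wants a plain spinlock_t (and cannot run with
interrupts off), so the kmemleak_lock type would need looking at.
object_list, color_gray() and get_object() are the existing kmemleak
internals from the code above:

	/*
	 * Untested sketch: whiten everything under the big
	 * kmemleak_lock instead of RCU traversal plus per-object
	 * locking. Assumes kmemleak_lock can be a plain spinlock_t.
	 */
	spin_lock(&kmemleak_lock);
	list_for_each_entry(object, &object_list, object_list) {
		/* reset the reference count (whiten the object) */
		object->count = 0;
		if (color_gray(object) && get_object(object))
			list_add_tail(&object->gray_list, &gray_list);

		/*
		 * Drops the lock and reschedules if needed, reacquiring
		 * it before returning. If an object can be freed while
		 * the lock is briefly dropped here, the iteration would
		 * need rethinking; that's the part the RCU traversal
		 * currently covers.
		 */
		cond_resched_lock(&kmemleak_lock);
	}
	spin_unlock(&kmemleak_lock);

-- 
Catalin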