Date: Tue, 14 Jun 2022 18:27:56 +0100
From: Catalin Marinas
To: Waiman Long
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 3/3] mm/kmemleak: Prevent soft lockup in first object iteration loop of kmemleak_scan()
References: <20220612183301.981616-1-longman@redhat.com>
 <20220612183301.981616-4-longman@redhat.com>

On Tue, Jun 14, 2022 at 06:15:14PM +0100, Catalin Marinas wrote:
> On Sun, Jun 12, 2022 at 02:33:01PM -0400, Waiman Long
> wrote:
> > @@ -1437,10 +1440,25 @@ static void kmemleak_scan(void)
> >  #endif
> >  		/* reset the reference count (whiten the object) */
> >  		object->count = 0;
> > -		if (color_gray(object) && get_object(object))
> > +		if (color_gray(object) && get_object(object)) {
> >  			list_add_tail(&object->gray_list, &gray_list);
> > +			gray_list_cnt++;
> > +			object_pinned = true;
> > +		}
> >
> >  		raw_spin_unlock_irq(&object->lock);
> > +
> > +		/*
> > +		 * With object pinned by a positive reference count, it
> > +		 * won't go away and we can safely release the RCU read
> > +		 * lock and do a cond_resched() to avoid soft lockup every
> > +		 * 64k objects.
> > +		 */
> > +		if (object_pinned && !(gray_list_cnt & 0xffff)) {
> > +			rcu_read_unlock();
> > +			cond_resched();
> > +			rcu_read_lock();
> > +		}
>
> I'm not sure this gains much. There should be very few gray objects
> initially (those passed to kmemleak_not_leak() for example). The
> majority should be white objects.
>
> If we drop the fine-grained object->lock, we could instead take
> kmemleak_lock outside the loop with a cond_resched_lock(&kmemleak_lock)
> within the loop. I think we can get away with not having an
> rcu_read_lock() at all for list traversal with the big lock outside the
> loop.

Actually this doesn't work if the current object in the iteration is
freed. Does list_for_each_rcu_safe() help?

-- 
Catalin
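For context, a rough sketch of the alternative suggested above (hypothetical, not compiled against the kernel tree; it assumes the scan loop could hold the global kmemleak_lock across the traversal, with the usual IRQ-disabling variants elided for brevity):

```c
/*
 * Sketch only: traverse object_list under the global kmemleak_lock
 * instead of RCU + dropping into cond_resched() with a pinned object,
 * yielding periodically via cond_resched_lock().
 */
raw_spin_lock_irq(&kmemleak_lock);
list_for_each_entry(object, &object_list, object_list) {
	raw_spin_lock(&object->lock);
	/* ... reset object->count, queue gray objects as before ... */
	raw_spin_unlock(&object->lock);

	/*
	 * Hypothetical placement: drop and reacquire the big lock to
	 * avoid soft lockups. This is where the concern below bites:
	 * while the lock is dropped, 'object' itself may be freed,
	 * leaving the iterator pointing at freed memory.
	 */
	cond_resched_lock(&kmemleak_lock);
}
raw_spin_unlock_irq(&kmemleak_lock);
```

The sketch makes the follow-up objection concrete: a plain list_for_each_entry() iterator caches the current node, so any scheme that drops the lock mid-walk needs either a reference count on the current object (as in the patch) or a safe/RCU iteration variant.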