From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 11 Aug 2022 16:00:20 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: linux-mm@kvack.org, Abhishek Shah, Gabriel Ryan
Subject: Re: [PATCH] mm: ksm: fix data-race in __ksm_enter / run_store
Message-Id: <20220811160020.1e6823094217e8d6d3aaebdf@linux-foundation.org>
In-Reply-To: <20220802151550.159076-1-wangkefeng.wang@huawei.com>
References: <20220802151550.159076-1-wangkefeng.wang@huawei.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 2 Aug 2022 23:15:50 +0800 Kefeng Wang <wangkefeng.wang@huawei.com> wrote:

> Abhishek reported a data-race issue,

OK, but it would be better to perform an analysis of the alleged bug,
describe the potential effects if the race is hit, etc.

> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -2507,6 +2507,7 @@ int __ksm_enter(struct mm_struct *mm)
>  {
>  	struct mm_slot *mm_slot;
>  	int needs_wakeup;
> +	bool ksm_run_unmerge;
>  
>  	mm_slot = alloc_mm_slot();
>  	if (!mm_slot)
> @@ -2515,6 +2516,10 @@ int __ksm_enter(struct mm_struct *mm)
>  	/* Check ksm_run too?  Would need tighter locking */
>  	needs_wakeup = list_empty(&ksm_mm_head.mm_list);
>  
> +	mutex_lock(&ksm_thread_mutex);
> +	ksm_run_unmerge = !!(ksm_run & KSM_RUN_UNMERGE);
> +	mutex_unlock(&ksm_thread_mutex);
> +
>  	spin_lock(&ksm_mmlist_lock);
>  	insert_to_mm_slots_hash(mm, mm_slot);
>  	/*
> @@ -2527,7 +2532,7 @@ int __ksm_enter(struct mm_struct *mm)
>  	 * scanning cursor, otherwise KSM pages in newly forked mms will be
>  	 * missed: then we might as well insert at the end of the list.
>  	 */
> -	if (ksm_run & KSM_RUN_UNMERGE)
> +	if (ksm_run_unmerge)

run_store() can alter ksm_run right here, so __ksm_enter() is still
acting on the old setting?

>  		list_add_tail(&mm_slot->mm_list, &ksm_mm_head.mm_list);
>  	else
>  		list_add_tail(&mm_slot->mm_list, &ksm_scan.mm_slot->mm_list);