From: Abhishek Shah
Reply-To: abhishek.shah@columbia.edu
Date: Fri, 19 Aug 2022 08:00:00 -0400
Subject: Re: [PATCH] mm: ksm: fix data-race in __ksm_enter / run_store
To: Andrew Morton
Cc: Kefeng Wang, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Gabriel Ryan
In-Reply-To: <20220811160020.1e6823094217e8d6d3aaebdf@linux-foundation.org>
References: <20220802151550.159076-1-wangkefeng.wang@huawei.com> <20220811160020.1e6823094217e8d6d3aaebdf@linux-foundation.org>
Hi all,

I looked into the vulnerability some more and came up with the following analysis. Please let me know what you think.

Consider the following interleaving:

1. In __ksm_enter, thread 1 inserts the new mm into the list of mms at a position determined by ksm_run being set to KSM_RUN_MERGE (see here).
2. In run_store, thread 2 changes ksm_run to KSM_RUN_UNMERGE (see here) and executes unmerge_and_remove_all_rmap_items, where it can free the newly added mm via mmdrop (see here).
3. In __ksm_enter, thread 1 continues execution and updates the fields of the new mm (see here), although it was already freed, resulting in a use-after-free vulnerability.

Thanks!
On Thu, Aug 11, 2022 at 7:00 PM Andrew Morton <akpm@linux-foundation.org> wrote:

> On Tue, 2 Aug 2022 23:15:50 +0800 Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
>
> > Abhishek reported a data-race issue,
>
> OK, but it would be better to perform an analysis of the alleged bug,
> describe the potential effects if the race is hit, etc.
>
> > --- a/mm/ksm.c
> > +++ b/mm/ksm.c
> > @@ -2507,6 +2507,7 @@ int __ksm_enter(struct mm_struct *mm)
> >  {
> >  	struct mm_slot *mm_slot;
> >  	int needs_wakeup;
> > +	bool ksm_run_unmerge;
> >
> >  	mm_slot = alloc_mm_slot();
> >  	if (!mm_slot)
> > @@ -2515,6 +2516,10 @@ int __ksm_enter(struct mm_struct *mm)
> >  	/* Check ksm_run too?  Would need tighter locking */
> >  	needs_wakeup = list_empty(&ksm_mm_head.mm_list);
> >
> > +	mutex_lock(&ksm_thread_mutex);
> > +	ksm_run_unmerge = !!(ksm_run & KSM_RUN_UNMERGE);
> > +	mutex_unlock(&ksm_thread_mutex);
> >
> >  	spin_lock(&ksm_mmlist_lock);
> >  	insert_to_mm_slots_hash(mm, mm_slot);
> >  	/*
> > @@ -2527,7 +2532,7 @@ int __ksm_enter(struct mm_struct *mm)
> >  	 * scanning cursor, otherwise KSM pages in newly forked mms will be
> >  	 * missed: then we might as well insert at the end of the list.
> >  	 */
> > -	if (ksm_run & KSM_RUN_UNMERGE)
> > +	if (ksm_run_unmerge)
>
> run_store() can alter ksm_run right here, so __ksm_enter() is still
> acting on the old setting?
>
> >  		list_add_tail(&mm_slot->mm_list, &ksm_mm_head.mm_list);
> >  	else
> >  		list_add_tail(&mm_slot->mm_list, &ksm_scan.mm_slot->mm_list);