Subject: Re: [PATCH 2/2] mm: prevent gup_fast from racing with COW during fork
From: John Hubbard
To: Jason Gunthorpe
CC: Andrea Arcangeli, Andrew Morton, Aneesh Kumar K.V, Christoph Hellwig,
 Hugh Dickins, Jan Kara, Jann Horn, Kirill Shutemov, Kirill Tkhai,
 Leon Romanovsky, Linux-MM, Michal Hocko, Oleg Nesterov, Peter Xu,
 Linus Torvalds
References: <2-v1-281e425c752f+2df-gup_fork_jgg@nvidia.com>
Message-ID: <32a38d92-6ecc-243b-77be-8f1ea0792334@nvidia.com>
Date: Fri, 23 Oct 2020 22:31:51 -0700
On 10/23/20 10:19 PM, John Hubbard wrote:
> On 10/23/20 5:19 PM, Jason Gunthorpe wrote:
...
>> diff --git a/mm/memory.c b/mm/memory.c
>> index c48f8df6e50268..e2f959cce8563d 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -1171,6 +1171,17 @@ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
>>          mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
>>                                  0, src_vma, src_mm, addr, end);
>>          mmu_notifier_invalidate_range_start(&range);
>> +        /*
>> +         * This is like a seqcount where the mmap_lock provides
>> +         * serialization for the write side. However, unlike seqcount
>> +         * the read side falls back to obtaining the mmap_lock rather
>> +         * than spinning. For this reason none of the preempt related
>> +         * machinery in seqcount is desired here.
ooops... actually, that's a counter-argument to using the raw seqlock API. So
maybe that's a dead end, after all. If so, it would still be good to wrap the
"acquire" and "release" parts of this into functions, IMHO. So we'd end up
with, effectively, a lock API anyway.

>> +         */
>> +        mmap_assert_write_locked(src_mm);
>> +        WRITE_ONCE(src_mm->write_protect_seq,
>> +                   src_mm->write_protect_seq + 1);
>> +        smp_wmb();
>
> Even if you don't take the "use the raw seqlock API" advice, it seems like these
> operations could be wrapped up in a function call, yes?
>

thanks,
-- 
John Hubbard
NVIDIA