From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 20 May 2020 20:22:42 -0700
From: Andrew Morton
To: Michel Lespinasse
Cc: linux-mm, LKML, Peter Zijlstra, Laurent Dufour, Vlastimil Babka,
	Matthew Wilcox, Liam Howlett, Jerome Glisse, Davidlohr Bueso,
	David Rientjes, Hugh Dickins, Ying Han, Jason Gunthorpe,
	Daniel Jordan, John Hubbard
Subject: Re: [PATCH v6 12/12] mmap locking API: convert mmap_sem comments
Message-Id: <20200520202242.dec6b520f0bab4a66a510d73@linux-foundation.org>
In-Reply-To: <20200520052908.204642-13-walken@google.com>
References: <20200520052908.204642-1-walken@google.com>
	<20200520052908.204642-13-walken@google.com>
On Tue, 19 May 2020 22:29:08 -0700 Michel Lespinasse wrote:

> Convert comments that reference mmap_sem to reference mmap_lock instead.

This may not be complete..


From: Andrew Morton
Subject: mmap-locking-api-convert-mmap_sem-comments-fix

fix up linux-next leftovers

Cc: Daniel Jordan
Cc: Davidlohr Bueso
Cc: David Rientjes
Cc: Hugh Dickins
Cc: Jason Gunthorpe
Cc: Jerome Glisse
Cc: John Hubbard
Cc: Laurent Dufour
Cc: Liam Howlett
Cc: Matthew Wilcox
Cc: Michel Lespinasse
Cc: Peter Zijlstra
Cc: Vlastimil Babka
Cc: Ying Han
Signed-off-by: Andrew Morton
---

 arch/powerpc/mm/fault.c |    2 +-
 include/linux/pgtable.h |    6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

--- a/arch/powerpc/mm/fault.c~mmap-locking-api-convert-mmap_sem-comments-fix
+++ a/arch/powerpc/mm/fault.c
@@ -138,7 +138,7 @@ static noinline int bad_access_pkey(stru
 	 * 2. T1   : set AMR to deny access to pkey=4, touches, page
 	 * 3. T1   : faults...
 	 * 4. T2: mprotect_key(foo, PAGE_SIZE, pkey=5);
-	 * 5. T1   : enters fault handler, takes mmap_sem, etc...
+	 * 5. T1   : enters fault handler, takes mmap_lock, etc...
 	 * 6. T1   : reaches here, sees vma_pkey(vma)=5, when we really
 	 *	     faulted on a pte with its pkey=4.
 	 */
--- a/include/linux/pgtable.h~mmap-locking-api-convert-mmap_sem-comments-fix
+++ a/include/linux/pgtable.h
@@ -1101,11 +1101,11 @@ static inline pmd_t pmd_read_atomic(pmd_
 #endif
 /*
  * This function is meant to be used by sites walking pagetables with
- * the mmap_sem hold in read mode to protect against MADV_DONTNEED and
+ * the mmap_lock held in read mode to protect against MADV_DONTNEED and
  * transhuge page faults. MADV_DONTNEED can convert a transhuge pmd
  * into a null pmd and the transhuge page fault can convert a null pmd
  * into an hugepmd or into a regular pmd (if the hugepage allocation
- * fails). While holding the mmap_sem in read mode the pmd becomes
+ * fails). While holding the mmap_lock in read mode the pmd becomes
  * stable and stops changing under us only if it's not null and not a
  * transhuge pmd. When those races occurs and this function makes a
  * difference vs the standard pmd_none_or_clear_bad, the result is
@@ -1115,7 +1115,7 @@ static inline pmd_t pmd_read_atomic(pmd_
  *
  * For 32bit kernels with a 64bit large pmd_t this automatically takes
  * care of reading the pmd atomically to avoid SMP race conditions
- * against pmd_populate() when the mmap_sem is hold for reading by the
+ * against pmd_populate() when the mmap_lock is hold for reading by the
  * caller (a special atomic read not done by "gcc" as in the generic
  * version above, is also needed when THP is disabled because the page
  * fault can populate the pmd from under us).
_
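
For readers following the conversion: below is a minimal sketch (not part of
the patch) of the locking pattern the pgtable.h comment describes, assuming
the mmap_read_lock()/mmap_read_unlock() wrappers introduced earlier in this
series. walk_one_pmd() is a hypothetical helper, shown only to illustrate
why pmd_none_or_trans_huge_or_clear_bad() is used under the read-mode lock.

#include <linux/mm.h>
#include <linux/mmap_lock.h>
#include <linux/pgtable.h>

static int walk_one_pmd(struct mm_struct *mm, unsigned long addr)
{
	pgd_t *pgd;
	p4d_t *p4d;
	pud_t *pud;
	pmd_t *pmd;
	int ret = 0;

	mmap_read_lock(mm);		/* was: down_read(&mm->mmap_sem) */

	pgd = pgd_offset(mm, addr);
	if (pgd_none_or_clear_bad(pgd))
		goto out;
	p4d = p4d_offset(pgd, addr);
	if (p4d_none_or_clear_bad(p4d))
		goto out;
	pud = pud_offset(p4d, addr);
	if (pud_none_or_clear_bad(pud))
		goto out;
	pmd = pmd_offset(pud, addr);

	/*
	 * With only mmap_lock held for read, a concurrent MADV_DONTNEED
	 * or THP fault can still change *pmd, hence the _trans_huge_
	 * variant rather than plain pmd_none_or_clear_bad().
	 */
	if (pmd_none_or_trans_huge_or_clear_bad(pmd))
		goto out;

	ret = 1;			/* stable, present, non-huge pmd */
out:
	mmap_read_unlock(mm);		/* was: up_read(&mm->mmap_sem) */
	return ret;
}

The same pattern previously took &mm->mmap_sem with down_read()/up_read();
the series changes the API names and the comments, not the locking
semantics.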