From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrea Arcangeli, Martin Cracauer, Linus Torvalds, Mike Rapoport,
    "Kirill A . Shutemov", Johannes Weiner, "Dr . David Alan Gilbert",
    David Hildenbrand, Bobby Powers, Maya Gokhale, Jerome Glisse,
    Mike Kravetz, Matthew Wilcox, Marty McFadden, Mel Gorman,
    peterx@redhat.com, Hugh Dickins, Brian Geffon, Denis Plotnikov,
    Pavel Emelyanov
Subject: [PATCH RESEND v6 01/16] mm/gup: Rename "nonblocking" to "locked" where proper
Date: Thu, 20 Feb 2020 10:53:38 -0500
Message-Id: <20200220155353.8676-2-peterx@redhat.com>
In-Reply-To: <20200220155353.8676-1-peterx@redhat.com>
References: <20200220155353.8676-1-peterx@redhat.com>

There are plenty of places around __get_user_pages() that take a
parameter named "nonblocking" which does not really mean "it won't
block" (it can still block); rather, it indicates whether the mmap_sem
has been released by up_read() during page fault handling, mostly when
VM_FAULT_RETRY is returned.  We have the correct naming in e.g.
get_user_pages_locked() or get_user_pages_remote() as "locked";
however, many places still use "nonblocking" as the name.  Rename
those to "locked" where proper, to better suit the functionality of
the variable.  While at it, fix up some of the comments accordingly.

Reviewed-by: Mike Rapoport
Reviewed-by: Jerome Glisse
Reviewed-by: David Hildenbrand
Signed-off-by: Peter Xu
---
 mm/gup.c     | 44 +++++++++++++++++++++-----------------------
 mm/hugetlb.c |  8 ++++----
 2 files changed, 25 insertions(+), 27 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 1b521e0ac1de..1b4411bd0042 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -630,12 +630,12 @@ static int get_gate_page(struct mm_struct *mm, unsigned long address,
 }
 
 /*
- * mmap_sem must be held on entry.  If @nonblocking != NULL and
- * *@flags does not include FOLL_NOWAIT, the mmap_sem may be released.
- * If it is, *@nonblocking will be set to 0 and -EBUSY returned.
+ * mmap_sem must be held on entry.  If @locked != NULL and *@flags
+ * does not include FOLL_NOWAIT, the mmap_sem may be released.  If it
+ * is, *@locked will be set to 0 and -EBUSY returned.
  */
 static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
-		unsigned long address, unsigned int *flags, int *nonblocking)
+		unsigned long address, unsigned int *flags, int *locked)
 {
 	unsigned int fault_flags = 0;
 	vm_fault_t ret;
@@ -647,7 +647,7 @@ static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
 		fault_flags |= FAULT_FLAG_WRITE;
 	if (*flags & FOLL_REMOTE)
 		fault_flags |= FAULT_FLAG_REMOTE;
-	if (nonblocking)
+	if (locked)
 		fault_flags |= FAULT_FLAG_ALLOW_RETRY;
 	if (*flags & FOLL_NOWAIT)
 		fault_flags |= FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_RETRY_NOWAIT;
@@ -673,8 +673,8 @@ static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
 	}
 
 	if (ret & VM_FAULT_RETRY) {
-		if (nonblocking && !(fault_flags & FAULT_FLAG_RETRY_NOWAIT))
-			*nonblocking = 0;
+		if (locked && !(fault_flags & FAULT_FLAG_RETRY_NOWAIT))
+			*locked = 0;
 		return -EBUSY;
 	}
 
@@ -751,7 +751,7 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
  *		only intends to ensure the pages are faulted in.
  * @vmas:	array of pointers to vmas corresponding to each page.
  *		Or NULL if the caller does not require them.
- * @nonblocking: whether waiting for disk IO or mmap_sem contention
+ * @locked: whether we're still with the mmap_sem held
  *
  * Returns either number of pages pinned (which may be less than the
  * number requested), or an error. Details about the return value:
@@ -786,13 +786,11 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
  * appropriate) must be called after the page is finished with, and
  * before put_page is called.
  *
- * If @nonblocking != NULL, __get_user_pages will not wait for disk IO
- * or mmap_sem contention, and if waiting is needed to pin all pages,
- * *@nonblocking will be set to 0.  Further, if @gup_flags does not
- * include FOLL_NOWAIT, the mmap_sem will be released via up_read() in
- * this case.
+ * If @locked != NULL, *@locked will be set to 0 when mmap_sem is
+ * released by an up_read().  That can happen if @gup_flags does not
+ * have FOLL_NOWAIT.
  *
- * A caller using such a combination of @nonblocking and @gup_flags
+ * A caller using such a combination of @locked and @gup_flags
  * must therefore hold the mmap_sem for reading only, and recognize
  * when it's been released.  Otherwise, it must be held for either
  * reading or writing and will not be released.
@@ -804,7 +802,7 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		unsigned long start, unsigned long nr_pages,
 		unsigned int gup_flags, struct page **pages,
-		struct vm_area_struct **vmas, int *nonblocking)
+		struct vm_area_struct **vmas, int *locked)
 {
 	long ret = 0, i = 0;
 	struct vm_area_struct *vma = NULL;
@@ -850,7 +848,7 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 			if (is_vm_hugetlb_page(vma)) {
 				i = follow_hugetlb_page(mm, vma, pages, vmas,
 						&start, &nr_pages, i,
-						gup_flags, nonblocking);
+						gup_flags, locked);
 				continue;
 			}
 		}
@@ -868,7 +866,7 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		page = follow_page_mask(vma, start, foll_flags, &ctx);
 		if (!page) {
 			ret = faultin_page(tsk, vma, start, &foll_flags,
-					nonblocking);
+					   locked);
 			switch (ret) {
 			case 0:
 				goto retry;
@@ -1129,7 +1127,7 @@ static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
 * @vma: target vma
 * @start: start address
 * @end: end address
- * @nonblocking:
+ * @locked: whether the mmap_sem is still held
 *
 * This takes care of mlocking the pages too if VM_LOCKED is set.
 *
@@ -1137,14 +1135,14 @@ static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
 *
 * vma->vm_mm->mmap_sem must be held.
 *
- * If @nonblocking is NULL, it may be held for read or write and will
+ * If @locked is NULL, it may be held for read or write and will
 * be unperturbed.
 *
- * If @nonblocking is non-NULL, it must held for read only and may be
- * released.  If it's released, *@nonblocking will be set to 0.
+ * If @locked is non-NULL, it must held for read only and may be
+ * released.  If it's released, *@locked will be set to 0.
 */
 long populate_vma_page_range(struct vm_area_struct *vma,
-		unsigned long start, unsigned long end, int *nonblocking)
+		unsigned long start, unsigned long end, int *locked)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long nr_pages = (end - start) / PAGE_SIZE;
@@ -1179,7 +1177,7 @@ long populate_vma_page_range(struct vm_area_struct *vma,
 	 * not result in a stack expansion that recurses back here.
 	 */
 	return __get_user_pages(current, mm, start, nr_pages, gup_flags,
-				NULL, NULL, nonblocking);
+				NULL, NULL, locked);
 }
 
 /*
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index dd8737a94bec..c84f721db020 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4266,7 +4266,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 			 struct page **pages, struct vm_area_struct **vmas,
 			 unsigned long *position, unsigned long *nr_pages,
-			 long i, unsigned int flags, int *nonblocking)
+			 long i, unsigned int flags, int *locked)
 {
 	unsigned long pfn_offset;
 	unsigned long vaddr = *position;
@@ -4337,7 +4337,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 				spin_unlock(ptl);
 			if (flags & FOLL_WRITE)
 				fault_flags |= FAULT_FLAG_WRITE;
-			if (nonblocking)
+			if (locked)
 				fault_flags |= FAULT_FLAG_ALLOW_RETRY;
 			if (flags & FOLL_NOWAIT)
 				fault_flags |= FAULT_FLAG_ALLOW_RETRY |
@@ -4354,9 +4354,9 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 				break;
 			}
 			if (ret & VM_FAULT_RETRY) {
-				if (nonblocking &&
+				if (locked &&
 				    !(fault_flags & FAULT_FLAG_RETRY_NOWAIT))
-					*nonblocking = 0;
+					*locked = 0;
 				*nr_pages = 0;
 				/*
				 * VM_FAULT_RETRY must not return an
-- 
2.24.1