From mboxrd@z Thu Jan 1 00:00:00 1970
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Martin Cracauer, Mike Rapoport, Hugh Dickins, Jerome Glisse,
	peterx@redhat.com, "Kirill A . Shutemov", Matthew Wilcox,
	Pavel Emelyanov, Brian Geffon, Maya Gokhale, Denis Plotnikov,
	Andrea Arcangeli, Johannes Weiner, "Dr . David Alan Gilbert",
	Linus Torvalds, Mike Kravetz, Marty McFadden, David Hildenbrand,
	Bobby Powers, Mel Gorman
Subject: [PATCH v6 01/16] mm/gup: Rename "nonblocking" to "locked" where proper
Date: Thu, 20 Feb 2020 09:54:17 -0500
Message-Id: <20200220145432.4561-2-peterx@redhat.com>
X-Mailer: git-send-email 2.24.1
In-Reply-To: <20200220145432.4561-1-peterx@redhat.com>
References: <20200220145432.4561-1-peterx@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

There are plenty of places around __get_user_pages() that have a
parameter named "nonblocking" which does not really mean "it won't
block" (because it can block); instead, it indicates whether the
mmap_sem is released by up_read() during the page fault handling,
mostly when VM_FAULT_RETRY is returned.  We have the correct naming
in e.g.
get_user_pages_locked() or get_user_pages_remote() as "locked";
however, many places still use the name "nonblocking".  Rename those
places to "locked" where proper, to better suit the functionality of
the variable.  While at it, fix up some of the comments accordingly.

Reviewed-by: Mike Rapoport
Reviewed-by: Jerome Glisse
Reviewed-by: David Hildenbrand
Signed-off-by: Peter Xu
---
 mm/gup.c     | 44 +++++++++++++++++++++-----------------------
 mm/hugetlb.c |  8 ++++----
 2 files changed, 25 insertions(+), 27 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 1b521e0ac1de..1b4411bd0042 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -630,12 +630,12 @@ static int get_gate_page(struct mm_struct *mm, unsigned long address,
 }
 
 /*
- * mmap_sem must be held on entry.  If @nonblocking != NULL and
- * *@flags does not include FOLL_NOWAIT, the mmap_sem may be released.
- * If it is, *@nonblocking will be set to 0 and -EBUSY returned.
+ * mmap_sem must be held on entry.  If @locked != NULL and *@flags
+ * does not include FOLL_NOWAIT, the mmap_sem may be released.  If it
+ * is, *@locked will be set to 0 and -EBUSY returned.
 */
 static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
-		unsigned long address, unsigned int *flags, int *nonblocking)
+		unsigned long address, unsigned int *flags, int *locked)
 {
 	unsigned int fault_flags = 0;
 	vm_fault_t ret;
@@ -647,7 +647,7 @@ static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
 		fault_flags |= FAULT_FLAG_WRITE;
 	if (*flags & FOLL_REMOTE)
 		fault_flags |= FAULT_FLAG_REMOTE;
-	if (nonblocking)
+	if (locked)
 		fault_flags |= FAULT_FLAG_ALLOW_RETRY;
 	if (*flags & FOLL_NOWAIT)
 		fault_flags |= FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_RETRY_NOWAIT;
@@ -673,8 +673,8 @@ static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
 	}
 
 	if (ret & VM_FAULT_RETRY) {
-		if (nonblocking && !(fault_flags & FAULT_FLAG_RETRY_NOWAIT))
-			*nonblocking = 0;
+		if (locked && !(fault_flags & FAULT_FLAG_RETRY_NOWAIT))
+			*locked = 0;
 		return -EBUSY;
 	}
 
@@ -751,7 +751,7 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
  *		only intends to ensure the pages are faulted in.
  * @vmas:	array of pointers to vmas corresponding to each page.
  *		Or NULL if the caller does not require them.
- * @nonblocking: whether waiting for disk IO or mmap_sem contention
+ * @locked: whether we're still with the mmap_sem held
  *
  * Returns either number of pages pinned (which may be less than the
  * number requested), or an error. Details about the return value:
@@ -786,13 +786,11 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
  * appropriate) must be called after the page is finished with, and
  * before put_page is called.
  *
- * If @nonblocking != NULL, __get_user_pages will not wait for disk IO
- * or mmap_sem contention, and if waiting is needed to pin all pages,
- * *@nonblocking will be set to 0.  Further, if @gup_flags does not
- * include FOLL_NOWAIT, the mmap_sem will be released via up_read() in
- * this case.
+ * If @locked != NULL, *@locked will be set to 0 when mmap_sem is
+ * released by an up_read().  That can happen if @gup_flags does not
+ * have FOLL_NOWAIT.
  *
- * A caller using such a combination of @nonblocking and @gup_flags
+ * A caller using such a combination of @locked and @gup_flags
  * must therefore hold the mmap_sem for reading only, and recognize
  * when it's been released.  Otherwise, it must be held for either
  * reading or writing and will not be released.
@@ -804,7 +802,7 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		unsigned long start, unsigned long nr_pages,
 		unsigned int gup_flags, struct page **pages,
-		struct vm_area_struct **vmas, int *nonblocking)
+		struct vm_area_struct **vmas, int *locked)
 {
 	long ret = 0, i = 0;
 	struct vm_area_struct *vma = NULL;
@@ -850,7 +848,7 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 			if (is_vm_hugetlb_page(vma)) {
 				i = follow_hugetlb_page(mm, vma, pages, vmas,
 						&start, &nr_pages, i,
-						gup_flags, nonblocking);
+						gup_flags, locked);
 				continue;
 			}
 		}
@@ -868,7 +866,7 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		page = follow_page_mask(vma, start, foll_flags, &ctx);
 		if (!page) {
 			ret = faultin_page(tsk, vma, start, &foll_flags,
-					nonblocking);
+					 locked);
 			switch (ret) {
 			case 0:
 				goto retry;
@@ -1129,7 +1127,7 @@ static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
  * @vma: target vma
  * @start: start address
  * @end: end address
- * @nonblocking:
+ * @locked: whether the mmap_sem is still held
  *
  * This takes care of mlocking the pages too if VM_LOCKED is set.
  *
@@ -1137,14 +1135,14 @@ static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
  *
  * vma->vm_mm->mmap_sem must be held.
  *
- * If @nonblocking is NULL, it may be held for read or write and will
+ * If @locked is NULL, it may be held for read or write and will
  * be unperturbed.
  *
- * If @nonblocking is non-NULL, it must be held for read only and may be
- * released.  If it's released, *@nonblocking will be set to 0.
+ * If @locked is non-NULL, it must be held for read only and may be
+ * released.  If it's released, *@locked will be set to 0.
  */
 long populate_vma_page_range(struct vm_area_struct *vma,
-		unsigned long start, unsigned long end, int *nonblocking)
+		unsigned long start, unsigned long end, int *locked)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long nr_pages = (end - start) / PAGE_SIZE;
@@ -1179,7 +1177,7 @@ long populate_vma_page_range(struct vm_area_struct *vma,
 	 * not result in a stack expansion that recurses back here.
 	 */
 	return __get_user_pages(current, mm, start, nr_pages, gup_flags,
-				NULL, NULL, nonblocking);
+				NULL, NULL, locked);
 }
 
 /*
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index dd8737a94bec..c84f721db020 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4266,7 +4266,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 			 struct page **pages, struct vm_area_struct **vmas,
 			 unsigned long *position, unsigned long *nr_pages,
-			 long i, unsigned int flags, int *nonblocking)
+			 long i, unsigned int flags, int *locked)
 {
 	unsigned long pfn_offset;
 	unsigned long vaddr = *position;
@@ -4337,7 +4337,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 				spin_unlock(ptl);
 			if (flags & FOLL_WRITE)
 				fault_flags |= FAULT_FLAG_WRITE;
-			if (nonblocking)
+			if (locked)
 				fault_flags |= FAULT_FLAG_ALLOW_RETRY;
 			if (flags & FOLL_NOWAIT)
 				fault_flags |= FAULT_FLAG_ALLOW_RETRY |
@@ -4354,9 +4354,9 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 				break;
 			}
 			if (ret & VM_FAULT_RETRY) {
-				if (nonblocking &&
+				if (locked &&
 				    !(fault_flags & FAULT_FLAG_RETRY_NOWAIT))
-					*nonblocking = 0;
+					*locked = 0;
 				*nr_pages = 0;
 				/*
				 * VM_FAULT_RETRY must not return an
-- 
2.24.1