From: Shigeru Yoshida <syoshida@redhat.com>
Date: Tue, 23 Aug 2022 00:23:47 +0900
Subject: Re: [PATCH] mm/gup.c: Fix return value for __gup_longterm_locked()
To: John Hubbard <jhubbard@nvidia.com>
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Alistair Popple <apopple@nvidia.com>,
    syzbot+616ff0452fec30f4dcfd@syzkaller.appspotmail.com
In-Reply-To: <7398324a-2134-3501-ca81-67a56a4c8629@nvidia.com>
References: <20220821183547.950370-1-syoshida@redhat.com>
 <7398324a-2134-3501-ca81-67a56a4c8629@nvidia.com>

On Mon, Aug 22, 2022 at 9:14 AM John Hubbard <jhubbard@nvidia.com> wrote:
> On 8/21/22 11:35, Shigeru Yoshida wrote:
> > __get_user_pages_locked() may return fewer pages than nr_pages.  So
> > __gup_longterm_locked() has to return the number of pages
> > __get_user_pages_locked() returns if it succeeded, not the nr_pages
> > passed.
>
> s/passed/requested/
>
> >
> > Fixes: 61c63c2076d9 ("mm/gup.c: simplify and fix check_and_migrate_movable_pages() return codes")
> > CC: Alistair Popple <apopple@nvidia.com>
> > Reported-by: syzbot+616ff0452fec30f4dcfd@syzkaller.appspotmail.com
> > Signed-off-by: Shigeru Yoshida <syoshida@redhat.com>
> > ---
> >  mm/gup.c | 12 ++++++------
> >  1 file changed, 6 insertions(+), 6 deletions(-)
>
> Reviewed-by: John Hubbard <jhubbard@nvidia.com>

Hi John,

Thank you so much for your review.  I'll send a v2 patch.
Shigeru

> (with a couple of nits about line length, below)
>
> >
> > diff --git a/mm/gup.c b/mm/gup.c
> > index 5aa7531a703b..542cf74c59b0 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -2068,22 +2068,22 @@ static long __gup_longterm_locked(struct mm_struct *mm,
> >                                     unsigned int gup_flags)
> >  {
> >          unsigned int flags;
> > -        long rc;
> > +        long rc, nr_pinned_pages;
> >
> >          if (!(gup_flags & FOLL_LONGTERM))
> >                  return __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
> >                                                 NULL, gup_flags);
> >          flags = memalloc_pin_save();
> >          do {
> > -                rc = __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
> > -                                             NULL, gup_flags);
> > -                if (rc <= 0)
> > +                nr_pinned_pages = __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
> > +                                                          NULL, gup_flags);
>
> Can you please wrap at 80 cols, though?
>
> > +                if (nr_pinned_pages <= 0)
> >                          break;
> > -                rc = check_and_migrate_movable_pages(rc, pages, gup_flags);
> > +                rc = check_and_migrate_movable_pages(nr_pinned_pages, pages, gup_flags);
>
> Also here.
>
> >          } while (rc == -EAGAIN);
> >          memalloc_pin_restore(flags);
> >
> > -        return rc ? rc : nr_pages;
> > +        return rc ? rc : nr_pinned_pages;
> >  }
> >
> >  static bool is_valid_gup_flags(unsigned int gup_flags)
>
> thanks,
> --
> John Hubbard
> NVIDIA
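
For reference, one way the two call sites flagged above could be wrapped at
80 columns in a v2. This is only a sketch of the requested formatting; the
line breaks and continuation alignment shown here are illustrative
assumptions, not taken from the v2 patch that was actually sent:

        /*
         * Sketch only: same loop as in the hunk above, re-wrapped so that
         * no line exceeds 80 columns. The continuation-line alignment is
         * an assumption, not the actual v2 submission.
         */
        do {
                nr_pinned_pages = __get_user_pages_locked(mm, start, nr_pages,
                                                          pages, vmas, NULL,
                                                          gup_flags);
                if (nr_pinned_pages <= 0)
                        break;
                rc = check_and_migrate_movable_pages(nr_pinned_pages, pages,
                                                     gup_flags);
        } while (rc == -EAGAIN);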