Date: Tue, 21 Apr 2020 16:46:03 +0200
From: Michal Hocko
To: Peter Xu
Cc: Andrew Morton, Linus Torvalds, linux-mm@kvack.org, LKML
Subject: Re: [PATCH] mm, mempolicy: fix up gup usage in lookup_node
Message-ID: <20200421144603.GI27314@dhcp22.suse.cz>
References: <20200421071026.18394-1-mhocko@kernel.org> <20200421132916.GE420399@xz-x1>
In-Reply-To: <20200421132916.GE420399@xz-x1>

On Tue 21-04-20 09:29:16, Peter Xu wrote:
> On Tue, Apr 21, 2020 at 09:10:26AM +0200, Michal Hocko wrote:
> > From: Michal Hocko
> >
> > ba841078cd05 ("mm/mempolicy: Allow lookup_node() to handle fatal signal") has
> > added a special casing for a 0 return value because that was a possible
> > gup return value when interrupted by fatal signal. This has been fixed
> > by ae46d2aa6a7f ("mm/gup: Let __get_user_pages_locked() return -EINTR
> > for fatal signal") in the meantime, so ba841078cd05 can be reverted.
> >
> > This patch, however, doesn't go all the way to revert it, because the
> > check for 0 is wrong and confusing here. Firstly, it is inherently
> > unsafe to access the page when get_user_pages_locked returns 0 (aka no
> > page returned). Fortunately, this will not happen, because
> > get_user_pages_locked will not return 0 when nr_pages > 0 unless
> > FOLL_NOWAIT is specified, which is not the case here. Document this
> > potential error code in the gup code while we are at it.
> >
> > Signed-off-by: Michal Hocko
> > ---
> >  mm/gup.c       | 5 +++++
> >  mm/mempolicy.c | 5 +----
> >  2 files changed, 6 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/gup.c b/mm/gup.c
> > index 50681f0286de..a8575b880baf 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -980,6 +980,7 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
> >   * -- If nr_pages is >0, but no pages were pinned, returns -errno.
> >   * -- If nr_pages is >0, and some pages were pinned, returns the number of
> >   *    pages pinned. Again, this may be less than nr_pages.
> > + * -- 0 return value is possible when the fault would need to be retried.
> >   *
> >   * The caller is responsible for releasing returned @pages, via put_page().
> >   *
> > @@ -1247,6 +1248,10 @@ int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
> >  }
> >  EXPORT_SYMBOL_GPL(fixup_user_fault);
> >
> > +/*
> > + * Please note that this function, unlike __get_user_pages will not
> > + * return 0 for nr_pages > 0 without FOLL_NOWAIT
>
> It's a bit unclear to me whether "will not return 0" applies to "this
> function" or to "__get_user_pages"... Might be easier just to avoid
> mentioning __get_user_pages?

I really wanted to call out __get_user_pages because the semantics of the
0 return value are different there. If you have a suggestion for how to
reformulate this more clearly, I will incorporate it.
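
For readers following along, here is a much-simplified userspace model of
what the locked wrapper does with a 0 return from __get_user_pages. This is
only an illustration: mock_gup(), mock_gup_locked() and the FOLL_NOWAIT
value are made-up stand-ins, not the kernel functions. The point it shows is
why a bare 0 cannot reach callers of the locked variant unless FOLL_NOWAIT
short-circuits the retry.

/*
 * Simplified userspace model, for illustration only.
 * mock_gup() stands in for __get_user_pages(): it returns the number of
 * pages "pinned", 0 when the fault would need to be retried (after the
 * fault path dropped the "lock"), or a negative errno.
 * mock_gup_locked() stands in for the locked wrapper: it keeps retrying,
 * so 0 never escapes to its callers unless FOLL_NOWAIT is set.
 */
#include <stdio.h>

#define FOLL_NOWAIT 0x1	/* illustrative value, not the kernel's */

static long mock_gup(long nr_pages, int *locked)
{
	static int faults;

	if (faults++ < 2) {	/* pretend the first attempts hit a blocking fault */
		*locked = 0;	/* the fault path dropped the lock */
		return 0;	/* fault needs to be retried */
	}
	return nr_pages;	/* all pages "pinned" */
}

static long mock_gup_locked(long nr_pages, unsigned int flags)
{
	int locked = 1;
	long ret;

	for (;;) {
		ret = mock_gup(nr_pages, &locked);
		if (ret != 0 || (flags & FOLL_NOWAIT))
			return ret;	/* pinned pages, an error, or a NOWAIT caller */
		locked = 1;		/* "retake the lock" and retry the fault */
	}
}

int main(void)
{
	/* Without FOLL_NOWAIT the caller only ever sees > 0 or < 0. */
	printf("mock_gup_locked() -> %ld\n", mock_gup_locked(1, 0));
	return 0;
}

Since lookup_node() passes 0 for gup_flags, the FOLL_NOWAIT escape in the
sketch never applies there, which is why the err == 0 branch removed below
is dead code.
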
> > + */
> >  static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
> >  						struct mm_struct *mm,
> >  						unsigned long start,
> > diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> > index 48ba9729062e..1965e2681877 100644
> > --- a/mm/mempolicy.c
> > +++ b/mm/mempolicy.c
> > @@ -927,10 +927,7 @@ static int lookup_node(struct mm_struct *mm, unsigned long addr)
> >
> >  	int locked = 1;
> >  	err = get_user_pages_locked(addr & PAGE_MASK, 1, 0, &p, &locked);
> > -	if (err == 0) {
> > -		/* E.g. GUP interrupted by fatal signal */
> > -		err = -EFAULT;
> > -	} else if (err > 0) {
> > +	if (err > 0) {
> >  		err = page_to_nid(p);
> >  		put_page(p);
> >  	}
>
> Again, this is my totally humble opinion: I'm fine with removing the
> comment; however, I still don't think it's helpful to explicitly remove a
> check against an invalid return value (err == 0), especially when that's
> the only functional change in this patch.

I thought I had explained that when we discussed this last time, and the
changelog explains it as well. Checking for an impossible error code is
simply confusing and invites copy&pasting of the pattern. I wouldn't
really bother if I hadn't seen this cargo-cult pattern so many times.

-- 
Michal Hocko
SUSE Labs