From: Peter Xu <peterx@redhat.com>
Date: Tue, 20 Jun 2023 12:28:26 -0400
To: David Hildenbrand
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrea Arcangeli, Mike Rapoport, Matthew Wilcox, Vlastimil Babka, John Hubbard, "Kirill A. Shutemov", James Houghton, Andrew Morton, Lorenzo Stoakes, Hugh Dickins, Mike Kravetz, Jason Gunthorpe
Subject: Re: [PATCH v2 3/8] mm/hugetlb: Add page_mask for hugetlb_follow_page_mask()
References: <20230619231044.112894-1-peterx@redhat.com> <20230619231044.112894-4-peterx@redhat.com>
On Tue, Jun 20, 2023 at 05:23:09PM +0200, David Hildenbrand wrote:
> On 20.06.23 01:10, Peter Xu wrote:
> > follow_page() doesn't need it, but we'll start to need it when unifying gup
> > for hugetlb.
> >
> > Signed-off-by: Peter Xu
> > ---
> >  include/linux/hugetlb.h | 8 +++++---
> >  mm/gup.c                | 3 ++-
> >  mm/hugetlb.c            | 5 ++++-
> >  3 files changed, 11 insertions(+), 5 deletions(-)
> >
> > diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> > index beb7c63d2871..2e2d89e79d6c 100644
> > --- a/include/linux/hugetlb.h
> > +++ b/include/linux/hugetlb.h
> > @@ -131,7 +131,8 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
> >  int copy_hugetlb_page_range(struct mm_struct *, struct mm_struct *,
> >  			    struct vm_area_struct *, struct vm_area_struct *);
> >  struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
> > -				      unsigned long address, unsigned int flags);
> > +				      unsigned long address, unsigned int flags,
> > +				      unsigned int *page_mask);
> >  long follow_hugetlb_page(struct mm_struct *, struct vm_area_struct *,
> >  			 struct page **, unsigned long *, unsigned long *,
> >  			 long, unsigned int, int *);
> > @@ -297,8 +298,9 @@ static inline void adjust_range_if_pmd_sharing_possible(
> >  {
> >  }
> >
> > -static inline struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
> > -				unsigned long address, unsigned int flags)
> > +static inline struct page *hugetlb_follow_page_mask(
> > +	struct vm_area_struct *vma, unsigned long address, unsigned int flags,
> > +	unsigned int *page_mask)
> >  {
> >  	BUILD_BUG(); /* should never be compiled in if !CONFIG_HUGETLB_PAGE*/
> >  }
> > diff --git a/mm/gup.c b/mm/gup.c
> > index abcd841d94b7..9fc9271cba8d 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -780,7 +780,8 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
> >  	 * Ordinary GUP uses follow_hugetlb_page for hugetlb processing.
> >  	 */
> >  	if (is_vm_hugetlb_page(vma))
> > -		return hugetlb_follow_page_mask(vma, address, flags);
> > +		return hugetlb_follow_page_mask(vma, address, flags,
> > +						&ctx->page_mask);
> >
> >  	pgd = pgd_offset(mm, address);
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 9a6918c4250a..fbf6a09c0ec4 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -6454,7 +6454,8 @@ static inline bool __follow_hugetlb_must_fault(struct vm_area_struct *vma,
> >  }
> >
> >  struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
> > -				      unsigned long address, unsigned int flags)
> > +				      unsigned long address, unsigned int flags,
> > +				      unsigned int *page_mask)
> >  {
> >  	struct hstate *h = hstate_vma(vma);
> >  	struct mm_struct *mm = vma->vm_mm;
> > @@ -6499,6 +6500,8 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
> >  		page = NULL;
> >  		goto out;
> >  	}
> > +
> > +	*page_mask = ~huge_page_mask(h) >> PAGE_SHIFT;
> 
> As discussed, can be simplified. But can be done on top (or not at all, but
> it is confusing code).

Since we decided to make this prettier.. At last I decided to go with this:

	*page_mask = (1U << huge_page_order(h)) - 1;

The previous suggestion of PHYS_PFN() will do two shifts over PAGE_SIZE (the
other one in huge_page_size()) which might be unnecessary; also, PHYS_ can be
slightly misleading as a prefix.

> 
> Reviewed-by: David Hildenbrand

I'll take this with above change, please shoot if not applicable.

Thanks,

-- 
Peter Xu