Date: Fri, 27 Sep 2019 15:10:33 -0700
From: Andrew Morton
To: Wei Yang
Cc: aarcange@redhat.com, hughd@google.com, mike.kravetz@oracle.com, linux-mm@kvack.org
Subject: Re: [PATCH v2 1/3] userfaultfd: use vma_pagesize for all huge page size calculation
Message-Id: <20190927151033.aad57472652a0b3a6948df6e@linux-foundation.org>
In-Reply-To: <20190927070032.2129-1-richardw.yang@linux.intel.com>
References: <20190927070032.2129-1-richardw.yang@linux.intel.com>

On Fri, 27 Sep 2019 15:00:30 +0800 Wei Yang wrote:

> In function __mcopy_atomic_hugetlb, we use two variables to deal with
> huge page size: vma_hpagesize and huge_page_size.
>
> Since they are the same, it is not necessary to use two different
> mechanisms. This patch makes it consistent by using vma_hpagesize
> throughout.
>
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -262,7 +262,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
> 		pte_t dst_pteval;
>
> 		BUG_ON(dst_addr >= dst_start + len);
> -		VM_BUG_ON(dst_addr & ~huge_page_mask(h));
> +		VM_BUG_ON(dst_addr & (vma_hpagesize - 1));
>
> 		/*
> 		 * Serialize via hugetlb_fault_mutex
> @@ -273,7 +273,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
> 		mutex_lock(&hugetlb_fault_mutex_table[hash]);
>
> 		err = -ENOMEM;
> -		dst_pte = huge_pte_alloc(dst_mm, dst_addr, huge_page_size(h));
> +		dst_pte = huge_pte_alloc(dst_mm, dst_addr, vma_hpagesize);
> 		if (!dst_pte) {
> 			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
> 			goto out_unlock;
> @@ -300,7 +300,8 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>
> 		err = copy_huge_page_from_user(page,
> 				(const void __user *)src_addr,
> -				pages_per_huge_page(h), true);
> +				vma_hpagesize / PAGE_SIZE,
> +				true);
> 		if (unlikely(err)) {
> 			err = -EFAULT;
> 			goto out;

Looks right. We could go ahead and remove the local variable `h',
given that hugetlb_fault_mutex_hash() doesn't actually use its first
arg.