From: Ryan Roberts <ryan.roberts@arm.com>
To: Kefeng Wang, Andrew Morton
Cc: linux-mm@kvack.org, David Hildenbrand
Subject: Re: [PATCH] mm: add more readable thp_vma_allowable_order_foo()
Date: Wed, 24 Apr 2024 15:05:01 +0100
Message-ID: <2e878f93-2b45-401c-8ee1-00338e22866b@arm.com>
In-Reply-To: <20240424140715.5838-1-wangkefeng.wang@huawei.com>
References: <20240424140715.5838-1-wangkefeng.wang@huawei.com>
On 24/04/2024 15:07, Kefeng Wang wrote:
> There are too many bool arguments in thp_vma_allowable_orders(), so add
> some more readable thp_vma_allowable_order_foo() wrappers:
> 
> thp_vma_allowable_orders_insmaps() is used in smaps
> thp_vma_allowable_order[s]_inpf() is used in the page fault path
> thp_vma_allowable_pmd_order_inhuge() is used in the khugepaged scan and madvise paths
> 
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>

Just one nit below. With that addressed:

Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>

> ---
>  fs/proc/task_mmu.c      |  3 +--
>  include/linux/huge_mm.h | 14 ++++++++++++--
>  mm/khugepaged.c         | 20 ++++++++------------
>  mm/memory.c             |  8 ++++----
>  4 files changed, 25 insertions(+), 20 deletions(-)
> 
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index f4259b7edfde..1136aa97f143 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -871,8 +871,7 @@ static int show_smap(struct seq_file *m, void *v)
>  	__show_smap(m, &mss, false);
>  
>  	seq_printf(m, "THPeligible:    %8u\n",
> -		   !!thp_vma_allowable_orders(vma, vma->vm_flags, true, false,
> -					      true, THP_ORDERS_ALL));
> +		   thp_vma_allowable_orders_insmaps(vma, vma->vm_flags));
>  
>  	if (arch_pkeys_enabled())
>  		seq_printf(m, "ProtectionKey:  %8u\n", vma_pkey(vma));
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 56c7ea73090b..345cf394480b 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -83,8 +83,18 @@ extern struct kobj_attribute shmem_enabled_attr;
>   */
>  #define THP_ORDERS_ALL	(THP_ORDERS_ALL_ANON | THP_ORDERS_ALL_FILE)
>  
> -#define thp_vma_allowable_order(vma, vm_flags, smaps, in_pf, enforce_sysfs, order) \
> -	(!!thp_vma_allowable_orders(vma, vm_flags, smaps, in_pf, enforce_sysfs, BIT(order)))
> +#define thp_vma_allowable_orders_insmaps(vma, vm_flags) \
> +	(!!thp_vma_allowable_orders(vma, vm_flags, true, false, true, THP_ORDERS_ALL))
> +
> +#define thp_vma_allowable_orders_inpf(vma, vm_flags, orders) \
> +	(!!thp_vma_allowable_orders(vma, vm_flags, false, true, true, orders))
> +
> +#define thp_vma_allowable_order_inpf(vma, vm_flags, order) \
> +	(!!thp_vma_allowable_orders_inpf(vma, vm_flags, BIT(order)))
> +
> +#define thp_vma_allowable_pmd_order_inhuge(vma, vm_flags, enforce_sysfs) \
> +	(!!thp_vma_allowable_orders(vma, vm_flags, false, false, enforce_sysfs, BIT(PMD_ORDER)))

nit: Personally I'd leave the order as an argument rather than encoding it
in the name. It's likely that khugepaged will grow support for non-PMD-size
collapse in future. The first part of the name, "thp_vma_allowable_order",
is then consistent and easy to search for across all variants. And perhaps
"inkhuge" is more precise?
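To illustrate what I mean (rough sketch, completely untested, and "inkhuge"
is just a strawman name):

#define thp_vma_allowable_order_inkhuge(vma, vm_flags, enforce_sysfs, order) \
	(!!thp_vma_allowable_orders(vma, vm_flags, false, false, enforce_sysfs, BIT(order)))

Callers would then spell the order out explicitly, e.g. in
hugepage_vma_revalidate():

	if (!thp_vma_allowable_order_inkhuge(vma, vma->vm_flags,
					     cc->is_khugepaged, PMD_ORDER))
		return SCAN_VMA_CHECK;

That keeps the order visible at the call site and means nothing needs
renaming if/when khugepaged learns to collapse to other orders.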
> +
>  
>  #ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
>  #define HPAGE_PMD_SHIFT PMD_SHIFT
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 2f73d2aa9ae8..5a27dccfda02 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -453,8 +453,7 @@ void khugepaged_enter_vma(struct vm_area_struct *vma,
>  {
>  	if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) &&
>  	    hugepage_flags_enabled()) {
> -		if (thp_vma_allowable_order(vma, vm_flags, false, false, true,
> -					    PMD_ORDER))
> +		if (thp_vma_allowable_pmd_order_inhuge(vma, vm_flags, true))
>  			__khugepaged_enter(vma->vm_mm);
>  	}
>  }
> @@ -909,15 +908,15 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
>  
>  	if (!thp_vma_suitable_order(vma, address, PMD_ORDER))
>  		return SCAN_ADDRESS_RANGE;
> -	if (!thp_vma_allowable_order(vma, vma->vm_flags, false, false,
> -				     cc->is_khugepaged, PMD_ORDER))
> +	if (!thp_vma_allowable_pmd_order_inhuge(vma, vma->vm_flags,
> +						cc->is_khugepaged))
>  		return SCAN_VMA_CHECK;
>  	/*
>  	 * Anon VMA expected, the address may be unmapped then
>  	 * remapped to file after khugepaged reaquired the mmap_lock.
>  	 *
> -	 * thp_vma_allowable_order may return true for qualified file
> -	 * vmas.
> +	 * thp_vma_allowable_pmd_order_inhuge may return true for
> +	 * qualified file vmas.
>  	 */
>  	if (expect_anon && (!(*vmap)->anon_vma || !vma_is_anonymous(*vmap)))
>  		return SCAN_PAGE_ANON;
> @@ -1493,8 +1492,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>  	 * and map it by a PMD, regardless of sysfs THP settings. As such, let's
>  	 * analogously elide sysfs THP settings here.
>  	 */
> -	if (!thp_vma_allowable_order(vma, vma->vm_flags, false, false, false,
> -				     PMD_ORDER))
> +	if (!thp_vma_allowable_pmd_order_inhuge(vma, vma->vm_flags, false))
>  		return SCAN_VMA_CHECK;
>  
>  	/* Keep pmd pgtable for uffd-wp; see comment in retract_page_tables() */
> @@ -2355,8 +2353,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
>  			progress++;
>  			break;
>  		}
> -		if (!thp_vma_allowable_order(vma, vma->vm_flags, false, false,
> -					     true, PMD_ORDER)) {
> +		if (!thp_vma_allowable_pmd_order_inhuge(vma, vma->vm_flags, true)) {
>  skip:
>  			progress++;
>  			continue;
> @@ -2693,8 +2690,7 @@ int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
>  
>  	*prev = vma;
>  
> -	if (!thp_vma_allowable_order(vma, vma->vm_flags, false, false, false,
> -				     PMD_ORDER))
> +	if (!thp_vma_allowable_pmd_order_inhuge(vma, vma->vm_flags, false))
>  		return -EINVAL;
>  
>  	cc = kmalloc(sizeof(*cc), GFP_KERNEL);
> diff --git a/mm/memory.c b/mm/memory.c
> index 09ed76e5b8c0..8507bfda461a 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4329,8 +4329,8 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
>  	 * for this vma. Then filter out the orders that can't be allocated over
>  	 * the faulting address and still be fully contained in the vma.
>  	 */
> -	orders = thp_vma_allowable_orders(vma, vma->vm_flags, false, true, true,
> -					  BIT(PMD_ORDER) - 1);
> +	orders = thp_vma_allowable_orders_inpf(vma, vma->vm_flags,
> +					       BIT(PMD_ORDER) - 1);
>  	orders = thp_vma_suitable_orders(vma, vmf->address, orders);
>  
>  	if (!orders)
> @@ -5433,7 +5433,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
>  		return VM_FAULT_OOM;
>  retry_pud:
>  	if (pud_none(*vmf.pud) &&
> -	    thp_vma_allowable_order(vma, vm_flags, false, true, true, PUD_ORDER)) {
> +	    thp_vma_allowable_order_inpf(vma, vm_flags, PUD_ORDER)) {
>  		ret = create_huge_pud(&vmf);
>  		if (!(ret & VM_FAULT_FALLBACK))
>  			return ret;
> @@ -5467,7 +5467,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
>  		goto retry_pud;
>  
>  	if (pmd_none(*vmf.pmd) &&
> -	    thp_vma_allowable_order(vma, vm_flags, false, true, true, PMD_ORDER)) {
> +	    thp_vma_allowable_order_inpf(vma, vm_flags, PMD_ORDER)) {
>  		ret = create_huge_pmd(&vmf);
>  		if (!(ret & VM_FAULT_FALLBACK))
>  			return ret;