From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 18 Feb 2025 16:29:38 +0000
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC v2 2/9] khugepaged: rename hpage_collapse_* to khugepaged_*
Content-Language: en-GB
From: Ryan Roberts <ryan.roberts@arm.com>
To: Nico Pache <npache@redhat.com>, linux-kernel@vger.kernel.org,
 linux-trace-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: anshuman.khandual@arm.com, catalin.marinas@arm.com, cl@gentwo.org,
 vbabka@suse.cz, mhocko@suse.com, apopple@nvidia.com,
 dave.hansen@linux.intel.com, will@kernel.org, baohua@kernel.org,
 jack@suse.cz, srivatsa@csail.mit.edu, haowenchao22@gmail.com,
 hughd@google.com, aneesh.kumar@kernel.org, yang@os.amperecomputing.com,
 peterx@redhat.com, ioworker0@gmail.com, wangkefeng.wang@huawei.com,
 ziy@nvidia.com, jglisse@google.com, surenb@google.com,
 vishal.moola@gmail.com, zokeefe@google.com, zhengqi.arch@bytedance.com,
 jhubbard@nvidia.com, 21cnbao@gmail.com, willy@infradead.org,
 kirill.shutemov@linux.intel.com, david@redhat.com, aarcange@redhat.com,
 raquini@redhat.com, dev.jain@arm.com, sunnanyong@huawei.com,
 usamaarif642@gmail.com, audra@redhat.com, akpm@linux-foundation.org,
 rostedt@goodmis.org, mathieu.desnoyers@efficios.com, tiwai@suse.de
References: <20250211003028.213461-1-npache@redhat.com>
 <20250211003028.213461-3-npache@redhat.com>
In-Reply-To: <20250211003028.213461-3-npache@redhat.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
On 11/02/2025 00:30, Nico Pache wrote:
> functions in khugepaged.c use a mix of hpage_collapse and khugepaged
> as the function prefix.
>
> rename all of them to khugepaged to keep things consistent and slightly
> shorten the function names.
>
> Signed-off-by: Nico Pache <npache@redhat.com>

Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>

> ---
>  mm/khugepaged.c | 52 ++++++++++++++++++++++++-------------------------
>  1 file changed, 26 insertions(+), 26 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 46faee67378b..4c88d17250f4 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -402,14 +402,14 @@ void __init khugepaged_destroy(void)
>  	kmem_cache_destroy(mm_slot_cache);
>  }
>
> -static inline int hpage_collapse_test_exit(struct mm_struct *mm)
> +static inline int khugepaged_test_exit(struct mm_struct *mm)
>  {
>  	return atomic_read(&mm->mm_users) == 0;
>  }
>
> -static inline int hpage_collapse_test_exit_or_disable(struct mm_struct *mm)
> +static inline int khugepaged_test_exit_or_disable(struct mm_struct *mm)
>  {
> -	return hpage_collapse_test_exit(mm) ||
> +	return khugepaged_test_exit(mm) ||
>  	       test_bit(MMF_DISABLE_THP, &mm->flags);
>  }
>
> @@ -444,7 +444,7 @@ void __khugepaged_enter(struct mm_struct *mm)
>  	int wakeup;
>
>  	/* __khugepaged_exit() must not run from under us */
> -	VM_BUG_ON_MM(hpage_collapse_test_exit(mm), mm);
> +	VM_BUG_ON_MM(khugepaged_test_exit(mm), mm);
>  	if (unlikely(test_and_set_bit(MMF_VM_HUGEPAGE, &mm->flags)))
>  		return;
>
> @@ -503,7 +503,7 @@ void __khugepaged_exit(struct mm_struct *mm)
>  	} else if (mm_slot) {
>  		/*
>  		 * This is required to serialize against
> -		 * hpage_collapse_test_exit() (which is guaranteed to run
> +		 * khugepaged_test_exit() (which is guaranteed to run
>  		 * under mmap sem read mode). Stop here (after we return all
>  		 * pagetables will be destroyed) until khugepaged has finished
>  		 * working on the pagetables under the mmap_lock.
> @@ -606,7 +606,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
>  		folio = page_folio(page);
>  		VM_BUG_ON_FOLIO(!folio_test_anon(folio), folio);
>
> -		/* See hpage_collapse_scan_pmd(). */
> +		/* See khugepaged_scan_pmd(). */
>  		if (folio_likely_mapped_shared(folio)) {
>  			++shared;
>  			if (cc->is_khugepaged &&
> @@ -851,7 +851,7 @@ struct collapse_control khugepaged_collapse_control = {
>  	.is_khugepaged = true,
>  };
>
> -static bool hpage_collapse_scan_abort(int nid, struct collapse_control *cc)
> +static bool khugepaged_scan_abort(int nid, struct collapse_control *cc)
>  {
>  	int i;
>
> @@ -886,7 +886,7 @@ static inline gfp_t alloc_hugepage_khugepaged_gfpmask(void)
>  }
>
>  #ifdef CONFIG_NUMA
> -static int hpage_collapse_find_target_node(struct collapse_control *cc)
> +static int khugepaged_find_target_node(struct collapse_control *cc)
>  {
>  	int nid, target_node = 0, max_value = 0;
>
> @@ -905,7 +905,7 @@ static int hpage_collapse_find_target_node(struct collapse_control *cc)
>  	return target_node;
>  }
>  #else
> -static int hpage_collapse_find_target_node(struct collapse_control *cc)
> +static int khugepaged_find_target_node(struct collapse_control *cc)
>  {
>  	return 0;
>  }
> @@ -925,7 +925,7 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
>  	struct vm_area_struct *vma;
>  	unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
>
> -	if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
> +	if (unlikely(khugepaged_test_exit_or_disable(mm)))
>  		return SCAN_ANY_PROCESS;
>
>  	*vmap = vma = find_vma(mm, address);
> @@ -992,7 +992,7 @@ static int check_pmd_still_valid(struct mm_struct *mm,
>
>  /*
>   * Bring missing pages in from swap, to complete THP collapse.
> - * Only done if hpage_collapse_scan_pmd believes it is worthwhile.
> + * Only done if khugepaged_scan_pmd believes it is worthwhile.
>   *
>   * Called and returns without pte mapped or spinlocks held.
>   * Returns result: if not SCAN_SUCCEED, mmap_lock has been released.
> @@ -1078,7 +1078,7 @@ static int alloc_charge_folio(struct folio **foliop, struct mm_struct *mm,
>  {
>  	gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
>  		     GFP_TRANSHUGE);
> -	int node = hpage_collapse_find_target_node(cc);
> +	int node = khugepaged_find_target_node(cc);
>  	struct folio *folio;
>
>  	folio = __folio_alloc(gfp, HPAGE_PMD_ORDER, node, &cc->alloc_nmask);
> @@ -1264,7 +1264,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>  	return result;
>  }
>
> -static int hpage_collapse_scan_pmd(struct mm_struct *mm,
> +static int khugepaged_scan_pmd(struct mm_struct *mm,
>  				   struct vm_area_struct *vma,
>  				   unsigned long address, bool *mmap_locked,
>  				   struct collapse_control *cc)
> @@ -1380,7 +1380,7 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
>  		 * hit record.
>  		 */
>  		node = folio_nid(folio);
> -		if (hpage_collapse_scan_abort(node, cc)) {
> +		if (khugepaged_scan_abort(node, cc)) {
>  			result = SCAN_SCAN_ABORT;
>  			goto out_unmap;
>  		}
> @@ -1449,7 +1449,7 @@ static void collect_mm_slot(struct khugepaged_mm_slot *mm_slot)
>
>  	lockdep_assert_held(&khugepaged_mm_lock);
>
> -	if (hpage_collapse_test_exit(mm)) {
> +	if (khugepaged_test_exit(mm)) {
>  		/* free mm_slot */
>  		hash_del(&slot->hash);
>  		list_del(&slot->mm_node);
> @@ -1744,7 +1744,7 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
>  		if (find_pmd_or_thp_or_none(mm, addr, &pmd) != SCAN_SUCCEED)
>  			continue;
>
> -		if (hpage_collapse_test_exit(mm))
> +		if (khugepaged_test_exit(mm))
>  			continue;
>  		/*
>  		 * When a vma is registered with uffd-wp, we cannot recycle
> @@ -2266,7 +2266,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
>  	return result;
>  }
>
> -static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
> +static int khugepaged_scan_file(struct mm_struct *mm, unsigned long addr,
>  				    struct file *file, pgoff_t start,
>  				    struct collapse_control *cc)
>  {
> @@ -2311,7 +2311,7 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
>  		}
>
>  		node = folio_nid(folio);
> -		if (hpage_collapse_scan_abort(node, cc)) {
> +		if (khugepaged_scan_abort(node, cc)) {
>  			result = SCAN_SCAN_ABORT;
>  			break;
>  		}
> @@ -2357,7 +2357,7 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
>  	return result;
>  }
>  #else
> -static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
> +static int khugepaged_scan_file(struct mm_struct *mm, unsigned long addr,
>  				    struct file *file, pgoff_t start,
>  				    struct collapse_control *cc)
>  {
> @@ -2389,19 +2389,19 @@ static int khugepaged_collapse_single_pmd(unsigned long addr, struct mm_struct *
>
>  			mmap_read_unlock(mm);
>  			*mmap_locked = false;
> -			result = hpage_collapse_scan_file(mm, addr, file, pgoff,
> +			result = khugepaged_scan_file(mm, addr, file, pgoff,
>  							  cc);
>  			fput(file);
>  			if (result == SCAN_PTE_MAPPED_HUGEPAGE) {
>  				mmap_read_lock(mm);
> -				if (hpage_collapse_test_exit_or_disable(mm))
> +				if (khugepaged_test_exit_or_disable(mm))
>  					goto end;
>  				result = collapse_pte_mapped_thp(mm, addr,
>  								 !cc->is_khugepaged);
>  				mmap_read_unlock(mm);
>  			}
>  	} else {
> -		result = hpage_collapse_scan_pmd(mm, vma, addr,
> +		result = khugepaged_scan_pmd(mm, vma, addr,
>  						 mmap_locked, cc);
>  	}
>  	if (result == SCAN_SUCCEED || result == SCAN_PMD_MAPPED)
> @@ -2449,7 +2449,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
>  		goto breakouterloop_mmap_lock;
>
>  	progress++;
> -	if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
> +	if (unlikely(khugepaged_test_exit_or_disable(mm)))
>  		goto breakouterloop;
>
>  	vma_iter_init(&vmi, mm, khugepaged_scan.address);
> @@ -2457,7 +2457,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
>  		unsigned long hstart, hend;
>
>  		cond_resched();
> -		if (unlikely(hpage_collapse_test_exit_or_disable(mm))) {
> +		if (unlikely(khugepaged_test_exit_or_disable(mm))) {
>  			progress++;
>  			break;
>  		}
> @@ -2479,7 +2479,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
>  			bool mmap_locked = true;
>
>  			cond_resched();
> -			if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
> +			if (unlikely(khugepaged_test_exit_or_disable(mm)))
>  				goto breakouterloop;
>
>  			VM_BUG_ON(khugepaged_scan.address < hstart ||
> @@ -2515,7 +2515,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
>  		 * Release the current mm_slot if this mm is about to die, or
>  		 * if we scanned all vmas of this mm.
>  		 */
> -		if (hpage_collapse_test_exit(mm) || !vma) {
> +		if (khugepaged_test_exit(mm) || !vma) {
>  			/*
>  			 * Make sure that if mm_users is reaching zero while
>  			 * khugepaged runs here, khugepaged_exit will find
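
[Editorial note, not part of the original thread: the quoted patch is a purely mechanical prefix rename confined to mm/khugepaged.c. As a hedged sketch of how such a rename could be reproduced, the following demonstrates a word-boundary-anchored sed substitution on a small sample file; the sample filename and contents are illustrative only, and identifiers such as alloc_hugepage_khugepaged_gfpmask() are deliberately included to show that the \b anchor leaves already-correct names untouched.]

```shell
# Sketch: reproduce the hpage_collapse_* -> khugepaged_* rename mechanically.
# Demonstrated on a throwaway sample rather than a real kernel tree.
cat > /tmp/rename_sample.c <<'EOF'
static inline int hpage_collapse_test_exit(struct mm_struct *mm)
{
	return atomic_read(&mm->mm_users) == 0;
}
gfp_t gfp = alloc_hugepage_khugepaged_gfpmask();
EOF

# \b matches only at a word boundary, so names that merely contain
# "khugepaged" elsewhere in the identifier are not rewritten twice.
sed -i 's/\bhpage_collapse_/khugepaged_/g' /tmp/rename_sample.c

grep khugepaged_test_exit /tmp/rename_sample.c
```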