Message-ID: <9617de11-c98a-4610-9c17-11fb981ef3dc@linux.alibaba.com>
Date: Wed, 23 Apr 2025 14:49:00 +0800
Subject: Re: [PATCH v4 02/12] khugepaged: rename hpage_collapse_* to khugepaged_*
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: Nico Pache <npache@redhat.com>, linux-mm@kvack.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Cc: akpm@linux-foundation.org, corbet@lwn.net, rostedt@goodmis.org,
 mhiramat@kernel.org, mathieu.desnoyers@efficios.com, david@redhat.com,
 baohua@kernel.org, ryan.roberts@arm.com, willy@infradead.org,
 peterx@redhat.com, ziy@nvidia.com, wangkefeng.wang@huawei.com,
 usamaarif642@gmail.com, sunnanyong@huawei.com, vishal.moola@gmail.com,
 thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com,
 kirill.shutemov@linux.intel.com, aarcange@redhat.com, raquini@redhat.com,
 dev.jain@arm.com, anshuman.khandual@arm.com, catalin.marinas@arm.com,
 tiwai@suse.de, will@kernel.org, dave.hansen@linux.intel.com, jack@suse.cz,
 cl@gentwo.org, jglisse@google.com, surenb@google.com, zokeefe@google.com,
 hannes@cmpxchg.org, rientjes@google.com, mhocko@suse.com,
 rdunlap@infradead.org
In-Reply-To: <20250417000238.74567-3-npache@redhat.com>
References: <20250417000238.74567-1-npache@redhat.com>
 <20250417000238.74567-3-npache@redhat.com>

On 2025/4/17 08:02, Nico Pache wrote:
> functions in khugepaged.c use a mix of hpage_collapse and khugepaged
> as the function prefix.
> 
> rename all of them to khugepaged to keep things consistent and slightly
> shorten the function names.

Yes, makes sense to me.

> Signed-off-by: Nico Pache <npache@redhat.com>

Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>

Nit: this renaming cleanup should be put in patch 1.
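For anyone skimming the series: the helpers being renamed are one-line
predicates on the mm's lifetime, so the change is purely mechanical. A
minimal sketch of the post-rename pair (lifted from the first hunk quoted
below, with explanatory comments added; nothing here is new code):

static inline int khugepaged_test_exit(struct mm_struct *mm)
{
	/* mm_users == 0 means the address space is being torn down */
	return atomic_read(&mm->mm_users) == 0;
}

static inline int khugepaged_test_exit_or_disable(struct mm_struct *mm)
{
	/* ... or userspace disabled THP for this mm (MMF_DISABLE_THP) */
	return khugepaged_test_exit(mm) ||
	       test_bit(MMF_DISABLE_THP, &mm->flags);
}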
> ---
>   mm/khugepaged.c | 50 ++++++++++++++++++++++++-------------------------
>   1 file changed, 25 insertions(+), 25 deletions(-)
> 
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index cecadc4239e7..b6281c04f1e5 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -402,14 +402,14 @@ void __init khugepaged_destroy(void)
>  	kmem_cache_destroy(mm_slot_cache);
>  }
>  
> -static inline int hpage_collapse_test_exit(struct mm_struct *mm)
> +static inline int khugepaged_test_exit(struct mm_struct *mm)
>  {
>  	return atomic_read(&mm->mm_users) == 0;
>  }
>  
> -static inline int hpage_collapse_test_exit_or_disable(struct mm_struct *mm)
> +static inline int khugepaged_test_exit_or_disable(struct mm_struct *mm)
>  {
> -	return hpage_collapse_test_exit(mm) ||
> +	return khugepaged_test_exit(mm) ||
>  	       test_bit(MMF_DISABLE_THP, &mm->flags);
>  }
>  
> @@ -444,7 +444,7 @@ void __khugepaged_enter(struct mm_struct *mm)
>  	int wakeup;
>  
>  	/* __khugepaged_exit() must not run from under us */
> -	VM_BUG_ON_MM(hpage_collapse_test_exit(mm), mm);
> +	VM_BUG_ON_MM(khugepaged_test_exit(mm), mm);
>  	if (unlikely(test_and_set_bit(MMF_VM_HUGEPAGE, &mm->flags)))
>  		return;
>  
> @@ -503,7 +503,7 @@ void __khugepaged_exit(struct mm_struct *mm)
>  	} else if (mm_slot) {
>  		/*
>  		 * This is required to serialize against
> -		 * hpage_collapse_test_exit() (which is guaranteed to run
> +		 * khugepaged_test_exit() (which is guaranteed to run
>  		 * under mmap sem read mode). Stop here (after we return all
>  		 * pagetables will be destroyed) until khugepaged has finished
>  		 * working on the pagetables under the mmap_lock.
> @@ -851,7 +851,7 @@ struct collapse_control khugepaged_collapse_control = {
>  	.is_khugepaged = true,
>  };
>  
> -static bool hpage_collapse_scan_abort(int nid, struct collapse_control *cc)
> +static bool khugepaged_scan_abort(int nid, struct collapse_control *cc)
>  {
>  	int i;
>  
> @@ -886,7 +886,7 @@ static inline gfp_t alloc_hugepage_khugepaged_gfpmask(void)
>  }
>  
>  #ifdef CONFIG_NUMA
> -static int hpage_collapse_find_target_node(struct collapse_control *cc)
> +static int khugepaged_find_target_node(struct collapse_control *cc)
>  {
>  	int nid, target_node = 0, max_value = 0;
>  
> @@ -905,7 +905,7 @@ static int hpage_collapse_find_target_node(struct collapse_control *cc)
>  	return target_node;
>  }
>  #else
> -static int hpage_collapse_find_target_node(struct collapse_control *cc)
> +static int khugepaged_find_target_node(struct collapse_control *cc)
>  {
>  	return 0;
>  }
> @@ -925,7 +925,7 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
>  	struct vm_area_struct *vma;
>  	unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
>  
> -	if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
> +	if (unlikely(khugepaged_test_exit_or_disable(mm)))
>  		return SCAN_ANY_PROCESS;
>  
>  	*vmap = vma = find_vma(mm, address);
> @@ -992,7 +992,7 @@ static int check_pmd_still_valid(struct mm_struct *mm,
>  
>  /*
>   * Bring missing pages in from swap, to complete THP collapse.
> - * Only done if hpage_collapse_scan_pmd believes it is worthwhile.
> + * Only done if khugepaged_scan_pmd believes it is worthwhile.
>   *
>   * Called and returns without pte mapped or spinlocks held.
>   * Returns result: if not SCAN_SUCCEED, mmap_lock has been released.
> @@ -1078,7 +1078,7 @@ static int alloc_charge_folio(struct folio **foliop, struct mm_struct *mm,
>  {
>  	gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
>  		     GFP_TRANSHUGE);
> -	int node = hpage_collapse_find_target_node(cc);
> +	int node = khugepaged_find_target_node(cc);
>  	struct folio *folio;
>  
>  	folio = __folio_alloc(gfp, HPAGE_PMD_ORDER, node, &cc->alloc_nmask);
> @@ -1264,7 +1264,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>  	return result;
>  }
>  
> -static int hpage_collapse_scan_pmd(struct mm_struct *mm,
> +static int khugepaged_scan_pmd(struct mm_struct *mm,
>  				   struct vm_area_struct *vma,
>  				   unsigned long address, bool *mmap_locked,
>  				   struct collapse_control *cc)
> @@ -1378,7 +1378,7 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
>  		 * hit record.
>  		 */
>  		node = folio_nid(folio);
> -		if (hpage_collapse_scan_abort(node, cc)) {
> +		if (khugepaged_scan_abort(node, cc)) {
>  			result = SCAN_SCAN_ABORT;
>  			goto out_unmap;
>  		}
> @@ -1447,7 +1447,7 @@ static void collect_mm_slot(struct khugepaged_mm_slot *mm_slot)
>  
>  	lockdep_assert_held(&khugepaged_mm_lock);
>  
> -	if (hpage_collapse_test_exit(mm)) {
> +	if (khugepaged_test_exit(mm)) {
>  		/* free mm_slot */
>  		hash_del(&slot->hash);
>  		list_del(&slot->mm_node);
> @@ -1742,7 +1742,7 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
>  		if (find_pmd_or_thp_or_none(mm, addr, &pmd) != SCAN_SUCCEED)
>  			continue;
>  
> -		if (hpage_collapse_test_exit(mm))
> +		if (khugepaged_test_exit(mm))
>  			continue;
>  		/*
>  		 * When a vma is registered with uffd-wp, we cannot recycle
> @@ -2264,7 +2264,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
>  	return result;
>  }
>  
> -static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
> +static int khugepaged_scan_file(struct mm_struct *mm, unsigned long addr,
>  				    struct file *file, pgoff_t start,
>  				    struct collapse_control *cc)
>  {
> @@ -2309,7 +2309,7 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
>  		}
>  
>  		node = folio_nid(folio);
> -		if (hpage_collapse_scan_abort(node, cc)) {
> +		if (khugepaged_scan_abort(node, cc)) {
>  			result = SCAN_SCAN_ABORT;
>  			break;
>  		}
> @@ -2355,7 +2355,7 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
>  	return result;
>  }
>  #else
> -static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
> +static int khugepaged_scan_file(struct mm_struct *mm, unsigned long addr,
>  				    struct file *file, pgoff_t start,
>  				    struct collapse_control *cc)
>  {
> @@ -2383,19 +2383,19 @@ static int khugepaged_collapse_single_pmd(unsigned long addr,
>  
>  		mmap_read_unlock(mm);
>  		*mmap_locked = false;
> -		result = hpage_collapse_scan_file(mm, addr, file, pgoff,
> +		result = khugepaged_scan_file(mm, addr, file, pgoff,
>  						  cc);
>  		fput(file);
>  		if (result == SCAN_PTE_MAPPED_HUGEPAGE) {
>  			mmap_read_lock(mm);
> -			if (hpage_collapse_test_exit_or_disable(mm))
> +			if (khugepaged_test_exit_or_disable(mm))
>  				goto end;
>  			result = collapse_pte_mapped_thp(mm, addr,
>  							 !cc->is_khugepaged);
>  			mmap_read_unlock(mm);
>  		}
>  	} else {
> -		result = hpage_collapse_scan_pmd(mm, vma, addr,
> +		result = khugepaged_scan_pmd(mm, vma, addr,
>  						 mmap_locked, cc);
>  	}
>  	if (cc->is_khugepaged && result == SCAN_SUCCEED)
> @@ -2443,7 +2443,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
>  		goto breakouterloop_mmap_lock;
>  
>  	progress++;
> -	if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
> +	if (unlikely(khugepaged_test_exit_or_disable(mm)))
>  		goto breakouterloop;
>  
>  	vma_iter_init(&vmi, mm, khugepaged_scan.address);
> @@ -2451,7 +2451,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
>  		unsigned long hstart, hend;
>  
>  		cond_resched();
> -		if (unlikely(hpage_collapse_test_exit_or_disable(mm))) {
> +		if (unlikely(khugepaged_test_exit_or_disable(mm))) {
>  			progress++;
>  			break;
>  		}
> @@ -2473,7 +2473,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
>  			bool mmap_locked = true;
>  
>  			cond_resched();
> -			if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
> +			if (unlikely(khugepaged_test_exit_or_disable(mm)))
>  				goto breakouterloop;
>  
>  			VM_BUG_ON(khugepaged_scan.address < hstart ||
> @@ -2509,7 +2509,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
>  		 * Release the current mm_slot if this mm is about to die, or
>  		 * if we scanned all vmas of this mm.
>  		 */
> -		if (hpage_collapse_test_exit(mm) || !vma) {
> +		if (khugepaged_test_exit(mm) || !vma) {
>  			/*
>  			 * Make sure that if mm_users is reaching zero while
>  			 * khugepaged runs here, khugepaged_exit will find
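One usage note on the pattern visible in the khugepaged_scan_mm_slot()
hunks above: every scan path re-tests the renamed predicate after each
cond_resched() or lock drop before touching the mm again, so the rename
changes call sites but no behaviour. A simplified sketch of that bail-out
loop (the helper name and range stepping here are hypothetical, not the
literal kernel code):

static int scan_one_mm(struct mm_struct *mm, unsigned long end)
{
	unsigned long addr = 0;

	while (addr < end) {
		cond_resched();
		/* the mm may have died or disabled THP while we slept */
		if (unlikely(khugepaged_test_exit_or_disable(mm)))
			return SCAN_ANY_PROCESS;
		/* ... scan/collapse the PMD-sized range at addr ... */
		addr += HPAGE_PMD_SIZE;
	}
	return SCAN_SUCCEED;
}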