From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: 
Date: Mon, 15 Jul 2024 14:32:17 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 2/3] mm: shmem: rename shmem_is_huge() to shmem_huge_global_enabled()
Content-Language: en-GB
To: Baolin Wang <baolin.wang@linux.alibaba.com>, akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, 21cnbao@gmail.com, ziy@nvidia.com, ioworker0@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <26dfca33f394b5cfa68e4dbda60bf5f54e41c534.1720755678.git.baolin.wang@linux.alibaba.com>
From: Ryan Roberts <ryan.roberts@arm.com>
In-Reply-To: <26dfca33f394b5cfa68e4dbda60bf5f54e41c534.1720755678.git.baolin.wang@linux.alibaba.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 13/07/2024 14:24, Baolin Wang wrote:
> The shmem_is_huge() is now used to check if the top-level huge page is enabled,
> thus rename it to reflect its usage.
> 
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>

Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>

> ---
>  include/linux/shmem_fs.h |  9 +++++----
>  mm/huge_memory.c         |  5 +++--
>  mm/shmem.c               | 15 ++++++++-------
>  3 files changed, 16 insertions(+), 13 deletions(-)
> 
> diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
> index 1d06b1e5408a..405ee8d3589a 100644
> --- a/include/linux/shmem_fs.h
> +++ b/include/linux/shmem_fs.h
> @@ -111,14 +111,15 @@ extern void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end);
>  int shmem_unuse(unsigned int type);
>  
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -extern bool shmem_is_huge(struct inode *inode, pgoff_t index, bool shmem_huge_force,
> -			  struct mm_struct *mm, unsigned long vm_flags);
> +extern bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index, bool shmem_huge_force,
> +				      struct mm_struct *mm, unsigned long vm_flags);
>  unsigned long shmem_allowable_huge_orders(struct inode *inode,
>  				struct vm_area_struct *vma, pgoff_t index,
>  				bool global_huge);
>  #else
> -static __always_inline bool shmem_is_huge(struct inode *inode, pgoff_t index, bool shmem_huge_force,
> -					  struct mm_struct *mm, unsigned long vm_flags)
> +static __always_inline bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
> +						       bool shmem_huge_force, struct mm_struct *mm,
> +						       unsigned long vm_flags)
>  {
>  	return false;
>  }
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index f9696c94e211..cc9bad12be75 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -152,8 +152,9 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
>  	 * own flags.
>  	 */
>  	if (!in_pf && shmem_file(vma->vm_file)) {
> -		bool global_huge = shmem_is_huge(file_inode(vma->vm_file), vma->vm_pgoff,
> -						 !enforce_sysfs, vma->vm_mm, vm_flags);
> +		bool global_huge = shmem_huge_global_enabled(file_inode(vma->vm_file),
> +							     vma->vm_pgoff, !enforce_sysfs,
> +							     vma->vm_mm, vm_flags);
>  
>  		if (!vma_is_anon_shmem(vma))
>  			return global_huge ? orders : 0;
> diff --git a/mm/shmem.c b/mm/shmem.c
> index db7e9808830f..1445dcd39b6f 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -548,9 +548,9 @@ static bool shmem_confirm_swap(struct address_space *mapping,
>  
>  static int shmem_huge __read_mostly = SHMEM_HUGE_NEVER;
>  
> -static bool __shmem_is_huge(struct inode *inode, pgoff_t index,
> -			    bool shmem_huge_force, struct mm_struct *mm,
> -			    unsigned long vm_flags)
> +static bool __shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
> +					bool shmem_huge_force, struct mm_struct *mm,
> +					unsigned long vm_flags)
>  {
>  	loff_t i_size;
>  
> @@ -581,14 +581,15 @@ static bool __shmem_is_huge(struct inode *inode, pgoff_t index,
>  	}
>  }
>  
> -bool shmem_is_huge(struct inode *inode, pgoff_t index,
> +bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
>  		   bool shmem_huge_force, struct mm_struct *mm,
>  		   unsigned long vm_flags)
>  {
>  	if (HPAGE_PMD_ORDER > MAX_PAGECACHE_ORDER)
>  		return false;
>  
> -	return __shmem_is_huge(inode, index, shmem_huge_force, mm, vm_flags);
> +	return __shmem_huge_global_enabled(inode, index, shmem_huge_force,
> +					   mm, vm_flags);
>  }
>  
>  #if defined(CONFIG_SYSFS)
> @@ -1156,7 +1157,7 @@ static int shmem_getattr(struct mnt_idmap *idmap,
>  				 STATX_ATTR_NODUMP);
>  	generic_fillattr(idmap, request_mask, inode, stat);
>  
> -	if (shmem_is_huge(inode, 0, false, NULL, 0))
> +	if (shmem_huge_global_enabled(inode, 0, false, NULL, 0))
>  		stat->blksize = HPAGE_PMD_SIZE;
>  
>  	if (request_mask & STATX_BTIME) {
> @@ -2153,7 +2154,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
>  		return 0;
>  	}
>  
> -	huge = shmem_is_huge(inode, index, false, fault_mm,
> +	huge = shmem_huge_global_enabled(inode, index, false, fault_mm,
>  			   vma ? vma->vm_flags : 0);
>  	/* Find hugepage orders that are allowed for anonymous shmem. */
>  	if (vma && vma_is_anon_shmem(vma))