Subject: Re: [PATCH 3/8] hugetlb: rename remove_huge_page to hugetlb_delete_from_page_cache
From: Miaohe Lin <linmiaohe@huawei.com>
To: Mike Kravetz <mike.kravetz@oracle.com>
CC: Muchun Song, David Hildenbrand, Michal Hocko, Peter Xu,
    Naoya Horiguchi, Aneesh Kumar K. V, Andrea Arcangeli,
    Kirill A. Shutemov, Davidlohr Bueso, Prakash Sangappa,
    James Houghton, Mina Almasry, Pasha Tatashin, Axel Rasmussen,
    Ray Fucillo, Andrew Morton, linux-mm@kvack.org
Date: Sat, 27 Aug 2022 11:08:01 +0800
In-Reply-To: <20220824175757.20590-4-mike.kravetz@oracle.com>
References: <20220824175757.20590-1-mike.kravetz@oracle.com>
 <20220824175757.20590-4-mike.kravetz@oracle.com>
Shutemov" , Davidlohr Bueso , Prakash Sangappa , James Houghton , Mina Almasry , Pasha Tatashin , Axel Rasmussen , Ray Fucillo , Andrew Morton , , References: <20220824175757.20590-1-mike.kravetz@oracle.com> <20220824175757.20590-4-mike.kravetz@oracle.com> From: Miaohe Lin Message-ID: Date: Sat, 27 Aug 2022 11:08:01 +0800 User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101 Thunderbird/78.6.0 MIME-Version: 1.0 In-Reply-To: <20220824175757.20590-4-mike.kravetz@oracle.com> Content-Type: text/plain; charset="utf-8" Content-Language: en-US Content-Transfer-Encoding: 7bit X-Originating-IP: [10.174.177.76] X-ClientProxiedBy: dggems703-chm.china.huawei.com (10.3.19.180) To canpemm500002.china.huawei.com (7.192.104.244) X-CFilter-Loop: Reflected ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1661569687; a=rsa-sha256; cv=none; b=rKxjD9BPi33VzFoRhiQ9najEbvfamf0xfDgk+2T0E/9Y+z8yXriGW6B/XeamzPjax5jdsZ R3hz/gG4hKtCgRoZH6AMvMepywapcp6g+hCq2mgXeEBhl9PHwAHrqTw5psyZ734ZZY/XUk E8tzRv7gw0bgYFuNA2Kw0e7LL0GS9Zk= ARC-Authentication-Results: i=1; imf27.hostedemail.com; dkim=none; dmarc=pass (policy=quarantine) header.from=huawei.com; spf=pass (imf27.hostedemail.com: domain of linmiaohe@huawei.com designates 45.249.212.188 as permitted sender) smtp.mailfrom=linmiaohe@huawei.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1661569687; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=eTeVexeblaZGKfJj6Uo7bksY8UqUBCJ3SsFDgyuLUyM=; b=ckFei7ioA0v0FOYBts7ORlM0+kVnQU2ewwtJC0N9fNUXcCD4I2uS1PLC1jQBr1ITrSHv9/ HlwdW9mVdNPV+0DxHO4fDEKepb+/1mpwCxM5bI23ntPtrc/nJeMahp7K52aGr1ETXd1E0J vQS7JwysXvcm6ox/m8Z54q2Fu6rKsFA= X-Rspamd-Server: rspam11 X-Rspam-User: X-Rspamd-Queue-Id: 86D7240027 Authentication-Results: imf27.hostedemail.com; dkim=none; dmarc=pass (policy=quarantine) header.from=huawei.com; spf=pass (imf27.hostedemail.com: domain of linmiaohe@huawei.com designates 45.249.212.188 as permitted sender) smtp.mailfrom=linmiaohe@huawei.com X-Stat-Signature: fdx9yomu6tpai5jxh5x9kih9dm3e8wji X-HE-Tag: 1661569686-395372 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On 2022/8/25 1:57, Mike Kravetz wrote: > remove_huge_page removes a hugetlb page from the page cache. Change > to hugetlb_delete_from_page_cache as it is a more descriptive name. > huge_add_to_page_cache is global in scope, but only deals with hugetlb > pages. For consistency and clarity, rename to hugetlb_add_to_page_cache. > > Signed-off-by: Mike Kravetz LGTM with one nit below. Thanks. 
> ---
>  fs/hugetlbfs/inode.c    | 21 ++++++++++-----------
>  include/linux/hugetlb.h |  2 +-
>  mm/hugetlb.c            |  8 ++++----
>  3 files changed, 15 insertions(+), 16 deletions(-)
>
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index dfb735a91bbb..d98c6edbd1a4 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -364,7 +364,7 @@ static int hugetlbfs_write_end(struct file *file, struct address_space *mapping,
>  	return -EINVAL;
>  }
>
> -static void remove_huge_page(struct page *page)
> +static void hugetlb_delete_from_page_cache(struct page *page)
>  {
>  	ClearPageDirty(page);
>  	ClearPageUptodate(page);
> @@ -478,15 +478,14 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
>  		folio_lock(folio);
>  		/*
>  		 * We must free the huge page and remove from page
> -		 * cache (remove_huge_page) BEFORE removing the
> -		 * region/reserve map (hugetlb_unreserve_pages). In
> -		 * rare out of memory conditions, removal of the
> -		 * region/reserve map could fail. Correspondingly,
> -		 * the subpool and global reserve usage count can need
> -		 * to be adjusted.
> +		 * cache BEFORE removing the * region/reserve map

s/the * region/the region/, i.e. remove the extra "*".
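To spell out the ordering that comment enforces, a minimal sketch
(truncate_one_hugetlb_folio is a hypothetical name, not in the patch;
locking and the batched folio lookup are simplified away):

	/*
	 * Sketch only, not the patch itself: the folio must be gone
	 * from the page cache BEFORE hugetlb_unreserve_pages() touches
	 * the region/reserve map, because the map update can fail
	 * under memory pressure and the only recovery is a counter
	 * fixup -- the folio is never re-added.
	 */
	static void truncate_one_hugetlb_folio(struct inode *inode,
					       struct folio *folio,
					       pgoff_t index)
	{
		folio_lock(folio);
		hugetlb_delete_from_page_cache(&folio->page);	/* step 1 */
		folio_unlock(folio);

		/* step 2: region/reserve map last, since this can fail */
		if (unlikely(hugetlb_unreserve_pages(inode, index,
						     index + 1, 1)))
			hugetlb_fix_reserve_counts(inode);
	}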
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>

Thanks,
Miaohe Lin

> +		 * (hugetlb_unreserve_pages). In rare out of memory
> +		 * conditions, removal of the region/reserve map could
> +		 * fail. Correspondingly, the subpool and global
> +		 * reserve usage count can need to be adjusted.
>  		 */
>  		VM_BUG_ON(HPageRestoreReserve(&folio->page));
> -		remove_huge_page(&folio->page);
> +		hugetlb_delete_from_page_cache(&folio->page);
>  		freed++;
>  		if (!truncate_op) {
>  			if (unlikely(hugetlb_unreserve_pages(inode,
> @@ -723,7 +722,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
>  		}
>  		clear_huge_page(page, addr, pages_per_huge_page(h));
>  		__SetPageUptodate(page);
> -		error = huge_add_to_page_cache(page, mapping, index);
> +		error = hugetlb_add_to_page_cache(page, mapping, index);
>  		if (unlikely(error)) {
>  			restore_reserve_on_error(h, &pseudo_vma, addr, page);
>  			put_page(page);
> @@ -735,7 +734,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
>
>  		SetHPageMigratable(page);
>  		/*
> -		 * unlock_page because locked by huge_add_to_page_cache()
> +		 * unlock_page because locked by hugetlb_add_to_page_cache()
>  		 * put_page() due to reference from alloc_huge_page()
>  		 */
>  		unlock_page(page);
> @@ -980,7 +979,7 @@ static int hugetlbfs_error_remove_page(struct address_space *mapping,
>  	struct inode *inode = mapping->host;
>  	pgoff_t index = page->index;
>
> -	remove_huge_page(page);
> +	hugetlb_delete_from_page_cache(page);
>  	if (unlikely(hugetlb_unreserve_pages(inode, index, index + 1, 1)))
>  		hugetlb_fix_reserve_counts(inode);
>
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index 3ec981a0d8b3..acace1a25226 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -665,7 +665,7 @@ struct page *alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
>  				nodemask_t *nmask, gfp_t gfp_mask);
>  struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
>  				unsigned long address);
> -int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
> +int hugetlb_add_to_page_cache(struct page *page, struct address_space *mapping,
>  			pgoff_t idx);
>  void restore_reserve_on_error(struct hstate *h, struct vm_area_struct *vma,
>  				unsigned long address, struct page *page);
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 95c6f9a5bbf0..11c02513588c 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5445,7 +5445,7 @@ static bool hugetlbfs_pagecache_present(struct hstate *h,
>  	return page != NULL;
>  }
>
> -int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
> +int hugetlb_add_to_page_cache(struct page *page, struct address_space *mapping,
>  			pgoff_t idx)
>  {
>  	struct folio *folio = page_folio(page);
> @@ -5586,7 +5586,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
>  		new_page = true;
>
>  		if (vma->vm_flags & VM_MAYSHARE) {
> -			int err = huge_add_to_page_cache(page, mapping, idx);
> +			int err = hugetlb_add_to_page_cache(page, mapping, idx);
>  			if (err) {
>  				restore_reserve_on_error(h, vma, haddr, page);
>  				put_page(page);
> @@ -5993,11 +5993,11 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
>
>  		/*
>  		 * Serialization between remove_inode_hugepages() and
> -		 * huge_add_to_page_cache() below happens through the
> +		 * hugetlb_add_to_page_cache() below happens through the
>  		 * hugetlb_fault_mutex_table that here must be hold by
>  		 * the caller.
>  		 */
> -		ret = huge_add_to_page_cache(page, mapping, idx);
> +		ret = hugetlb_add_to_page_cache(page, mapping, idx);
>  		if (ret)
>  			goto out_release_nounlock;
>  		page_in_pagecache = true;
>
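One last illustration while I'm here: the serialization comment above
means hugetlb_add_to_page_cache() is only safe against a concurrent
remove_inode_hugepages() because the caller already holds the hashed
fault mutex. Roughly, the caller-side contract looks like this
(add_hugetlb_page_locked is a hypothetical wrapper, simplified from
the real call sites):

	static int add_hugetlb_page_locked(struct page *page,
					   struct address_space *mapping,
					   pgoff_t idx)
	{
		u32 hash = hugetlb_fault_mutex_hash(mapping, idx);
		int ret;

		/* Same mutex remove_inode_hugepages() takes for idx. */
		mutex_lock(&hugetlb_fault_mutex_table[hash]);
		ret = hugetlb_add_to_page_cache(page, mapping, idx);
		mutex_unlock(&hugetlb_fault_mutex_table[hash]);
		return ret;
	}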