From: Jane Chu <jane.chu@oracle.com>
To: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	Miaohe Lin <linmiaohe@huawei.com>
Cc: linux-mm@kvack.org
Subject: Re: [PATCH v2 06/11] mm: Convert hugetlb_page_mapping_lock_write to folio
Date: Mon, 8 Apr 2024 16:09:26 -0700
Message-ID: <4ac239d4-5be6-4d1c-ba08-af5c70ed79b8@oracle.com>
In-Reply-To: <20240408194232.118537-7-willy@infradead.org>

On 4/8/2024 12:42 PM, Matthew Wilcox (Oracle) wrote:

> The page is only used to get the mapping, so the folio will do just
> as well.  Both callers already have a folio available, so this saves
> a call to compound_head().
>
> Acked-by: Miaohe Lin <linmiaohe@huawei.com>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>   include/linux/hugetlb.h | 6 +++---
>   mm/hugetlb.c            | 6 +++---
>   mm/memory-failure.c     | 2 +-
>   mm/migrate.c            | 2 +-
>   4 files changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index 3f3e62880279..bebf4c3a53ef 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -178,7 +178,7 @@ bool hugetlbfs_pagecache_present(struct hstate *h,
>   				 struct vm_area_struct *vma,
>   				 unsigned long address);
>   
> -struct address_space *hugetlb_page_mapping_lock_write(struct page *hpage);
> +struct address_space *hugetlb_folio_mapping_lock_write(struct folio *folio);
>   
>   extern int sysctl_hugetlb_shm_group;
>   extern struct list_head huge_boot_pages[MAX_NUMNODES];
> @@ -297,8 +297,8 @@ static inline unsigned long hugetlb_total_pages(void)
>   	return 0;
>   }
>   
> -static inline struct address_space *hugetlb_page_mapping_lock_write(
> -							struct page *hpage)
> +static inline struct address_space *hugetlb_folio_mapping_lock_write(
> +							struct folio *folio)
>   {
>   	return NULL;
>   }
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 456c81fbf8f5..707c85303e88 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2155,13 +2155,13 @@ static bool prep_compound_gigantic_folio_for_demote(struct folio *folio,
>   /*
>    * Find and lock address space (mapping) in write mode.
>    *
> - * Upon entry, the page is locked which means that page_mapping() is
> + * Upon entry, the folio is locked which means that folio_mapping() is
>    * stable.  Due to locking order, we can only trylock_write.  If we can
>    * not get the lock, simply return NULL to caller.
>    */
> -struct address_space *hugetlb_page_mapping_lock_write(struct page *hpage)
> +struct address_space *hugetlb_folio_mapping_lock_write(struct folio *folio)
>   {
> -	struct address_space *mapping = page_mapping(hpage);
> +	struct address_space *mapping = folio_mapping(folio);
>   
>   	if (!mapping)
>   		return mapping;
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 2e64e132bba1..0a45fb7fb055 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1608,7 +1608,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
>   		 * TTU_RMAP_LOCKED to indicate we have taken the lock
>   		 * at this higher level.
>   		 */
> -		mapping = hugetlb_page_mapping_lock_write(hpage);
> +		mapping = hugetlb_folio_mapping_lock_write(folio);
>   		if (mapping) {
>   			try_to_unmap(folio, ttu|TTU_RMAP_LOCKED);
>   			i_mmap_unlock_write(mapping);
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 285072bca29c..f8da9b89e043 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1425,7 +1425,7 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
>   			 * semaphore in write mode here and set TTU_RMAP_LOCKED
>   			 * to let lower levels know we have taken the lock.
>   			 */
> -			mapping = hugetlb_page_mapping_lock_write(&src->page);
> +			mapping = hugetlb_folio_mapping_lock_write(src);
>   			if (unlikely(!mapping))
>   				goto unlock_put_anon;
>   
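
For context on the "we can only trylock_write" comment above: the rest of the
function body (not shown in this hunk, and unchanged by the patch) is roughly
the following sketch, so callers have to cope with a NULL return when the
i_mmap lock is contended:

	/* illustrative sketch; condensed, details may differ */
	if (i_mmap_trylock_write(mapping))
		return mapping;

	return NULL;

The callers in the two hunks above then pass TTU_RMAP_LOCKED so the rmap walk
knows i_mmap_rwsem is already held, and drop it with i_mmap_unlock_write()
when they are done.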

Looks good.

Reviewed-by: Jane Chu <jane.chu@oracle.com>

-jane



Thread overview: 48+ messages
2024-04-08 19:42 [PATCH v2 00/11] Some cleanups for memory-failure Matthew Wilcox (Oracle)
2024-04-08 19:42 ` [PATCH v2 01/11] mm/memory-failure: Remove fsdax_pgoff argument from __add_to_kill Matthew Wilcox (Oracle)
2024-04-10  9:09   ` Oscar Salvador
2024-04-08 19:42 ` [PATCH v2 02/11] mm/memory-failure: Pass addr to __add_to_kill() Matthew Wilcox (Oracle)
2024-04-08 22:32   ` Jane Chu
2024-04-10  9:11   ` Oscar Salvador
2024-04-08 19:42 ` [PATCH v2 03/11] mm: Return the address from page_mapped_in_vma() Matthew Wilcox (Oracle)
2024-04-08 22:38   ` Jane Chu
2024-04-10  9:38   ` Oscar Salvador
2024-04-11  2:56     ` Miaohe Lin
2024-04-11 17:37     ` Matthew Wilcox
2024-04-08 19:42 ` [PATCH v2 04/11] mm: Make page_mapped_in_vma conditional on CONFIG_MEMORY_FAILURE Matthew Wilcox (Oracle)
2024-04-08 22:45   ` Jane Chu
2024-04-08 22:52     ` Matthew Wilcox
2024-04-09  6:35       ` Jane Chu
2024-04-10  9:39   ` Oscar Salvador
2024-04-08 19:42 ` [PATCH v2 05/11] mm/memory-failure: Convert shake_page() to shake_folio() Matthew Wilcox (Oracle)
2024-04-08 22:53   ` Jane Chu
2024-04-08 19:42 ` [PATCH v2 06/11] mm: Convert hugetlb_page_mapping_lock_write to folio Matthew Wilcox (Oracle)
2024-04-08 23:09   ` Jane Chu [this message]
2024-04-10  9:52   ` Oscar Salvador
2024-04-08 19:42 ` [PATCH v2 07/11] mm/memory-failure: Convert memory_failure() to use a folio Matthew Wilcox (Oracle)
2024-04-09  0:34   ` Jane Chu
2024-04-10 10:21     ` Oscar Salvador
2024-04-10 14:23       ` Matthew Wilcox
2024-04-10 15:30         ` Oscar Salvador
2024-04-10 23:15       ` Jane Chu
2024-04-11  1:27         ` Jane Chu
2024-04-11  1:51           ` Matthew Wilcox
2024-04-11  9:00             ` Miaohe Lin
2024-04-11 11:23               ` Oscar Salvador
2024-04-11 12:17                 ` Matthew Wilcox
2024-04-12 19:48               ` Matthew Wilcox
2024-04-12 22:09                 ` Oscar Salvador
2024-04-15 18:47                 ` Jane Chu
2024-04-16  9:13                 ` Miaohe Lin
2024-04-08 19:42 ` [PATCH v2 08/11] mm/memory-failure: Convert hwpoison_user_mappings to take " Matthew Wilcox (Oracle)
2024-04-09  6:15   ` Jane Chu
2024-04-08 19:42 ` [PATCH v2 09/11] mm/memory-failure: Add some folio conversions to unpoison_memory Matthew Wilcox (Oracle)
2024-04-09  6:17   ` Jane Chu
2024-04-08 19:42 ` [PATCH v2 10/11] mm/memory-failure: Use folio functions throughout collect_procs() Matthew Wilcox (Oracle)
2024-04-09  6:19   ` Jane Chu
2024-04-08 19:42 ` [PATCH v2 11/11] mm/memory-failure: Pass the folio to collect_procs_ksm() Matthew Wilcox (Oracle)
2024-04-09  6:27   ` Jane Chu
2024-04-09 12:11     ` Matthew Wilcox
2024-04-09 15:15       ` Jane Chu
2024-04-09  6:28 ` [PATCH v2 00/11] Some cleanups for memory-failure Jane Chu
2024-04-09 12:12   ` Matthew Wilcox
