From: Muchun Song <muchun.song@linux.dev>
To: Mike Kravetz <mike.kravetz@oracle.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Joao Martins <joao.m.martins@oracle.com>
Cc: Muchun Song <songmuchun@bytedance.com>,
	Oscar Salvador <osalvador@suse.de>,
	David Hildenbrand <david@redhat.com>,
	Miaohe Lin <linmiaohe@huawei.com>,
	David Rientjes <rientjes@google.com>,
	Anshuman Khandual <anshuman.khandual@arm.com>,
	Naoya Horiguchi <naoya.horiguchi@linux.dev>,
	Michal Hocko <mhocko@suse.com>,
	Matthew Wilcox <willy@infradead.org>,
	Xiongchun Duan <duanxiongchun@bytedance.com>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH v2 11/11] hugetlb: batch TLB flushes when restoring vmemmap
Date: Thu, 7 Sep 2023 14:58:23 +0800
Message-ID: <a1008514-b469-96ce-c920-4ef4a3b848a4@linux.dev>
In-Reply-To: <20230905214412.89152-12-mike.kravetz@oracle.com>



On 2023/9/6 05:44, Mike Kravetz wrote:
> Update the hugetlb_vmemmap_restore path to take a 'batch' parameter that

s/batch/flags/g

And the commit message should be reworked since the parameter has been renamed.
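
Maybe something like this (just a suggestion, feel free to reword):

  "Update the hugetlb_vmemmap_restore path to take a 'flags' parameter.
   When VMEMMAP_NO_TLB_FLUSH is passed, use the existing mechanism to
   delay TLB flushing. The routine hugetlb_vmemmap_restore_folios is
   the only caller passing this flag, and it performs a global flush
   after all vmemmap is restored."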

> indicates restoration is happening on a batch of pages.  When set, use
> the existing mechanism (VMEMMAP_NO_TLB_FLUSH) to delay TLB flushing.
> The routine hugetlb_vmemmap_restore_folios is the only user of this new
> batch parameter and it will perform a global flush after all vmemmap is
> restored.
>
> Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
> ---
>   mm/hugetlb_vmemmap.c | 37 +++++++++++++++++++++++--------------
>   1 file changed, 23 insertions(+), 14 deletions(-)
>
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index 8c85e2c38538..11fda9d061eb 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -458,17 +458,19 @@ static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
>    * @end:	end address of the vmemmap virtual address range that we want to
>    *		remap.
>    * @reuse:	reuse address.
> + * @flags:	modify behavior for bulk operations
>    *
>    * Return: %0 on success, negative error code otherwise.
>    */
>   static int vmemmap_remap_alloc(unsigned long start, unsigned long end,
> -			       unsigned long reuse)
> +			       unsigned long reuse, unsigned long flags)
>   {
>   	LIST_HEAD(vmemmap_pages);
>   	struct vmemmap_remap_walk walk = {
>   		.remap_pte	= vmemmap_restore_pte,
>   		.reuse_addr	= reuse,
>   		.vmemmap_pages	= &vmemmap_pages,
> +		.flags		= flags,
>   	};
>   
>   	/* See the comment in the vmemmap_remap_free(). */
> @@ -490,17 +492,7 @@ EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
>   static bool vmemmap_optimize_enabled = IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
>   core_param(hugetlb_free_vmemmap, vmemmap_optimize_enabled, bool, 0);
>   
> -/**
> - * hugetlb_vmemmap_restore - restore previously optimized (by
> - *			     hugetlb_vmemmap_optimize()) vmemmap pages which
> - *			     will be reallocated and remapped.
> - * @h:		struct hstate.
> - * @head:	the head page whose vmemmap pages will be restored.
> - *
> - * Return: %0 if @head's vmemmap pages have been reallocated and remapped,
> - * negative error code otherwise.
> - */
> -int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
> +static int __hugetlb_vmemmap_restore(const struct hstate *h, struct page *head, unsigned long flags)
>   {
>   	int ret;
>   	unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
> @@ -521,7 +513,7 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
>   	 * When a HugeTLB page is freed to the buddy allocator, previously
>   	 * discarded vmemmap pages must be allocated and remapped.
>   	 */
> -	ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end, vmemmap_reuse);
> +	ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end, vmemmap_reuse, flags);
>   	if (!ret) {
>   		ClearHPageVmemmapOptimized(head);
>   		static_branch_dec(&hugetlb_optimize_vmemmap_key);
> @@ -530,6 +522,21 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
>   	return ret;
>   }
>   
> +/**
> + * hugetlb_vmemmap_restore - restore previously optimized (by
> + *			     hugetlb_vmemmap_optimize()) vmemmap pages which
> + *			     will be reallocated and remapped.
> + * @h:		struct hstate.
> + * @head:	the head page whose vmemmap pages will be restored.
> + *
> + * Return: %0 if @head's vmemmap pages have been reallocated and remapped,
> + * negative error code otherwise.
> + */
> +int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
> +{
> +	return __hugetlb_vmemmap_restore(h, head, 0UL);

The UL suffix could be dropped.
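
i.e. (untested, just to illustrate the suggestion):

-	return __hugetlb_vmemmap_restore(h, head, 0UL);
+	return __hugetlb_vmemmap_restore(h, head, 0);

The parameter is unsigned long, so a plain 0 converts implicitly and
the suffix adds nothing.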

Thanks.

> +}
> +
>   /*
>    * This function will attempt to restore vmemmap for a list of folios.  There
>    * is no guarantee that restoration will be successful for all or any folios.
> @@ -540,7 +547,9 @@ void hugetlb_vmemmap_restore_folios(const struct hstate *h, struct list_head *fo
>   	struct folio *folio;
>   
>   	list_for_each_entry(folio, folio_list, lru)
> -		(void)hugetlb_vmemmap_restore(h, &folio->page);
> +		(void)__hugetlb_vmemmap_restore(h, &folio->page, VMEMMAP_NO_TLB_FLUSH);
> +
> +	flush_tlb_all();
>   }
>   
>   /* Return true iff a HugeTLB whose vmemmap should and can be optimized. */
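
BTW, the caller side then boils down to collecting folios on a list and
making one call. Roughly (a sketch of the intended usage, not the exact
caller code):

	LIST_HEAD(folio_list);

	/* ... link folios to be freed onto folio_list via folio->lru ... */

	/*
	 * Restores vmemmap for each folio with VMEMMAP_NO_TLB_FLUSH set,
	 * then issues a single flush_tlb_all() for the whole batch.
	 */
	hugetlb_vmemmap_restore_folios(h, &folio_list);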



Thread overview: 44+ messages
2023-09-05 21:43 [PATCH v2 00/11] Batch hugetlb vmemmap modification operations Mike Kravetz
2023-09-05 21:44 ` [PATCH v2 01/11] hugetlb: set hugetlb page flag before optimizing vmemmap Mike Kravetz
2023-09-06  0:48   ` Matthew Wilcox
2023-09-06  1:05     ` Mike Kravetz
2023-10-13 12:58   ` Naoya Horiguchi
2023-10-13 21:43     ` Mike Kravetz
2023-10-16 22:55       ` Andrew Morton
2023-10-17  3:21     ` Mike Kravetz
2023-10-18  1:58       ` Naoya Horiguchi
2023-10-18  3:43         ` Mike Kravetz
2023-09-05 21:44 ` [PATCH v2 02/11] hugetlb: Use a folio in free_hpage_workfn() Mike Kravetz
2023-09-05 21:44 ` [PATCH v2 03/11] hugetlb: Remove a few calls to page_folio() Mike Kravetz
2023-09-05 21:44 ` [PATCH v2 04/11] hugetlb: Convert remove_pool_huge_page() to remove_pool_hugetlb_folio() Mike Kravetz
2023-09-05 21:44 ` [PATCH v2 05/11] hugetlb: restructure pool allocations Mike Kravetz
2023-09-05 21:44 ` [PATCH v2 06/11] hugetlb: perform vmemmap optimization on a list of pages Mike Kravetz
2023-09-06  7:30   ` Muchun Song
2023-09-05 21:44 ` [PATCH v2 07/11] hugetlb: perform vmemmap restoration " Mike Kravetz
2023-09-06  7:33   ` Muchun Song
2023-09-06  8:07     ` Muchun Song
2023-09-06 21:12       ` Mike Kravetz
2023-09-07  3:33         ` Muchun Song
2023-09-07 18:54           ` Mike Kravetz
2023-09-08 20:53             ` Mike Kravetz
2023-09-11  3:10               ` Muchun Song
2023-09-06 20:53     ` Mike Kravetz
2023-09-05 21:44 ` [PATCH v2 08/11] hugetlb: batch freeing of vmemmap pages Mike Kravetz
2023-09-06  7:38   ` Muchun Song
2023-09-06 21:38     ` Mike Kravetz
2023-09-07  6:19       ` Muchun Song
2023-09-07 18:47         ` Mike Kravetz
2023-09-05 21:44 ` [PATCH v2 09/11] hugetlb: batch PMD split for bulk vmemmap dedup Mike Kravetz
2023-09-06  8:24   ` Muchun Song
2023-09-06  9:11     ` [External] " Muchun Song
2023-09-06  9:26       ` Joao Martins
2023-09-06  9:32         ` [External] " Muchun Song
2023-09-06  9:44           ` Joao Martins
2023-09-06 11:34             ` Muchun Song
2023-09-06  9:13     ` Joao Martins
2023-09-05 21:44 ` [PATCH v2 10/11] hugetlb: batch TLB flushes when freeing vmemmap Mike Kravetz
2023-09-07  6:55   ` Muchun Song
2023-09-07 18:57     ` Mike Kravetz
2023-09-05 21:44 ` [PATCH v2 11/11] hugetlb: batch TLB flushes when restoring vmemmap Mike Kravetz
2023-09-07  6:58   ` Muchun Song [this message]
2023-09-07 18:58     ` Mike Kravetz
