linux-mm.kvack.org archive mirror
From: Miaohe Lin <linmiaohe@huawei.com>
To: Kefeng Wang <wangkefeng.wang@huawei.com>,
	Andrew Morton <akpm@linux-foundation.org>
Cc: David Hildenbrand <david@redhat.com>,
	Oscar Salvador <osalvador@suse.de>,
	Naoya Horiguchi <nao.horiguchi@gmail.com>, <linux-mm@kvack.org>
Subject: Re: [PATCH v2 4/5] mm: migrate: add isolate_folio_to_list()
Date: Tue, 20 Aug 2024 17:32:51 +0800	[thread overview]
Message-ID: <6df49109-d39b-9852-b5b2-a5b88c9329e3@huawei.com> (raw)
In-Reply-To: <20240817084941.2375713-5-wangkefeng.wang@huawei.com>

On 2024/8/17 16:49, Kefeng Wang wrote:
> Add an isolate_folio_to_list() helper to try to isolate HugeTLB,
> non-LRU movable, and LRU folios to a list; it will soon be reused
> by do_migrate_range() from memory hotplug. Also drop
> mf_isolate_folio(), since the new helper can be used directly in
> soft_offline_in_use_page().
> 
> Acked-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>

Thanks for your patch.

> ---
>  include/linux/migrate.h |  3 +++
>  mm/memory-failure.c     | 46 ++++++++++-------------------------------
>  mm/migrate.c            | 27 ++++++++++++++++++++++++
>  3 files changed, 41 insertions(+), 35 deletions(-)
> 
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index 644be30b69c8..002e49b2ebd9 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -70,6 +70,7 @@ int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
>  		  unsigned int *ret_succeeded);
>  struct folio *alloc_migration_target(struct folio *src, unsigned long private);
>  bool isolate_movable_page(struct page *page, isolate_mode_t mode);
> +bool isolate_folio_to_list(struct folio *folio, struct list_head *list);
>  
>  int migrate_huge_page_move_mapping(struct address_space *mapping,
>  		struct folio *dst, struct folio *src);
> @@ -91,6 +92,8 @@ static inline struct folio *alloc_migration_target(struct folio *src,
>  	{ return NULL; }
>  static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>  	{ return false; }
> +static inline bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
> +	{ return false; }
>  
>  static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
>  				  struct folio *dst, struct folio *src)
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 93848330de1f..d8298017bd99 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -2659,40 +2659,6 @@ EXPORT_SYMBOL(unpoison_memory);
>  #undef pr_fmt
>  #define pr_fmt(fmt) "Soft offline: " fmt
>  
> -static bool mf_isolate_folio(struct folio *folio, struct list_head *pagelist)
> -{
> -	bool isolated = false;
> -
> -	if (folio_test_hugetlb(folio)) {
> -		isolated = isolate_hugetlb(folio, pagelist);
> -	} else {
> -		bool lru = !__folio_test_movable(folio);
> -
> -		if (lru)
> -			isolated = folio_isolate_lru(folio);
> -		else
> -			isolated = isolate_movable_page(&folio->page,
> -							ISOLATE_UNEVICTABLE);
> -
> -		if (isolated) {
> -			list_add(&folio->lru, pagelist);
> -			if (lru)
> -				node_stat_add_folio(folio, NR_ISOLATED_ANON +
> -						    folio_is_file_lru(folio));
> -		}
> -	}
> -
> -	/*
> -	 * If we succeed to isolate the folio, we grabbed another refcount on
> -	 * the folio, so we can safely drop the one we got from get_any_page().
> -	 * If we failed to isolate the folio, it means that we cannot go further
> -	 * and we will return an error, so drop the reference we got from
> -	 * get_any_page() as well.
> -	 */
> -	folio_put(folio);
> -	return isolated;
> -}
> -
>  /*
>   * soft_offline_in_use_page handles hugetlb-pages and non-hugetlb pages.
>   * If the page is a non-dirty unmapped page-cache page, it simply invalidates.
> @@ -2744,7 +2710,7 @@ static int soft_offline_in_use_page(struct page *page)
>  		return 0;
>  	}
>  
> -	if (mf_isolate_folio(folio, &pagelist)) {
> +	if (isolate_folio_to_list(folio, &pagelist)) {
>  		ret = migrate_pages(&pagelist, alloc_migration_target, NULL,
>  			(unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_FAILURE, NULL);
>  		if (!ret) {
> @@ -2766,6 +2732,16 @@ static int soft_offline_in_use_page(struct page *page)
>  			pfn, msg_page[huge], page_count(page), &page->flags);
>  		ret = -EBUSY;
>  	}
> +
> +	/*
> +	 * If we succeed to isolate the folio, we grabbed another refcount on
> +	 * the folio, so we can safely drop the one we got from get_any_page().
> +	 * If we failed to isolate the folio, it means that we cannot go further
> +	 * and we will return an error, so drop the reference we got from
> +	 * get_any_page() as well.
> +	 */
> +	folio_put(folio);

Why is folio_put() deferred here? With this change, the folio holds two extra
refcounts when migrate_pages() is called above: one from get_any_page() and
another from folio_isolate_lru(). As a result, migrate_pages() can never
succeed, and many of my test cases fail because of this change.

Thanks.


Thread overview: 33+ messages
2024-08-17  8:49 [PATCH resend v2 0/5] mm: memory_hotplug: improve do_migrate_range() Kefeng Wang
2024-08-17  8:49 ` [PATCH v2 1/5] mm: memory_hotplug: remove head variable in do_migrate_range() Kefeng Wang
2024-08-19  9:28   ` Jonathan Cameron
2024-08-19 10:41     ` Kefeng Wang
2024-08-21  7:33     ` Miaohe Lin
2024-08-26 14:42   ` David Hildenbrand
2024-08-17  8:49 ` [PATCH v2 2/5] mm: memory-failure: add unmap_posioned_folio() Kefeng Wang
2024-08-21  7:40   ` Miaohe Lin
2024-08-21  8:54     ` Kefeng Wang
2024-08-17  8:49 ` [PATCH v2 3/5] mm: memory_hotplug: check hwpoisoned page firstly in do_migrate_range() Kefeng Wang
2024-08-22  6:52   ` Miaohe Lin
2024-08-22 11:35     ` Kefeng Wang
2024-08-26 14:46   ` David Hildenbrand
2024-08-27  1:13     ` Kefeng Wang
2024-08-27  2:12       ` Miaohe Lin
2024-08-27 15:11         ` David Hildenbrand
2024-08-17  8:49 ` [PATCH v2 4/5] mm: migrate: add isolate_folio_to_list() Kefeng Wang
2024-08-20  9:32   ` Miaohe Lin [this message]
2024-08-20  9:46     ` Kefeng Wang
2024-08-21  2:00       ` Miaohe Lin
2024-08-21  2:14         ` Kefeng Wang
2024-08-22  6:56           ` Miaohe Lin
2024-08-26 14:50   ` David Hildenbrand
2024-08-27  1:19     ` Kefeng Wang
2024-08-17  8:49 ` [PATCH v2 5/5] mm: memory_hotplug: unify Huge/LRU/non-LRU movable folio isolation Kefeng Wang
2024-08-22  7:20   ` Miaohe Lin
2024-08-22 12:08     ` Kefeng Wang
2024-08-26 14:55   ` David Hildenbrand
2024-08-27  1:26     ` Kefeng Wang
2024-08-27 15:10       ` David Hildenbrand
2024-08-27 15:35         ` Kefeng Wang
2024-08-27 15:38           ` David Hildenbrand
  -- strict thread matches above, loose matches on Subject: below --
2024-08-16  9:04 [PATCH v2 0/5] mm: memory_hotplug: improve do_migrate_range() Kefeng Wang
2024-08-16  9:04 ` [PATCH v2 4/5] mm: migrate: add isolate_folio_to_list() Kefeng Wang
