From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: David Hildenbrand <david@redhat.com>,
Oscar Salvador <osalvador@suse.de>,
Miaohe Lin <linmiaohe@huawei.com>,
Naoya Horiguchi <nao.horiguchi@gmail.com>, <linux-mm@kvack.org>,
<dan.carpenter@linaro.org>,
Jonathan Cameron <Jonathan.Cameron@Huawei.com>,
Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH v3 5/5] mm: memory_hotplug: unify Huge/LRU/non-LRU movable folio isolation
Date: Tue, 27 Aug 2024 19:47:28 +0800
Message-ID: <20240827114728.3212578-6-wangkefeng.wang@huawei.com>
In-Reply-To: <20240827114728.3212578-1-wangkefeng.wang@huawei.com>

Use isolate_folio_to_list() to unify hugetlb/LRU/non-LRU folio
isolation, which cleans up the code a bit and saves a few calls to
compound_head().
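
The helper itself is introduced in patch 4/5 ("mm: migrate: add
isolate_folio_to_list()") and is not reproduced in this mail. Based on
the call sites in the diff below, a rough sketch of what it is expected
to do (hugetlb folios via isolate_hugetlb(), LRU folios via
folio_isolate_lru(), non-LRU movable folios via isolate_movable_page())
might look like this; it is illustrative only, not the exact code from
that patch:

/* Illustrative sketch only; see patch 4/5 for the real implementation. */
bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
{
        bool isolated, lru;

        /* hugetlb folios have their own isolation path. */
        if (folio_test_hugetlb(folio))
                return isolate_hugetlb(folio, list);

        /* LRU folios vs. non-LRU movable folios. */
        lru = !__folio_test_movable(folio);
        if (lru)
                isolated = folio_isolate_lru(folio);
        else
                isolated = isolate_movable_page(&folio->page,
                                                ISOLATE_UNEVICTABLE);
        if (!isolated)
                return false;

        list_add(&folio->lru, list);
        if (lru)
                node_stat_add_folio(folio, NR_ISOLATED_ANON +
                                    folio_is_file_lru(folio));
        return true;
}

With a helper along those lines, do_migrate_range() only needs to take
a reference on non-hugetlb folios and drop it after the isolation
attempt, which is what the diff below does.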
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
mm/memory_hotplug.c | 45 +++++++++++++++++----------------------------
1 file changed, 17 insertions(+), 28 deletions(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 1335fb6ef7fa..5f09866d17cf 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1772,15 +1772,15 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
{
+ struct folio *folio;
unsigned long pfn;
- struct page *page;
LIST_HEAD(source);
static DEFINE_RATELIMIT_STATE(migrate_rs, DEFAULT_RATELIMIT_INTERVAL,
DEFAULT_RATELIMIT_BURST);
for (pfn = start_pfn; pfn < end_pfn; pfn++) {
- struct folio *folio;
- bool isolated;
+ struct page *page;
+ bool hugetlb;
if (!pfn_valid(pfn))
continue;
@@ -1811,34 +1811,22 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
continue;
}
- if (folio_test_hugetlb(folio)) {
- isolate_hugetlb(folio, &source);
- continue;
+ hugetlb = folio_test_hugetlb(folio);
+ if (!hugetlb) {
+ folio = folio_get_nontail_page(page);
+ if (!folio)
+ continue;
}
- if (!get_page_unless_zero(page))
- continue;
- /*
- * We can skip free pages. And we can deal with pages on
- * LRU and non-lru movable pages.
- */
- if (PageLRU(page))
- isolated = isolate_lru_page(page);
- else
- isolated = isolate_movable_page(page, ISOLATE_UNEVICTABLE);
- if (isolated) {
- list_add_tail(&page->lru, &source);
- if (!__PageMovable(page))
- inc_node_page_state(page, NR_ISOLATED_ANON +
- page_is_file_lru(page));
-
- } else {
+ if (!isolate_folio_to_list(folio, &source)) {
if (__ratelimit(&migrate_rs)) {
pr_warn("failed to isolate pfn %lx\n", pfn);
dump_page(page, "isolation failed");
}
}
- put_page(page);
+
+ if (!hugetlb)
+ folio_put(folio);
}
if (!list_empty(&source)) {
nodemask_t nmask = node_states[N_MEMORY];
@@ -1853,7 +1841,7 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
* We have checked that migration range is on a single zone so
* we can use the nid of the first page to all the others.
*/
- mtc.nid = page_to_nid(list_first_entry(&source, struct page, lru));
+ mtc.nid = folio_nid(list_first_entry(&source, struct folio, lru));
/*
* try to allocate from a different node but reuse this node
@@ -1866,11 +1854,12 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
ret = migrate_pages(&source, alloc_migration_target, NULL,
(unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_HOTPLUG, NULL);
if (ret) {
- list_for_each_entry(page, &source, lru) {
+ list_for_each_entry(folio, &source, lru) {
if (__ratelimit(&migrate_rs)) {
pr_warn("migrating pfn %lx failed ret:%d\n",
- page_to_pfn(page), ret);
- dump_page(page, "migration failure");
+ folio_pfn(folio), ret);
+ dump_page(&folio->page,
+ "migration failure");
}
}
putback_movable_pages(&source);
--
2.27.0