* [PATCH v5 0/6] mm: migrate: support poison recover from migrate folio
@ 2024-06-26  8:53 Kefeng Wang
  2024-06-26  8:53 ` [PATCH v5 1/6] mm: move memory_failure_queue() into copy_mc_[user]_highpage() Kefeng Wang
                   ` (6 more replies)
  0 siblings, 7 replies; 9+ messages in thread
From: Kefeng Wang @ 2024-06-26  8:53 UTC (permalink / raw)
  To: akpm, linux-mm
  Cc: Tony Luck, Miaohe Lin, nao.horiguchi, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	Jiaqi Yan, Hugh Dickins, Vishal Moola, Alistair Popple, Jane Chu,
	Oscar Salvador, Lance Yang, Kefeng Wang

Folio migration is widely used in the kernel (memory compaction, memory
hotplug, soft page offlining, NUMA balancing, memory demotion/promotion,
etc.), but once a poisoned source folio is accessed during migration,
the kernel panics.

There is a mechanism in the kernel to recover from uncorrectable memory
errors, ARCH_HAS_COPY_MC (e.g. machine-check-safe memory copy on x86),
which is already used in NVDIMM and core-mm paths (e.g. CoW, khugepaged,
coredump, KSM copy); see the copy_mc_to_{user,kernel} and
copy_mc_{user_}highpage callers.

This series adds that recovery mechanism to the folio copy step of the
widely used folio migration. Please note: because folio migration carries
no guarantee of success, we can choose to make it tolerant of memory
failures. We add folio_mc_copy(), a #MC-safe version of folio_copy();
once a poisoned source folio is accessed, it returns an error and the
folio migration fails, which avoids panics like the one shown below (a
minimal sketch of the core idea follows the trace).

  CPU: 1 PID: 88343 Comm: test_softofflin Kdump: loaded Not tainted 6.6.0
  pc : copy_page+0x10/0xc0
  lr : copy_highpage+0x38/0x50
  ...
  Call trace:
   copy_page+0x10/0xc0
   folio_copy+0x78/0x90
   migrate_folio_extra+0x54/0xa0
   move_to_new_folio+0xd8/0x1f0
   migrate_folio_move+0xb8/0x300
   migrate_pages_batch+0x528/0x788
   migrate_pages_sync+0x8c/0x258
   migrate_pages+0x440/0x528
   soft_offline_in_use_page+0x2ec/0x3c0
   soft_offline_page+0x238/0x310
   soft_offline_page_store+0x6c/0xc0
   dev_attr_store+0x20/0x40
   sysfs_kf_write+0x4c/0x68
   kernfs_fop_write_iter+0x130/0x1c8
   new_sync_write+0xa4/0x138
   vfs_write+0x238/0x2d8
   ksys_write+0x74/0x110
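
To illustrate the core idea, here is a minimal sketch (illustrative
only; the real helper, folio_mc_copy(), is added in patch 2 of this
series):

  /*
   * Copy a folio page by page with the machine-check-safe primitive
   * and bail out with -EHWPOISON instead of panicking when the
   * source is poisoned.
   */
  static int folio_mc_copy_sketch(struct folio *dst, struct folio *src)
  {
  	long i, nr = folio_nr_pages(src);

  	for (i = 0; i < nr; i++) {
  		if (copy_mc_highpage(folio_page(dst, i), folio_page(src, i)))
  			return -EHWPOISON;
  		cond_resched();
  	}
  	return 0;
  }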

v5:
- revert to folio_ref_freeze() under xas_lock_irq(), since Hugh Dickins
  found an issue when the folio freeze was moved out of xas_lock_irq()
  (thanks); the fix patch [1] is the same as my RFC version [2], so turn
  back to the old way; patches 2~4 changed, so drop the RBs from v4
- reverted the v4 changes and rebased on next-20240625

[1] https://lore.kernel.org/linux-mm/07edaae7-ea5d-b6ae-3a10-f611946f9688@google.com/
[2] https://lore.kernel.org/linux-mm/20240129070934.3717659-7-wangkefeng.wang@huawei.com/

v4:
- return -EHWPOISON instead of -EFAULT in folio_mc_copy() and omit the
  ret variable, per Jane and Lance
- return what folio_mc_copy() returns from the callers, per Jane
- move memory_failure_queue() into copy_mc_[user_]highpage() instead of
  calling it at each copy_mc_[user_]highpage() call site, which avoids
  re-using the poisoned page, per Luck, Tony

v3:
- keep only the folio migrate recovery part, since the cleanup part
  has been merged into mm-unstable
- don't introduce a new folio_refs_check_and_freeze(); just move the
  folio check-and-freeze out, and update the changelog of 'mm:
  migrate: split folio_migrate_mapping()'
- reorder patches and rebase on next-20240528
- https://lore.kernel.org/linux-mm/20240528134513.2283548-1-wangkefeng.wang@huawei.com/


Kefeng Wang (6):
  mm: move memory_failure_queue() into copy_mc_[user]_highpage()
  mm: add folio_mc_copy()
  mm: migrate: split folio_migrate_mapping()
  mm: migrate: support poisoned recover from migrate folio
  fs: hugetlbfs: support poisoned recover from hugetlbfs_migrate_folio()
  mm: migrate: remove folio_migrate_copy()

 fs/aio.c                |  3 +-
 fs/hugetlbfs/inode.c    |  2 +-
 include/linux/highmem.h |  6 ++++
 include/linux/migrate.h |  1 -
 include/linux/mm.h      |  1 +
 mm/ksm.c                |  1 -
 mm/memory.c             | 12 ++-----
 mm/migrate.c            | 71 ++++++++++++++++++++++++-----------------
 mm/util.c               | 17 ++++++++++
 9 files changed, 72 insertions(+), 42 deletions(-)

-- 
2.27.0




* [PATCH v5 1/6] mm: move memory_failure_queue() into copy_mc_[user]_highpage()
  2024-06-26  8:53 [PATCH v5 0/6] mm: migrate: support poison recover from migrate folio Kefeng Wang
@ 2024-06-26  8:53 ` Kefeng Wang
  2024-06-26  8:53 ` [PATCH v5 2/6] mm: add folio_mc_copy() Kefeng Wang
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Kefeng Wang @ 2024-06-26  8:53 UTC (permalink / raw)
  To: akpm, linux-mm
  Cc: Tony Luck, Miaohe Lin, nao.horiguchi, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	Jiaqi Yan, Hugh Dickins, Vishal Moola, Alistair Popple, Jane Chu,
	Oscar Salvador, Lance Yang, Kefeng Wang

Callers of copy_mc_[user]_highpage() (e.g. the CoW and KSM page copies)
follow it with a memory_failure_queue() call, which marks the source
page as hardware-poisoned and unmaps it from other tasks. The upcoming
poison recovery for folio migration will do the same thing, so move
memory_failure_queue() into copy_mc_[user]_highpage() instead of adding
it to each user; this also improves the handling of poisoned pages in
khugepaged.
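
For context, when ARCH_HAS_COPY_MC is not enabled the fallback variants
(shown roughly as in mainline include/linux/highmem.h, for illustration;
they are not part of this diff) do a plain copy and always return 0, so
the queueing naturally remains confined to the MC-safe variants:

  static inline int copy_mc_user_highpage(struct page *to, struct page *from,
  					unsigned long vaddr, struct vm_area_struct *vma)
  {
  	copy_user_highpage(to, from, vaddr, vma);
  	return 0;
  }

  static inline int copy_mc_highpage(struct page *to, struct page *from)
  {
  	copy_highpage(to, from);
  	return 0;
  }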

Reviewed-by: Jane Chu <jane.chu@oracle.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/highmem.h |  6 ++++++
 mm/ksm.c                |  1 -
 mm/memory.c             | 12 +++---------
 3 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index fa6891e06316..930a591b9b61 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -352,6 +352,9 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 	kunmap_local(vto);
 	kunmap_local(vfrom);
 
+	if (ret)
+		memory_failure_queue(page_to_pfn(from), 0);
+
 	return ret;
 }
 
@@ -368,6 +371,9 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
 	kunmap_local(vto);
 	kunmap_local(vfrom);
 
+	if (ret)
+		memory_failure_queue(page_to_pfn(from), 0);
+
 	return ret;
 }
 #else
diff --git a/mm/ksm.c b/mm/ksm.c
index b9a46365b830..df6bae3a5a2c 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2998,7 +2998,6 @@ struct folio *ksm_might_need_to_copy(struct folio *folio,
 		if (copy_mc_user_highpage(folio_page(new_folio, 0), page,
 								addr, vma)) {
 			folio_put(new_folio);
-			memory_failure_queue(folio_pfn(folio), 0);
 			return ERR_PTR(-EHWPOISON);
 		}
 		folio_set_dirty(new_folio);
diff --git a/mm/memory.c b/mm/memory.c
index d4f0e3df68bc..0a769f34bbb2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3022,10 +3022,8 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
 	unsigned long addr = vmf->address;
 
 	if (likely(src)) {
-		if (copy_mc_user_highpage(dst, src, addr, vma)) {
-			memory_failure_queue(page_to_pfn(src), 0);
+		if (copy_mc_user_highpage(dst, src, addr, vma))
 			return -EHWPOISON;
-		}
 		return 0;
 	}
 
@@ -6492,10 +6490,8 @@ static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
 
 		cond_resched();
 		if (copy_mc_user_highpage(dst_page, src_page,
-					  addr + i*PAGE_SIZE, vma)) {
-			memory_failure_queue(page_to_pfn(src_page), 0);
+					  addr + i*PAGE_SIZE, vma))
 			return -EHWPOISON;
-		}
 	}
 	return 0;
 }
@@ -6512,10 +6508,8 @@ static int copy_subpage(unsigned long addr, int idx, void *arg)
 	struct page *dst = folio_page(copy_arg->dst, idx);
 	struct page *src = folio_page(copy_arg->src, idx);
 
-	if (copy_mc_user_highpage(dst, src, addr, copy_arg->vma)) {
-		memory_failure_queue(page_to_pfn(src), 0);
+	if (copy_mc_user_highpage(dst, src, addr, copy_arg->vma))
 		return -EHWPOISON;
-	}
 	return 0;
 }
 
-- 
2.27.0




* [PATCH v5 2/6] mm: add folio_mc_copy()
  2024-06-26  8:53 [PATCH v5 0/6] mm: migrate: support poison recover from migrate folio Kefeng Wang
  2024-06-26  8:53 ` [PATCH v5 1/6] mm: move memory_failure_queue() into copy_mc_[user]_highpage() Kefeng Wang
@ 2024-06-26  8:53 ` Kefeng Wang
  2024-06-26  8:53 ` [PATCH v5 3/6] mm: migrate: split folio_migrate_mapping() Kefeng Wang
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Kefeng Wang @ 2024-06-26  8:53 UTC (permalink / raw)
  To: akpm, linux-mm
  Cc: Tony Luck, Miaohe Lin, nao.horiguchi, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	Jiaqi Yan, Hugh Dickins, Vishal Moola, Alistair Popple, Jane Chu,
	Oscar Salvador, Lance Yang, Kefeng Wang

Add a #MC variant of folio_copy() which uses copy_mc_highpage() so that
a #MC raised during the folio copy is handled gracefully; it will be
used for folio migration soon.
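
A hypothetical caller (illustrative only; the real users are wired up
in the later migration patches of this series) handles the helper like
so:

  static int demo_copy_for_migration(struct folio *dst, struct folio *src)
  {
  	int rc = folio_mc_copy(dst, src);

  	/*
  	 * -EHWPOISON: the source folio holds an uncorrectable error,
  	 * fail gracefully rather than touch the poisoned data again.
  	 */
  	if (unlikely(rc))
  		return rc;

  	/* dst now holds a good copy, continue with migration */
  	return 0;
  }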

Reviewed-by: Jane Chu <jane.chu@oracle.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mm.h |  1 +
 mm/util.c          | 17 +++++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index fc07b5c6fa6b..60f42daf3be8 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1301,6 +1301,7 @@ void put_pages_list(struct list_head *pages);
 
 void split_page(struct page *page, unsigned int order);
 void folio_copy(struct folio *dst, struct folio *src);
+int folio_mc_copy(struct folio *dst, struct folio *src);
 
 unsigned long nr_free_buffer_pages(void);
 
diff --git a/mm/util.c b/mm/util.c
index 1796bf46db17..10f215985fe5 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -831,6 +831,23 @@ void folio_copy(struct folio *dst, struct folio *src)
 }
 EXPORT_SYMBOL(folio_copy);
 
+int folio_mc_copy(struct folio *dst, struct folio *src)
+{
+	long nr = folio_nr_pages(src);
+	long i = 0;
+
+	for (;;) {
+		if (copy_mc_highpage(folio_page(dst, i), folio_page(src, i)))
+			return -EHWPOISON;
+		if (++i == nr)
+			break;
+		cond_resched();
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(folio_mc_copy);
+
 int sysctl_overcommit_memory __read_mostly = OVERCOMMIT_GUESS;
 int sysctl_overcommit_ratio __read_mostly = 50;
 unsigned long sysctl_overcommit_kbytes __read_mostly;
-- 
2.27.0




* [PATCH v5 3/6] mm: migrate: split folio_migrate_mapping()
  2024-06-26  8:53 [PATCH v5 0/6] mm: migrate: support poison recover from migrate folio Kefeng Wang
  2024-06-26  8:53 ` [PATCH v5 1/6] mm: move memory_failure_queue() into copy_mc_[user]_highpage() Kefeng Wang
  2024-06-26  8:53 ` [PATCH v5 2/6] mm: add folio_mc_copy() Kefeng Wang
@ 2024-06-26  8:53 ` Kefeng Wang
  2024-06-26  8:53 ` [PATCH v5 4/6] mm: migrate: support poisoned recover from migrate folio Kefeng Wang
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Kefeng Wang @ 2024-06-26  8:53 UTC (permalink / raw)
  To: akpm, linux-mm
  Cc: Tony Luck, Miaohe Lin, nao.horiguchi, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	Jiaqi Yan, Hugh Dickins, Vishal Moola, Alistair Popple, Jane Chu,
	Oscar Salvador, Lance Yang, Kefeng Wang

The folio refcount check is moved out for both the !mapping and mapping
cases, into a new folio_migrate_mapping() wrapper around
__folio_migrate_mapping(); also update the comments from page to folio.

No functional change intended.
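
For illustration, a hypothetical migration helper built on the exported
function is unaffected by this split, since folio_migrate_mapping()
keeps its original contract (the refcount check still happens inside
it):

  static int demo_migrate_folio(struct address_space *mapping,
  		struct folio *dst, struct folio *src)
  {
  	int rc = folio_migrate_mapping(mapping, dst, src, 0);

  	if (rc != MIGRATEPAGE_SUCCESS)
  		return rc;	/* typically -EAGAIN on an unexpected refcount */

  	/*
  	 * No turning back once the mapping is replaced; this is why a
  	 * later patch performs the MC-safe copy *before* this point in
  	 * the common helper.
  	 */
  	folio_copy(dst, src);
  	folio_migrate_flags(dst, src);
  	return MIGRATEPAGE_SUCCESS;
  }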

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/migrate.c | 40 +++++++++++++++++++++++-----------------
 1 file changed, 23 insertions(+), 17 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index c1ac9edf8e52..e97fbaed564d 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -393,29 +393,24 @@ static int folio_expected_refs(struct address_space *mapping,
 }
 
 /*
- * Replace the page in the mapping.
+ * Replace the folio in the mapping.
  *
  * The number of remaining references must be:
- * 1 for anonymous pages without a mapping
- * 2 for pages with a mapping
- * 3 for pages with a mapping and PagePrivate/PagePrivate2 set.
+ * 1 for anonymous folios without a mapping
+ * 2 for folios with a mapping
+ * 3 for folios with a mapping and PagePrivate/PagePrivate2 set.
  */
-int folio_migrate_mapping(struct address_space *mapping,
-		struct folio *newfolio, struct folio *folio, int extra_count)
+static int __folio_migrate_mapping(struct address_space *mapping,
+		struct folio *newfolio, struct folio *folio, int expected_count)
 {
 	XA_STATE(xas, &mapping->i_pages, folio_index(folio));
 	struct zone *oldzone, *newzone;
 	int dirty;
-	int expected_count = folio_expected_refs(mapping, folio) + extra_count;
 	long nr = folio_nr_pages(folio);
 	long entries, i;
 
 	if (!mapping) {
-		/* Anonymous page without mapping */
-		if (folio_ref_count(folio) != expected_count)
-			return -EAGAIN;
-
-		/* No turning back from here */
+		/* Anonymous folio without mapping, no turning back from here */
 		newfolio->index = folio->index;
 		newfolio->mapping = folio->mapping;
 		if (folio_test_swapbacked(folio))
@@ -452,7 +447,7 @@ int folio_migrate_mapping(struct address_space *mapping,
 		entries = 1;
 	}
 
-	/* Move dirty while page refs frozen and newpage not yet exposed */
+	/* Move dirty while folio refs frozen and newfolio not yet exposed */
 	dirty = folio_test_dirty(folio);
 	if (dirty) {
 		folio_clear_dirty(folio);
@@ -466,7 +461,7 @@ int folio_migrate_mapping(struct address_space *mapping,
 	}
 
 	/*
-	 * Drop cache reference from old page by unfreezing
+	 * Drop cache reference from old folio by unfreezing
 	 * to one less reference.
 	 * We know this isn't the last reference.
 	 */
@@ -477,11 +472,11 @@ int folio_migrate_mapping(struct address_space *mapping,
 
 	/*
 	 * If moved to a different zone then also account
-	 * the page for that zone. Other VM counters will be
+	 * the folio for that zone. Other VM counters will be
 	 * taken care of when we establish references to the
-	 * new page and drop references to the old page.
+	 * new folio and drop references to the old folio.
 	 *
-	 * Note that anonymous pages are accounted for
+	 * Note that anonymous folios are accounted for
 	 * via NR_FILE_PAGES and NR_ANON_MAPPED if they
 	 * are mapped to swap space.
 	 */
@@ -521,6 +516,17 @@ int folio_migrate_mapping(struct address_space *mapping,
 
 	return MIGRATEPAGE_SUCCESS;
 }
+
+int folio_migrate_mapping(struct address_space *mapping,
+		struct folio *newfolio, struct folio *folio, int extra_count)
+{
+	int expected_count = folio_expected_refs(mapping, folio) + extra_count;
+
+	if (folio_ref_count(folio) != expected_count)
+		return -EAGAIN;
+
+	return __folio_migrate_mapping(mapping, newfolio, folio, expected_count);
+}
 EXPORT_SYMBOL(folio_migrate_mapping);
 
 /*
-- 
2.27.0




* [PATCH v5 4/6] mm: migrate: support poisoned recover from migrate folio
  2024-06-26  8:53 [PATCH v5 0/6] mm: migrate: support poison recover from migrate folio Kefeng Wang
                   ` (2 preceding siblings ...)
  2024-06-26  8:53 ` [PATCH v5 3/6] mm: migrate: split folio_migrate_mapping() Kefeng Wang
@ 2024-06-26  8:53 ` Kefeng Wang
  2024-06-26  8:53 ` [PATCH v5 5/6] fs: hugetlbfs: support poisoned recover from hugetlbfs_migrate_folio() Kefeng Wang
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Kefeng Wang @ 2024-06-26  8:53 UTC (permalink / raw)
  To: akpm, linux-mm
  Cc: Tony Luck, Miaohe Lin, nao.horiguchi, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	Jiaqi Yan, Hugh Dickins, Vishal Moola, Alistair Popple, Jane Chu,
	Oscar Salvador, Lance Yang, Kefeng Wang

Folio migration is widely used in the kernel (memory compaction, memory
hotplug, soft page offlining, NUMA balancing, memory demotion/promotion,
etc.), but once a poisoned source folio is accessed during migration,
the kernel panics.

There is a mechanism in the kernel to recover from uncorrectable memory
errors, ARCH_HAS_COPY_MC, which is already used in other core-mm paths,
e.g. CoW, khugepaged, coredump and KSM copy; see the
copy_mc_to_{user,kernel} and copy_mc_{user_}highpage callers.

In order to recover from a poisoned folio copy during migration, we
chose to make folio migration tolerant of memory failures and return an
error instead; because folio migration carries no guarantee of success
anyway, this avoids panics like the one shown below.

  CPU: 1 PID: 88343 Comm: test_softofflin Kdump: loaded Not tainted 6.6.0
  pc : copy_page+0x10/0xc0
  lr : copy_highpage+0x38/0x50
  ...
  Call trace:
   copy_page+0x10/0xc0
   folio_copy+0x78/0x90
   migrate_folio_extra+0x54/0xa0
   move_to_new_folio+0xd8/0x1f0
   migrate_folio_move+0xb8/0x300
   migrate_pages_batch+0x528/0x788
   migrate_pages_sync+0x8c/0x258
   migrate_pages+0x440/0x528
   soft_offline_in_use_page+0x2ec/0x3c0
   soft_offline_page+0x238/0x310
   soft_offline_page_store+0x6c/0xc0
   dev_attr_store+0x20/0x40
   sysfs_kf_write+0x4c/0x68
   kernfs_fop_write_iter+0x130/0x1c8
   new_sync_write+0xa4/0x138
   vfs_write+0x238/0x2d8
   ksys_write+0x74/0x110

Note that the folio copy is moved to the beginning of __migrate_folio(),
which simplifies the error handling since there is no turning back once
folio_migrate_mapping() succeeds. The downside is that the folio is
copied even if folio_migrate_mapping() then fails; as an optimization,
check that the source folio has no extra references before doing the
copy.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/migrate.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index e97fbaed564d..f9d700d82ea9 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -668,16 +668,24 @@ static int __migrate_folio(struct address_space *mapping, struct folio *dst,
 			   struct folio *src, void *src_private,
 			   enum migrate_mode mode)
 {
-	int rc;
+	int rc, expected_count = folio_expected_refs(mapping, src);
+
+	/* Check whether src does not have extra refs before we do more work */
+	if (folio_ref_count(src) != expected_count)
+		return -EAGAIN;
+
+	rc = folio_mc_copy(dst, src);
+	if (unlikely(rc))
+		return rc;
 
-	rc = folio_migrate_mapping(mapping, dst, src, 0);
+	rc = __folio_migrate_mapping(mapping, dst, src, expected_count);
 	if (rc != MIGRATEPAGE_SUCCESS)
 		return rc;
 
 	if (src_private)
 		folio_attach_private(dst, folio_detach_private(src));
 
-	folio_migrate_copy(dst, src);
+	folio_migrate_flags(dst, src);
 	return MIGRATEPAGE_SUCCESS;
 }
 
-- 
2.27.0




* [PATCH v5 5/6] fs: hugetlbfs: support poisoned recover from hugetlbfs_migrate_folio()
  2024-06-26  8:53 [PATCH v5 0/6] mm: migrate: support poison recover from migrate folio Kefeng Wang
                   ` (3 preceding siblings ...)
  2024-06-26  8:53 ` [PATCH v5 4/6] mm: migrate: support poisoned recover from migrate folio Kefeng Wang
@ 2024-06-26  8:53 ` Kefeng Wang
  2024-06-26  8:53 ` [PATCH v5 6/6] mm: migrate: remove folio_migrate_copy() Kefeng Wang
  2024-06-26 20:04 ` [PATCH v5 0/6] mm: migrate: support poison recover from migrate folio Andrew Morton
  6 siblings, 0 replies; 9+ messages in thread
From: Kefeng Wang @ 2024-06-26  8:53 UTC (permalink / raw)
  To: akpm, linux-mm
  Cc: Tony Luck, Miaohe Lin, nao.horiguchi, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	Jiaqi Yan, Hugh Dickins, Vishal Moola, Alistair Popple, Jane Chu,
	Oscar Salvador, Lance Yang, Kefeng Wang

Similar to __migrate_folio(), use folio_mc_copy() in the hugetlb folio
migration path to avoid a panic when copying from a poisoned folio.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 fs/hugetlbfs/inode.c |  2 +-
 mm/migrate.c         | 10 ++++++++--
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 9456e1d55540..ecad73a4f713 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -1128,7 +1128,7 @@ static int hugetlbfs_migrate_folio(struct address_space *mapping,
 		hugetlb_set_folio_subpool(src, NULL);
 	}
 
-	folio_migrate_copy(dst, src);
+	folio_migrate_flags(dst, src);
 
 	return MIGRATEPAGE_SUCCESS;
 }
diff --git a/mm/migrate.c b/mm/migrate.c
index f9d700d82ea9..ad78b053815a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -537,10 +537,16 @@ int migrate_huge_page_move_mapping(struct address_space *mapping,
 				   struct folio *dst, struct folio *src)
 {
 	XA_STATE(xas, &mapping->i_pages, folio_index(src));
-	int expected_count;
+	int rc, expected_count = folio_expected_refs(mapping, src);
+
+	if (folio_ref_count(src) != expected_count)
+		return -EAGAIN;
+
+	rc = folio_mc_copy(dst, src);
+	if (unlikely(rc))
+		return rc;
 
 	xas_lock_irq(&xas);
-	expected_count = folio_expected_refs(mapping, src);
 	if (!folio_ref_freeze(src, expected_count)) {
 		xas_unlock_irq(&xas);
 		return -EAGAIN;
-- 
2.27.0




* [PATCH v5 6/6] mm: migrate: remove folio_migrate_copy()
  2024-06-26  8:53 [PATCH v5 0/6] mm: migrate: support poison recover from migrate folio Kefeng Wang
                   ` (4 preceding siblings ...)
  2024-06-26  8:53 ` [PATCH v5 5/6] fs: hugetlbfs: support poisoned recover from hugetlbfs_migrate_folio() Kefeng Wang
@ 2024-06-26  8:53 ` Kefeng Wang
  2024-06-26 20:04 ` [PATCH v5 0/6] mm: migrate: support poison recover from migrate folio Andrew Morton
  6 siblings, 0 replies; 9+ messages in thread
From: Kefeng Wang @ 2024-06-26  8:53 UTC (permalink / raw)
  To: akpm, linux-mm
  Cc: Tony Luck, Miaohe Lin, nao.horiguchi, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	Jiaqi Yan, Hugh Dickins, Vishal Moola, Alistair Popple, Jane Chu,
	Oscar Salvador, Lance Yang, Kefeng Wang

folio_migrate_copy() is just a wrapper around folio_copy() and
folio_migrate_flags(); it is simple, and only aio uses it now, so unfold
it there and remove folio_migrate_copy().

Reviewed-by: Jane Chu <jane.chu@oracle.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 fs/aio.c                | 3 ++-
 include/linux/migrate.h | 1 -
 mm/migrate.c            | 7 -------
 3 files changed, 2 insertions(+), 9 deletions(-)

diff --git a/fs/aio.c b/fs/aio.c
index ed730b5f4c54..6066f64967b3 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -455,7 +455,8 @@ static int aio_migrate_folio(struct address_space *mapping, struct folio *dst,
 	 * events from being lost.
 	 */
 	spin_lock_irqsave(&ctx->completion_lock, flags);
-	folio_migrate_copy(dst, src);
+	folio_copy(dst, src);
+	folio_migrate_flags(dst, src);
 	BUG_ON(ctx->ring_folios[idx] != src);
 	ctx->ring_folios[idx] = dst;
 	spin_unlock_irqrestore(&ctx->completion_lock, flags);
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index af2579ae93f2..644be30b69c8 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -76,7 +76,6 @@ int migrate_huge_page_move_mapping(struct address_space *mapping,
 void migration_entry_wait_on_locked(swp_entry_t entry, spinlock_t *ptl)
 		__releases(ptl);
 void folio_migrate_flags(struct folio *newfolio, struct folio *folio);
-void folio_migrate_copy(struct folio *newfolio, struct folio *folio);
 int folio_migrate_mapping(struct address_space *mapping,
 		struct folio *newfolio, struct folio *folio, int extra_count);
 
diff --git a/mm/migrate.c b/mm/migrate.c
index ad78b053815a..906f6a2e4f38 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -659,13 +659,6 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
 }
 EXPORT_SYMBOL(folio_migrate_flags);
 
-void folio_migrate_copy(struct folio *newfolio, struct folio *folio)
-{
-	folio_copy(newfolio, folio);
-	folio_migrate_flags(newfolio, folio);
-}
-EXPORT_SYMBOL(folio_migrate_copy);
-
 /************************************************************
  *                    Migration functions
  ***********************************************************/
-- 
2.27.0




* Re: [PATCH v5 0/6] mm: migrate: support poison recover from migrate folio
  2024-06-26  8:53 [PATCH v5 0/6] mm: migrate: support poison recover from migrate folio Kefeng Wang
                   ` (5 preceding siblings ...)
  2024-06-26  8:53 ` [PATCH v5 6/6] mm: migrate: remove folio_migrate_copy() Kefeng Wang
@ 2024-06-26 20:04 ` Andrew Morton
  2024-06-27  1:06   ` Kefeng Wang
  6 siblings, 1 reply; 9+ messages in thread
From: Andrew Morton @ 2024-06-26 20:04 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: linux-mm, Tony Luck, Miaohe Lin, nao.horiguchi, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	Jiaqi Yan, Hugh Dickins, Vishal Moola, Alistair Popple, Jane Chu,
	Oscar Salvador, Lance Yang

On Wed, 26 Jun 2024 16:53:22 +0800 Kefeng Wang <wangkefeng.wang@huawei.com> wrote:

> Folio migration is widely used in the kernel (memory compaction, memory
> hotplug, soft page offlining, NUMA balancing, memory demotion/promotion,
> etc.), but once a poisoned source folio is accessed during migration,
> the kernel panics.

Is there some simple fixup which we can prepare for -stable kernels
which will avoid this panic?



* Re: [PATCH v5 0/6] mm: migrate: support poison recover from migrate folio
  2024-06-26 20:04 ` [PATCH v5 0/6] mm: migrate: support poison recover from migrate folio Andrew Morton
@ 2024-06-27  1:06   ` Kefeng Wang
  0 siblings, 0 replies; 9+ messages in thread
From: Kefeng Wang @ 2024-06-27  1:06 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, Tony Luck, Miaohe Lin, nao.horiguchi, Matthew Wilcox,
	David Hildenbrand, Muchun Song, Benjamin LaHaise, jglisse,
	Jiaqi Yan, Hugh Dickins, Vishal Moola, Alistair Popple, Jane Chu,
	Oscar Salvador, Lance Yang



On 2024/6/27 4:04, Andrew Morton wrote:
> On Wed, 26 Jun 2024 16:53:22 +0800 Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
> 
>> Folio migration is widely used in the kernel (memory compaction, memory
>> hotplug, soft page offlining, NUMA balancing, memory demotion/promotion,
>> etc.), but once a poisoned source folio is accessed during migration,
>> the kernel panics.
> 
> Is there some simple fixup which we can prepare for -stable kernels
> which will avoid this panic?

There is no simple fixup, and I also don't think this is suitable for
-stable: it is an enhancement around MC-safe copy, a new feature rather
than a bugfix. See the other poison recovery work (CoW on normal/hugetlb
pages, khugepaged/KSM copy); we didn't backport those either.


