* [PATCH 0/6] Fix fault handler's handling of poisoned tail pages
@ 2023-11-08 18:28 Matthew Wilcox (Oracle)
2023-11-08 18:28 ` [PATCH 1/6] mm: Make mapping_evict_folio() the preferred way to evict clean folios Matthew Wilcox (Oracle)
` (6 more replies)
0 siblings, 7 replies; 11+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-11-08 18:28 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), Naoya Horiguchi, linux-mm
Since introducing the ability to have large folios in the page cache,
it's been possible to have a hwpoisoned tail page returned from the
fault handler. We handle this situation poorly, failing to remove the
affected page from use.
This isn't a minimal patch to fix it, it's a full conversion of all
the code surrounding it. I'll take care of backports to 6.5/6.1.
Matthew Wilcox (Oracle) (6):
mm: Make mapping_evict_folio() the preferred way to evict clean folios
mm: Convert __do_fault() to use a folio
mm: Use mapping_evict_folio() in truncate_error_page()
mm: Convert soft_offline_in_use_page() to use a folio
mm: Convert isolate_page() to mf_isolate_folio()
mm: Remove invalidate_inode_page()
mm/internal.h | 2 +-
mm/memory-failure.c | 54 ++++++++++++++++++++++-----------------------
mm/memory.c | 20 ++++++++---------
mm/truncate.c | 42 ++++++++++++++---------------------
4 files changed, 55 insertions(+), 63 deletions(-)
--
2.42.0
* [PATCH 1/6] mm: Make mapping_evict_folio() the preferred way to evict clean folios
2023-11-08 18:28 [PATCH 0/6] Fix fault handler's handling of poisoned tail pages Matthew Wilcox (Oracle)
@ 2023-11-08 18:28 ` Matthew Wilcox (Oracle)
2023-11-08 18:28 ` [PATCH 2/6] mm: Convert __do_fault() to use a folio Matthew Wilcox (Oracle)
` (5 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-11-08 18:28 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), Naoya Horiguchi, linux-mm
invalidate_inode_page() does very little beyond calling
mapping_evict_folio(). Move the check for mapping being NULL into
mapping_evict_folio() and make it available to the rest of the MM
for use in the next few patches.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/internal.h | 1 +
mm/truncate.c | 33 ++++++++++++++++-----------------
2 files changed, 17 insertions(+), 17 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h
index b61034bd50f5..687d89d317d0 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -138,6 +138,7 @@ void filemap_free_folio(struct address_space *mapping, struct folio *folio);
int truncate_inode_folio(struct address_space *mapping, struct folio *folio);
bool truncate_inode_partial_folio(struct folio *folio, loff_t start,
loff_t end);
+long mapping_evict_folio(struct address_space *mapping, struct folio *folio);
long invalidate_inode_page(struct page *page);
unsigned long mapping_try_invalidate(struct address_space *mapping,
pgoff_t start, pgoff_t end, unsigned long *nr_failed);
diff --git a/mm/truncate.c b/mm/truncate.c
index 8e3aa9e8618e..1d516e51e29d 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -266,9 +266,22 @@ int generic_error_remove_page(struct address_space *mapping, struct page *page)
}
EXPORT_SYMBOL(generic_error_remove_page);
-static long mapping_evict_folio(struct address_space *mapping,
- struct folio *folio)
+/**
+ * mapping_evict_folio() - Remove an unused folio from the page-cache.
+ * @mapping: The mapping this folio belongs to.
+ * @folio: The folio to remove.
+ *
+ * Safely remove one folio from the page cache.
+ * It only drops clean, unused folios.
+ *
+ * Context: Folio must be locked.
+ * Return: The number of pages successfully removed.
+ */
+long mapping_evict_folio(struct address_space *mapping, struct folio *folio)
{
+ /* The page may have been truncated before it was locked */
+ if (!mapping)
+ return 0;
if (folio_test_dirty(folio) || folio_test_writeback(folio))
return 0;
/* The refcount will be elevated if any page in the folio is mapped */
@@ -281,25 +294,11 @@ static long mapping_evict_folio(struct address_space *mapping,
return remove_mapping(mapping, folio);
}
-/**
- * invalidate_inode_page() - Remove an unused page from the pagecache.
- * @page: The page to remove.
- *
- * Safely invalidate one page from its pagecache mapping.
- * It only drops clean, unused pages.
- *
- * Context: Page must be locked.
- * Return: The number of pages successfully removed.
- */
long invalidate_inode_page(struct page *page)
{
struct folio *folio = page_folio(page);
- struct address_space *mapping = folio_mapping(folio);
- /* The page may have been truncated before it was locked */
- if (!mapping)
- return 0;
- return mapping_evict_folio(mapping, folio);
+ return mapping_evict_folio(folio_mapping(folio), folio);
}
/**
--
2.42.0
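For illustration, a minimal hedged sketch of what a caller looks like once the
NULL-mapping check lives inside mapping_evict_folio(); the helper name
drop_clean_folio() is hypothetical and not part of the series:
	/* Hypothetical caller: drop a clean, unused folio from the page cache. */
	static long drop_clean_folio(struct folio *folio)
	{
		/*
		 * The folio must be locked.  folio_mapping() may return NULL if
		 * the folio was truncated before it was locked;
		 * mapping_evict_folio() now handles that itself and returns 0,
		 * so the caller needs no NULL check.
		 */
		return mapping_evict_folio(folio_mapping(folio), folio);
	}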
* [PATCH 2/6] mm: Convert __do_fault() to use a folio
2023-11-08 18:28 [PATCH 0/6] Fix fault handler's handling of poisoned tail pages Matthew Wilcox (Oracle)
2023-11-08 18:28 ` [PATCH 1/6] mm: Make mapping_evict_folio() the preferred way to evict clean folios Matthew Wilcox (Oracle)
@ 2023-11-08 18:28 ` Matthew Wilcox (Oracle)
2023-11-08 21:07 ` Andrew Morton
2023-11-08 18:28 ` [PATCH 3/6] mm: Use mapping_evict_folio() in truncate_error_page() Matthew Wilcox (Oracle)
` (4 subsequent siblings)
6 siblings, 1 reply; 11+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-11-08 18:28 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), Naoya Horiguchi, linux-mm, stable
Convert vmf->page to a folio as soon as we're going to use it. This fixes
a bug if the fault handler returns a tail page with hardware poison;
tail pages have an invalid page->index, so we would fail to unmap the
page from the page tables. We actually have to unmap the entire folio (or
mapping_evict_folio() will fail), so use unmap_mapping_folio() instead.
This also saves various calls to compound_head() hidden in lock_page(),
put_page(), etc.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Fixes: 793917d997df ("mm/readahead: Add large folio readahead")
Cc: stable@vger.kernel.org
---
mm/memory.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 1f18ed4a5497..c2ee303ba6b3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4239,6 +4239,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
static vm_fault_t __do_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
+ struct folio *folio;
vm_fault_t ret;
/*
@@ -4267,27 +4268,26 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
VM_FAULT_DONE_COW)))
return ret;
+ folio = page_folio(vmf->page);
if (unlikely(PageHWPoison(vmf->page))) {
- struct page *page = vmf->page;
vm_fault_t poisonret = VM_FAULT_HWPOISON;
if (ret & VM_FAULT_LOCKED) {
- if (page_mapped(page))
- unmap_mapping_pages(page_mapping(page),
- page->index, 1, false);
- /* Retry if a clean page was removed from the cache. */
- if (invalidate_inode_page(page))
+ if (page_mapped(vmf->page))
+ unmap_mapping_folio(folio);
+ /* Retry if a clean folio was removed from the cache. */
+ if (mapping_evict_folio(folio->mapping, folio))
poisonret = VM_FAULT_NOPAGE;
- unlock_page(page);
+ folio_unlock(folio);
}
- put_page(page);
+ folio_put(folio);
vmf->page = NULL;
return poisonret;
}
if (unlikely(!(ret & VM_FAULT_LOCKED)))
- lock_page(vmf->page);
+ folio_lock(folio);
else
- VM_BUG_ON_PAGE(!PageLocked(vmf->page), vmf->page);
+ VM_BUG_ON_PAGE(!folio_test_locked(folio), vmf->page);
return ret;
}
--
2.42.0
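For readers skimming the diff, a hedged sketch of the fixed error path condensed
into a standalone helper; handle_poisoned_fault_page() is a hypothetical name
(the real code is inline in __do_fault()) and it assumes the VM_FAULT_LOCKED
case, where the folio lock is already held:
	/*
	 * Sketch only, mirroring the hunk above: a poisoned tail page has an
	 * invalid page->index, so unmap via the folio, whose index is valid,
	 * and try to evict the whole folio from the page cache.
	 */
	static vm_fault_t handle_poisoned_fault_page(struct vm_fault *vmf)
	{
		struct folio *folio = page_folio(vmf->page);
		vm_fault_t poisonret = VM_FAULT_HWPOISON;

		if (page_mapped(vmf->page))
			unmap_mapping_folio(folio);
		/* Retry the fault if a clean folio was removed from the cache. */
		if (mapping_evict_folio(folio->mapping, folio))
			poisonret = VM_FAULT_NOPAGE;
		folio_unlock(folio);
		folio_put(folio);
		vmf->page = NULL;
		return poisonret;
	}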
* [PATCH 3/6] mm: Use mapping_evict_folio() in truncate_error_page()
2023-11-08 18:28 [PATCH 0/6] Fix fault handler's handling of poisoned tail pages Matthew Wilcox (Oracle)
2023-11-08 18:28 ` [PATCH 1/6] mm: Make mapping_evict_folio() the preferred way to evict clean folios Matthew Wilcox (Oracle)
2023-11-08 18:28 ` [PATCH 2/6] mm: Convert __do_fault() to use a folio Matthew Wilcox (Oracle)
@ 2023-11-08 18:28 ` Matthew Wilcox (Oracle)
2023-11-08 18:28 ` [PATCH 4/6] mm: Convert soft_offline_in_use_page() to use a folio Matthew Wilcox (Oracle)
` (3 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-11-08 18:28 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), Naoya Horiguchi, linux-mm
We already have the folio and the mapping, so replace the call to
invalidate_inode_page() with mapping_evict_folio().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/memory-failure.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 660c21859118..9f03952e6d38 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -930,10 +930,10 @@ static int delete_from_lru_cache(struct page *p)
static int truncate_error_page(struct page *p, unsigned long pfn,
struct address_space *mapping)
{
+ struct folio *folio = page_folio(p);
int ret = MF_FAILED;
if (mapping->a_ops->error_remove_page) {
- struct folio *folio = page_folio(p);
int err = mapping->a_ops->error_remove_page(mapping, p);
if (err != 0)
@@ -947,7 +947,7 @@ static int truncate_error_page(struct page *p, unsigned long pfn,
* If the file system doesn't support it just invalidate
* This fails on dirty or anything with private pages
*/
- if (invalidate_inode_page(p))
+ if (mapping_evict_folio(mapping, folio))
ret = MF_RECOVERED;
else
pr_info("%#lx: Failed to invalidate\n", pfn);
--
2.42.0
* [PATCH 4/6] mm: Convert soft_offline_in_use_page() to use a folio
2023-11-08 18:28 [PATCH 0/6] Fix fault handler's handling of poisoned tail pages Matthew Wilcox (Oracle)
` (2 preceding siblings ...)
2023-11-08 18:28 ` [PATCH 3/6] mm: Use mapping_evict_folio() in truncate_error_page() Matthew Wilcox (Oracle)
@ 2023-11-08 18:28 ` Matthew Wilcox (Oracle)
2023-11-08 18:28 ` [PATCH 5/6] mm: Convert isolate_page() to mf_isolate_folio() Matthew Wilcox (Oracle)
` (2 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-11-08 18:28 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), Naoya Horiguchi, linux-mm
Replace the existing head-page logic with folio logic.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/memory-failure.c | 24 ++++++++++++------------
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 9f03952e6d38..075db5b5ad5e 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2645,40 +2645,40 @@ static int soft_offline_in_use_page(struct page *page)
{
long ret = 0;
unsigned long pfn = page_to_pfn(page);
- struct page *hpage = compound_head(page);
+ struct folio *folio = page_folio(page);
char const *msg_page[] = {"page", "hugepage"};
- bool huge = PageHuge(page);
+ bool huge = folio_test_hugetlb(folio);
LIST_HEAD(pagelist);
struct migration_target_control mtc = {
.nid = NUMA_NO_NODE,
.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
};
- if (!huge && PageTransHuge(hpage)) {
+ if (!huge && folio_test_large(folio)) {
if (try_to_split_thp_page(page)) {
pr_info("soft offline: %#lx: thp split failed\n", pfn);
return -EBUSY;
}
- hpage = page;
+ folio = page_folio(page);
}
- lock_page(page);
+ folio_lock(folio);
if (!huge)
- wait_on_page_writeback(page);
+ folio_wait_writeback(folio);
if (PageHWPoison(page)) {
- unlock_page(page);
- put_page(page);
+ folio_unlock(folio);
+ folio_put(folio);
pr_info("soft offline: %#lx page already poisoned\n", pfn);
return 0;
}
- if (!huge && PageLRU(page) && !PageSwapCache(page))
+ if (!huge && folio_test_lru(folio) && !folio_test_swapcache(folio))
/*
* Try to invalidate first. This should work for
* non dirty unmapped page cache pages.
*/
- ret = invalidate_inode_page(page);
- unlock_page(page);
+ ret = mapping_evict_folio(folio_mapping(folio), folio);
+ folio_unlock(folio);
if (ret) {
pr_info("soft_offline: %#lx: invalidated\n", pfn);
@@ -2686,7 +2686,7 @@ static int soft_offline_in_use_page(struct page *page)
return 0;
}
- if (isolate_page(hpage, &pagelist)) {
+ if (isolate_page(&folio->page, &pagelist)) {
ret = migrate_pages(&pagelist, alloc_migration_target, NULL,
(unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_FAILURE, NULL);
if (!ret) {
--
2.42.0
* [PATCH 5/6] mm: Convert isolate_page() to mf_isolate_folio()
2023-11-08 18:28 [PATCH 0/6] Fix fault handler's handling of poisoned tail pages Matthew Wilcox (Oracle)
` (3 preceding siblings ...)
2023-11-08 18:28 ` [PATCH 4/6] mm: Convert soft_offline_in_use_page() to use a folio Matthew Wilcox (Oracle)
@ 2023-11-08 18:28 ` Matthew Wilcox (Oracle)
2023-11-08 18:28 ` [PATCH 6/6] mm: Remove invalidate_inode_page() Matthew Wilcox (Oracle)
2023-11-08 21:10 ` [PATCH 0/6] Fix fault handler's handling of poisoned tail pages Andrew Morton
6 siblings, 0 replies; 11+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-11-08 18:28 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), Naoya Horiguchi, linux-mm
The only caller now has a folio, so pass it in and operate on it.
Saves many page->folio conversions and introduces only one folio->page
conversion when calling isolate_movable_page().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/memory-failure.c | 28 ++++++++++++++--------------
1 file changed, 14 insertions(+), 14 deletions(-)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 075db5b5ad5e..b601f59ed062 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2602,37 +2602,37 @@ int unpoison_memory(unsigned long pfn)
}
EXPORT_SYMBOL(unpoison_memory);
-static bool isolate_page(struct page *page, struct list_head *pagelist)
+static bool mf_isolate_folio(struct folio *folio, struct list_head *pagelist)
{
bool isolated = false;
- if (PageHuge(page)) {
- isolated = isolate_hugetlb(page_folio(page), pagelist);
+ if (folio_test_hugetlb(folio)) {
+ isolated = isolate_hugetlb(folio, pagelist);
} else {
- bool lru = !__PageMovable(page);
+ bool lru = !__folio_test_movable(folio);
if (lru)
- isolated = isolate_lru_page(page);
+ isolated = folio_isolate_lru(folio);
else
- isolated = isolate_movable_page(page,
+ isolated = isolate_movable_page(&folio->page,
ISOLATE_UNEVICTABLE);
if (isolated) {
- list_add(&page->lru, pagelist);
+ list_add(&folio->lru, pagelist);
if (lru)
- inc_node_page_state(page, NR_ISOLATED_ANON +
- page_is_file_lru(page));
+ node_stat_add_folio(folio, NR_ISOLATED_ANON +
+ folio_is_file_lru(folio));
}
}
/*
- * If we succeed to isolate the page, we grabbed another refcount on
- * the page, so we can safely drop the one we got from get_any_page().
- * If we failed to isolate the page, it means that we cannot go further
+ * If we succeed to isolate the folio, we grabbed another refcount on
+ * the folio, so we can safely drop the one we got from get_any_page().
+ * If we failed to isolate the folio, it means that we cannot go further
* and we will return an error, so drop the reference we got from
* get_any_page() as well.
*/
- put_page(page);
+ folio_put(folio);
return isolated;
}
@@ -2686,7 +2686,7 @@ static int soft_offline_in_use_page(struct page *page)
return 0;
}
- if (isolate_page(&folio->page, &pagelist)) {
+ if (mf_isolate_folio(folio, &pagelist)) {
ret = migrate_pages(&pagelist, alloc_migration_target, NULL,
(unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_FAILURE, NULL);
if (!ret) {
--
2.42.0
* [PATCH 6/6] mm: Remove invalidate_inode_page()
2023-11-08 18:28 [PATCH 0/6] Fix fault handler's handling of poisoned tail pages Matthew Wilcox (Oracle)
` (4 preceding siblings ...)
2023-11-08 18:28 ` [PATCH 5/6] mm: Convert isolate_page() to mf_isolate_folio() Matthew Wilcox (Oracle)
@ 2023-11-08 18:28 ` Matthew Wilcox (Oracle)
2023-11-08 21:10 ` [PATCH 0/6] Fix fault handler's handling of poisoned tail pages Andrew Morton
6 siblings, 0 replies; 11+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-11-08 18:28 UTC (permalink / raw)
To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), Naoya Horiguchi, linux-mm
All callers are now converted to call mapping_evict_folio().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/internal.h | 1 -
mm/truncate.c | 11 ++---------
2 files changed, 2 insertions(+), 10 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h
index 687d89d317d0..7e84ec0219b1 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -139,7 +139,6 @@ int truncate_inode_folio(struct address_space *mapping, struct folio *folio);
bool truncate_inode_partial_folio(struct folio *folio, loff_t start,
loff_t end);
long mapping_evict_folio(struct address_space *mapping, struct folio *folio);
-long invalidate_inode_page(struct page *page);
unsigned long mapping_try_invalidate(struct address_space *mapping,
pgoff_t start, pgoff_t end, unsigned long *nr_failed);
diff --git a/mm/truncate.c b/mm/truncate.c
index 1d516e51e29d..52e3a703e7b2 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -294,13 +294,6 @@ long mapping_evict_folio(struct address_space *mapping, struct folio *folio)
return remove_mapping(mapping, folio);
}
-long invalidate_inode_page(struct page *page)
-{
- struct folio *folio = page_folio(page);
-
- return mapping_evict_folio(folio_mapping(folio), folio);
-}
-
/**
* truncate_inode_pages_range - truncate range of pages specified by start & end byte offsets
* @mapping: mapping to truncate
@@ -559,9 +552,9 @@ unsigned long invalidate_mapping_pages(struct address_space *mapping,
EXPORT_SYMBOL(invalidate_mapping_pages);
/*
- * This is like invalidate_inode_page(), except it ignores the page's
+ * This is like mapping_evict_folio(), except it ignores the folio's
* refcount. We do this because invalidate_inode_pages2() needs stronger
- * invalidation guarantees, and cannot afford to leave pages behind because
+ * invalidation guarantees, and cannot afford to leave folios behind because
* shrink_page_list() has a temp ref on them, or because they're transiently
* sitting in the folio_add_lru() caches.
*/
--
2.42.0
* Re: [PATCH 2/6] mm: Convert __do_fault() to use a folio
2023-11-08 18:28 ` [PATCH 2/6] mm: Convert __do_fault() to use a folio Matthew Wilcox (Oracle)
@ 2023-11-08 21:07 ` Andrew Morton
2023-11-08 21:48 ` Matthew Wilcox
0 siblings, 1 reply; 11+ messages in thread
From: Andrew Morton @ 2023-11-08 21:07 UTC (permalink / raw)
To: Matthew Wilcox (Oracle); +Cc: Naoya Horiguchi, linux-mm, stable
On Wed, 8 Nov 2023 18:28:05 +0000 "Matthew Wilcox (Oracle)" <willy@infradead.org> wrote:
> Convert vmf->page to a folio as soon as we're going to use it. This fixes
> a bug if the fault handler returns a tail page with hardware poison;
> tail pages have an invalid page->index, so we would fail to unmap the
> page from the page tables. We actually have to unmap the entire folio (or
> mapping_evict_folio() will
Would we merely fail to unmap or is there a possibility of unmapping
some random innocent other page?
How might this bug manifest in userspace, worst case?
> fail), so use unmap_mapping_folio() instead.
>
> This also saves various calls to compound_head() hidden in lock_page(),
> put_page(), etc.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Fixes: 793917d997df ("mm/readahead: Add large folio readahead")
> Cc: stable@vger.kernel.org
As it's cc:stable I'll pluck this patch out of the rest of the series
and shall stage it for 6.7-rcX, via mm-hotfixes-unstable ->
mm-hotfixes-stable -> Linus. Unless this bug is a very minor thing?
* Re: [PATCH 0/6] Fix fault handler's handling of poisoned tail pages
2023-11-08 18:28 [PATCH 0/6] Fix fault handler's handling of poisoned tail pages Matthew Wilcox (Oracle)
` (5 preceding siblings ...)
2023-11-08 18:28 ` [PATCH 6/6] mm: Remove invalidate_inode_page() Matthew Wilcox (Oracle)
@ 2023-11-08 21:10 ` Andrew Morton
2023-11-08 21:49 ` Matthew Wilcox
6 siblings, 1 reply; 11+ messages in thread
From: Andrew Morton @ 2023-11-08 21:10 UTC (permalink / raw)
To: Matthew Wilcox (Oracle); +Cc: Naoya Horiguchi, linux-mm
On Wed, 8 Nov 2023 18:28:03 +0000 "Matthew Wilcox (Oracle)" <willy@infradead.org> wrote:
> This isn't a minimal patch to fix it, it's a full conversion of all
> the code surrounding it. I'll take care of backports to 6.5/6.1.
Ah. So you think all 6 should be backported?
* Re: [PATCH 2/6] mm: Convert __do_fault() to use a folio
2023-11-08 21:07 ` Andrew Morton
@ 2023-11-08 21:48 ` Matthew Wilcox
0 siblings, 0 replies; 11+ messages in thread
From: Matthew Wilcox @ 2023-11-08 21:48 UTC (permalink / raw)
To: Andrew Morton; +Cc: Naoya Horiguchi, linux-mm, stable
On Wed, Nov 08, 2023 at 01:07:51PM -0800, Andrew Morton wrote:
> On Wed, 8 Nov 2023 18:28:05 +0000 "Matthew Wilcox (Oracle)" <willy@infradead.org> wrote:
>
> > Convert vmf->page to a folio as soon as we're going to use it. This fixes
> > a bug if the fault handler returns a tail page with hardware poison;
> > tail pages have an invalid page->index, so we would fail to unmap the
> > page from the page tables. We actually have to unmap the entire folio (or
> > mapping_evict_folio() will
>
> Would we merely fail to unmap or is there a possibility of unmapping
> some random innocent other page?
>
> How might this bug manifest in userspace, worst case?
I think we might unmap a random other page in this file. But then the
next fault on that page will bring it back in, so it's only going to be
a tiny performance blip. And we've just found a hwpoisoned page which
is going to cause all kinds of other excitement, so I doubt it'll be
noticed in the grand scheme of things.
> As it's cc:stable I'll pluck this patch out of the rest of the series
> and shall stage it for 6.7-rcX, via mm-hotfixes-unstable ->
> mm-hotfixes-stable -> Linus. Unless this bug is a very minor thing?
I think it's minor enough that it can wait for 6.8. Unless anyone
disagrees?
* Re: [PATCH 0/6] Fix fault handler's handling of poisoned tail pages
2023-11-08 21:10 ` [PATCH 0/6] Fix fault handler's handling of poisoned tail pages Andrew Morton
@ 2023-11-08 21:49 ` Matthew Wilcox
0 siblings, 0 replies; 11+ messages in thread
From: Matthew Wilcox @ 2023-11-08 21:49 UTC (permalink / raw)
To: Andrew Morton; +Cc: Naoya Horiguchi, linux-mm
On Wed, Nov 08, 2023 at 01:10:09PM -0800, Andrew Morton wrote:
> On Wed, 8 Nov 2023 18:28:03 +0000 "Matthew Wilcox (Oracle)" <willy@infradead.org> wrote:
>
> > This isn't a minimal patch to fix it, it's a full conversion of all
> > the code surrounding it. I'll take care of backports to 6.5/6.1.
>
> Ah. So you think all 6 should be backported?
More that I'd do it differently for a backport. I'll need to look at
6.1 and see what infrastructure we have there, and whether it's worth
creating the missing infrastructure or just bodging it.