From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
linux-mm@kvack.org, Miaohe Lin <linmiaohe@huawei.com>,
Naoya Horiguchi <naoya.horiguchi@nec.com>
Subject: [PATCH 3/8] mm: Return the address from page_mapped_in_vma()
Date: Thu, 29 Feb 2024 21:20:29 +0000
Message-ID: <20240229212036.2160900-4-willy@infradead.org>
In-Reply-To: <20240229212036.2160900-1-willy@infradead.org>

The only caller of page_mapped_in_vma() goes on to call
page_address_in_vma(), recomputing the address that
page_mapped_in_vma() has already calculated internally and discarded
in order to return true/false.  Return the address instead, allowing
memory-failure to skip the call to page_address_in_vma().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
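To make the shape of the change concrete, here is a minimal sketch of
the collect_procs_anon() call site before and after (simplified from
the hunks below; the surrounding iteration and locking are trimmed):

	/* Before: page_mapped_in_vma() computes the address internally,
	 * throws it away to return a boolean, and add_to_kill_anon_file()
	 * then recomputes it via page_address_in_vma().
	 */
	if (!page_mapped_in_vma(page, vma))
		continue;
	add_to_kill_anon_file(t, page, vma, to_kill);

	/* After: the address is computed once and handed down; the callee
	 * returns early if it is -EFAULT.
	 */
	addr = page_mapped_in_vma(page, vma);
	add_to_kill_anon_file(t, page, vma, to_kill, addr);
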
include/linux/rmap.h | 2 +-
mm/memory-failure.c | 22 ++++++++++++++--------
mm/page_vma_mapped.c | 14 +++++++-------
3 files changed, 22 insertions(+), 16 deletions(-)
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index b7944a833668..ba027a4d9abf 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -698,7 +698,7 @@ int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
void remove_migration_ptes(struct folio *src, struct folio *dst, bool locked);
-int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma);
+unsigned long page_mapped_in_vma(struct page *page, struct vm_area_struct *vma);
/*
* rmap_walk_control: To control rmap traversing for specific needs
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 7f8473c08ae3..40a8964954e5 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -462,10 +462,11 @@ static void __add_to_kill(struct task_struct *tsk, struct page *p,
}
static void add_to_kill_anon_file(struct task_struct *tsk, struct page *p,
- struct vm_area_struct *vma,
- struct list_head *to_kill)
+ struct vm_area_struct *vma, struct list_head *to_kill,
+ unsigned long addr)
{
- unsigned long addr = page_address_in_vma(p, vma);
+ if (addr == -EFAULT)
+ return;
__add_to_kill(tsk, p, vma, to_kill, addr);
}
@@ -609,12 +610,13 @@ static void collect_procs_anon(struct folio *folio, struct page *page,
continue;
anon_vma_interval_tree_foreach(vmac, &av->rb_root,
pgoff, pgoff) {
+ unsigned long addr;
+
vma = vmac->vma;
if (vma->vm_mm != t->mm)
continue;
- if (!page_mapped_in_vma(page, vma))
- continue;
- add_to_kill_anon_file(t, page, vma, to_kill);
+ addr = page_mapped_in_vma(page, vma);
+ add_to_kill_anon_file(t, page, vma, to_kill, addr);
}
}
rcu_read_unlock();
@@ -642,6 +644,8 @@ static void collect_procs_file(struct folio *folio, struct page *page,
continue;
vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff,
pgoff) {
+ unsigned long addr;
+
/*
* Send early kill signal to tasks where a vma covers
* the page but the corrupted page is not necessarily
@@ -649,8 +653,10 @@ static void collect_procs_file(struct folio *folio, struct page *page,
* Assume applications who requested early kill want
* to be informed of all such data corruptions.
*/
- if (vma->vm_mm == t->mm)
- add_to_kill_anon_file(t, page, vma, to_kill);
+ if (vma->vm_mm != t->mm)
+ continue;
+ addr = page_address_in_vma(page, vma);
+ add_to_kill_anon_file(t, page, vma, to_kill, addr);
}
}
rcu_read_unlock();
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 74d2de15fb5e..e9e208b4ac4b 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -319,11 +319,11 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
* @page: the page to test
* @vma: the VMA to test
*
- * Returns 1 if the page is mapped into the page tables of the VMA, 0
- * if the page is not mapped into the page tables of this VMA. Only
- * valid for normal file or anonymous VMAs.
+ * Return: If the page is mapped into the page tables of the VMA, the
+ * address that the page is mapped at. -EFAULT if the page is not mapped.
+ * Only valid for normal file or anonymous VMAs.
*/
-int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
+unsigned long page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
{
struct page_vma_mapped_walk pvmw = {
.pfn = page_to_pfn(page),
@@ -334,9 +334,9 @@ int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
pvmw.address = vma_address(page, vma);
if (pvmw.address == -EFAULT)
- return 0;
+ return -EFAULT;
if (!page_vma_mapped_walk(&pvmw))
- return 0;
+ return -EFAULT;
page_vma_mapped_walk_done(&pvmw);
- return 1;
+ return pvmw.address;
}
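
One subtlety in the interface above: page_mapped_in_vma() now returns
an address as an unsigned long, yet hands back the negative errno
-EFAULT on failure, and callers test "addr == -EFAULT" directly.  This
works because the int constant -EFAULT is converted to unsigned long
in the comparison, producing exactly the value the return statement
stored.  A standalone userspace sketch (not kernel code; EFAULT is 14
on Linux) demonstrating the conversion:

	#include <stdio.h>

	#define EFAULT 14	/* Bad address, as in the kernel */

	/* Stand-in for page_mapped_in_vma(): address or -EFAULT. */
	static unsigned long lookup(int mapped)
	{
		if (!mapped)
			return -EFAULT;	/* becomes ULONG_MAX - 13 */
		return 0x7f0000001000UL;
	}

	int main(void)
	{
		unsigned long addr = lookup(0);

		/* -EFAULT is converted to unsigned long here too,
		 * so the sentinel compares equal.
		 */
		if (addr == -EFAULT)
			printf("not mapped\n");
		return 0;
	}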
--
2.43.0