From: Miaohe Lin <linmiaohe@huawei.com>
To: <akpm@linux-foundation.org>, <mike.kravetz@oracle.com>,
<naoya.horiguchi@nec.com>
Cc: <ying.huang@intel.com>, <hch@lst.de>, <dhowells@redhat.com>,
<cl@linux.com>, <david@redhat.com>, <linux-mm@kvack.org>,
<linux-kernel@vger.kernel.org>, <linmiaohe@huawei.com>
Subject: [PATCH v2 3/4] mm/migration: return errno when isolate_huge_page failed
Date: Mon, 25 Apr 2022 21:27:22 +0800
Message-ID: <20220425132723.34824-4-linmiaohe@huawei.com>
In-Reply-To: <20220425132723.34824-1-linmiaohe@huawei.com>

We might fail to isolate a huge page because, for example, it is under
migration and HPageMigratable has already been cleared. In that case we
should return -EBUSY rather than always returning 1, which could confuse
the caller. Also make the prototype of isolate_huge_page consistent with
isolate_lru_page to improve readability.
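
To illustrate the new calling convention, a minimal caller-side sketch
(hypothetical caller; 'page', 'pagelist' and handle_isolation_failure()
are placeholders, not part of this diff):

  /* Before: returned bool, true on success. */
  if (!isolate_huge_page(page, &pagelist))
          handle_isolation_failure();     /* hypothetical helper */

  /* After: returns int, 0 on success, -EBUSY on failure,
   * matching isolate_lru_page().
   */
  err = isolate_huge_page(page, &pagelist);
  if (err)
          handle_isolation_failure();     /* hypothetical helper */
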
Fixes: e8db67eb0ded ("mm: migrate: move_pages() supports thp migration")
Suggested-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 include/linux/hugetlb.h |  6 +++---
 mm/gup.c                |  2 +-
 mm/hugetlb.c            | 11 +++++------
 mm/memory-failure.c     |  2 +-
 mm/mempolicy.c          |  2 +-
 mm/migrate.c            |  5 +++--
 6 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 04f0186b089b..306d6ef3fa22 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -170,7 +170,7 @@ bool hugetlb_reserve_pages(struct inode *inode, long from, long to,
vm_flags_t vm_flags);
long hugetlb_unreserve_pages(struct inode *inode, long start, long end,
long freed);
-bool isolate_huge_page(struct page *page, struct list_head *list);
+int isolate_huge_page(struct page *page, struct list_head *list);
int get_hwpoison_huge_page(struct page *page, bool *hugetlb);
int get_huge_page_for_hwpoison(unsigned long pfn, int flags);
void putback_active_hugepage(struct page *page);
@@ -376,9 +376,9 @@ static inline pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr,
return NULL;
}
-static inline bool isolate_huge_page(struct page *page, struct list_head *list)
+static inline int isolate_huge_page(struct page *page, struct list_head *list)
{
- return false;
+ return -EBUSY;
}
static inline int get_hwpoison_huge_page(struct page *page, bool *hugetlb)
diff --git a/mm/gup.c b/mm/gup.c
index 5c17d4816441..c15d41636e8e 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1869,7 +1869,7 @@ static long check_and_migrate_movable_pages(unsigned long nr_pages,
* Try to move out any movable page before pinning the range.
*/
if (folio_test_hugetlb(folio)) {
- if (!isolate_huge_page(&folio->page,
+ if (isolate_huge_page(&folio->page,
&movable_page_list))
isolation_error_count++;
continue;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 74c9964c1b11..098f81e8550d 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2766,8 +2766,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
* Fail with -EBUSY if not possible.
*/
spin_unlock_irq(&hugetlb_lock);
- if (!isolate_huge_page(old_page, list))
- ret = -EBUSY;
+ ret = isolate_huge_page(old_page, list);
spin_lock_irq(&hugetlb_lock);
goto free_new;
} else if (!HPageFreed(old_page)) {
@@ -2843,7 +2842,7 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
if (hstate_is_gigantic(h))
return -ENOMEM;
- if (page_count(head) && isolate_huge_page(head, list))
+ if (page_count(head) && !isolate_huge_page(head, list))
ret = 0;
else if (!page_count(head))
ret = alloc_and_dissolve_huge_page(h, head, list);
@@ -6940,15 +6939,15 @@ follow_huge_pgd(struct mm_struct *mm, unsigned long address, pgd_t *pgd, int fla
return pte_page(*(pte_t *)pgd) + ((address & ~PGDIR_MASK) >> PAGE_SHIFT);
}
-bool isolate_huge_page(struct page *page, struct list_head *list)
+int isolate_huge_page(struct page *page, struct list_head *list)
{
- bool ret = true;
+ int ret = 0;
spin_lock_irq(&hugetlb_lock);
if (!PageHeadHuge(page) ||
!HPageMigratable(page) ||
!get_page_unless_zero(page)) {
- ret = false;
+ ret = -EBUSY;
goto unlock;
}
ClearHPageMigratable(page);
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 1d117190c350..a83d32bbc567 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2203,7 +2203,7 @@ static bool isolate_page(struct page *page, struct list_head *pagelist)
bool lru = PageLRU(page);
if (PageHuge(page)) {
- isolated = isolate_huge_page(page, pagelist);
+ isolated = !isolate_huge_page(page, pagelist);
} else {
if (lru)
isolated = !isolate_lru_page(page);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index e4f125e48cc4..a4467c4e9f8d 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -602,7 +602,7 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
/* With MPOL_MF_MOVE, we migrate only unshared hugepage. */
if (flags & (MPOL_MF_MOVE_ALL) ||
(flags & MPOL_MF_MOVE && page_mapcount(page) == 1)) {
- if (!isolate_huge_page(page, qp->pagelist) &&
+ if (isolate_huge_page(page, qp->pagelist) &&
(flags & MPOL_MF_STRICT))
/*
* Failed to isolate page but allow migrating pages
diff --git a/mm/migrate.c b/mm/migrate.c
index 0fc4651b3e39..c937a496239b 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1628,8 +1628,9 @@ static int add_page_for_migration(struct mm_struct *mm, unsigned long addr,
if (PageHuge(page)) {
if (PageHead(page)) {
- isolate_huge_page(page, pagelist);
- err = 1;
+ err = isolate_huge_page(page, pagelist);
+ if (!err)
+ err = 1;
}
} else {
struct page *head;
--
2.23.0