linux-mm.kvack.org archive mirror
* [PATCH mm-unstable v3 0/6] Convert various mempolicy.c functions to use folios
@ 2023-01-25 23:41 Vishal Moola (Oracle)
  2023-01-25 23:41 ` [PATCH mm-unstable v3 1/6] mm: Add folio_estimated_mapcount() Vishal Moola (Oracle)
                   ` (5 more replies)
  0 siblings, 6 replies; 7+ messages in thread
From: Vishal Moola (Oracle) @ 2023-01-25 23:41 UTC (permalink / raw)
  To: linux-mm; +Cc: akpm, linux-kernel, Vishal Moola (Oracle)

This patch series converts migrate_page_add() and queue_pages_required()
to migrate_folio_add() and queue_folio_required(). It also converts the
callers of both functions to use folios, and introduces a helper
function to estimate a folio's mapcount.
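
For orientation, a rough sketch of the call pattern the series converges
on (condensed from the page-table walkers converted below; not a literal
excerpt of any one hunk):

	/* a walker finds a folio and checks it against the policy */
	folio = vm_normal_folio(vma, addr, *pte);
	if (!folio || !queue_folio_required(folio, qp))
		return 0;
	/* if migration was requested, try to queue the folio */
	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
		if (migrate_folio_add(folio, qp->pagelist, flags))
			return 1;	/* unmovable folio */
	return 0;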

---
v3:
  - Move folio_estimated_mapcount() to fix build warnings
  - Add comments to folio_estimated_mapcount() to make its function and
purpose clear

v2:
  - Introduce folio_estimated_mapcount() to replace page_mapcount() in
migrate_page_add() and queue_pages_hugetlb().
  - Elaborate on the comments to make it clear what the mapcount check is
for and why it is being done this way.

Vishal Moola (Oracle) (6):
  mm: Add folio_estimated_mapcount()
  mm/mempolicy: Convert queue_pages_pmd() to queue_folios_pmd()
  mm/mempolicy: Convert queue_pages_pte_range() to
    queue_folios_pte_range()
  mm/mempolicy: Convert queue_pages_hugetlb() to queue_folios_hugetlb()
  mm/mempolicy: Convert queue_pages_required() to queue_folio_required()
  mm/mempolicy: Convert migrate_page_add() to migrate_folio_add()

 include/linux/mm.h |  17 +++++++
 mm/mempolicy.c     | 122 ++++++++++++++++++++++++---------------------
 2 files changed, 82 insertions(+), 57 deletions(-)

-- 
2.38.1




* [PATCH mm-unstable v3 1/6] mm: Add folio_estimated_mapcount()
  2023-01-25 23:41 [PATCH mm-unstable v3 0/6] Convert various mempolicy.c functions to use folios Vishal Moola (Oracle)
@ 2023-01-25 23:41 ` Vishal Moola (Oracle)
  2023-01-25 23:41 ` [PATCH mm-unstable v3 2/6] mm/mempolicy: Convert queue_pages_pmd() to queue_folios_pmd() Vishal Moola (Oracle)
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Vishal Moola (Oracle) @ 2023-01-25 23:41 UTC (permalink / raw)
  To: linux-mm; +Cc: akpm, linux-kernel, Vishal Moola (Oracle)

folio_estimated_mapcount() takes in a folio and returns the precise
number of times the first subpage of the folio is mapped.

This function aims to provide an estimate of the mapcount of a subpage
within a folio. This is needed for folio conversions where we care about
the mapcount of a subpage, but not that of the folio as a whole.

This is in contrast to folio_mapcount(), which calculates the total
number of times a folio and all its subpages are mapped.
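
For illustration only (not part of this patch), the estimate is intended
for cheap sharing heuristics like the ones used later in this series; a
minimal sketch with a hypothetical helper name:

	/* Hypothetical helper, for illustration only -- not added by this series. */
	static inline bool folio_likely_unshared(struct folio *folio)
	{
		/*
		 * An exact answer would require checking the mapcount of
		 * every subpage; the first subpage's mapcount is cheap to
		 * read and is good enough for "migrate only unshared
		 * folios" style decisions.
		 */
		return folio_estimated_mapcount(folio) == 1;
	}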

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm.h | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c9db257f09b3..fdd5b77ac209 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1817,6 +1817,23 @@ static inline size_t folio_size(struct folio *folio)
 	return PAGE_SIZE << folio_order(folio);
 }
 
+/**
+ * folio_estimated_mapcount - Estimate a folio's per-page mapcount.
+ * @folio: The folio.
+ *
+ * folio_estimated_mapcount() aims to serve as a function to efficiently
+ * estimate the number of times each page in a folio is mapped.
+ * This may not be accurate for large folios. If you want exact mapcounts,
+ * look at page_mapcount() or folio_total_mapcount().
+ *
+ * Return: The precise mapcount of the first subpage, meant to estimate
+ * the precise mapcount of any subpage.
+ */
+static inline int folio_estimated_mapcount(struct folio *folio)
+{
+	return page_mapcount(folio_page(folio, 0));
+}
+
 #ifndef HAVE_ARCH_MAKE_PAGE_ACCESSIBLE
 static inline int arch_make_page_accessible(struct page *page)
 {
-- 
2.38.1




* [PATCH mm-unstable v3 2/6] mm/mempolicy: Convert queue_pages_pmd() to queue_folios_pmd()
  2023-01-25 23:41 [PATCH mm-unstable v3 0/6] Convert various mempolicy.c functions to use folios Vishal Moola (Oracle)
  2023-01-25 23:41 ` [PATCH mm-unstable v3 1/6] mm: Add folio_estimated_mapcount() Vishal Moola (Oracle)
@ 2023-01-25 23:41 ` Vishal Moola (Oracle)
  2023-01-25 23:41 ` [PATCH mm-unstable v3 3/6] mm/mempolicy: Convert queue_pages_pte_range() to queue_folios_pte_range() Vishal Moola (Oracle)
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Vishal Moola (Oracle) @ 2023-01-25 23:41 UTC (permalink / raw)
  To: linux-mm; +Cc: akpm, linux-kernel, Vishal Moola (Oracle)

The function now operates on a folio instead of
the page associated with a pmd.

This change is in preparation for the conversion of
queue_pages_required() to queue_folio_required() and migrate_page_add()
to migrate_folio_add().
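
Condensed, the heart of the conversion is the lookup at the top of the
walker; the rest of the hunk threads the folio through (sketch, not a
literal excerpt of the diff below):

	/* before: work on the pmd's head page */
	page = pmd_page(*pmd);
	/* after: go straight from the pmd's pfn to its folio */
	folio = pfn_folio(pmd_pfn(*pmd));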

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 mm/mempolicy.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index fd99d303e34f..00fffa93adae 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -442,21 +442,21 @@ static inline bool queue_pages_required(struct page *page,
 }
 
 /*
- * queue_pages_pmd() has three possible return values:
- * 0 - pages are placed on the right node or queued successfully, or
+ * queue_folios_pmd() has three possible return values:
+ * 0 - folios are placed on the right node or queued successfully, or
  *     special page is met, i.e. huge zero page.
- * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
+ * 1 - there is unmovable folio, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
  *     specified.
  * -EIO - is migration entry or only MPOL_MF_STRICT was specified and an
- *        existing page was already on a node that does not follow the
+ *        existing folio was already on a node that does not follow the
  *        policy.
  */
-static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
+static int queue_folios_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
 				unsigned long end, struct mm_walk *walk)
 	__releases(ptl)
 {
 	int ret = 0;
-	struct page *page;
+	struct folio *folio;
 	struct queue_pages *qp = walk->private;
 	unsigned long flags;
 
@@ -464,19 +464,19 @@ static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
 		ret = -EIO;
 		goto unlock;
 	}
-	page = pmd_page(*pmd);
-	if (is_huge_zero_page(page)) {
+	folio = pfn_folio(pmd_pfn(*pmd));
+	if (is_huge_zero_page(&folio->page)) {
 		walk->action = ACTION_CONTINUE;
 		goto unlock;
 	}
-	if (!queue_pages_required(page, qp))
+	if (!queue_pages_required(&folio->page, qp))
 		goto unlock;
 
 	flags = qp->flags;
-	/* go to thp migration */
+	/* go to folio migration */
 	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
 		if (!vma_migratable(walk->vma) ||
-		    migrate_page_add(page, qp->pagelist, flags)) {
+		    migrate_page_add(&folio->page, qp->pagelist, flags)) {
 			ret = 1;
 			goto unlock;
 		}
@@ -512,7 +512,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (ptl)
-		return queue_pages_pmd(pmd, ptl, addr, end, walk);
+		return queue_folios_pmd(pmd, ptl, addr, end, walk);
 
 	if (pmd_trans_unstable(pmd))
 		return 0;
-- 
2.38.1




* [PATCH mm-unstable v3 3/6] mm/mempolicy: Convert queue_pages_pte_range() to queue_folios_pte_range()
  2023-01-25 23:41 [PATCH mm-unstable v3 0/6] Convert various mempolicy.c functions to use folios Vishal Moola (Oracle)
  2023-01-25 23:41 ` [PATCH mm-unstable v3 1/6] mm: Add folio_estimated_mapcount() Vishal Moola (Oracle)
  2023-01-25 23:41 ` [PATCH mm-unstable v3 2/6] mm/mempolicy: Convert queue_pages_pmd() to queue_folios_pmd() Vishal Moola (Oracle)
@ 2023-01-25 23:41 ` Vishal Moola (Oracle)
  2023-01-25 23:41 ` [PATCH mm-unstable v3 4/6] mm/mempolicy: Convert queue_pages_hugetlb() to queue_folios_hugetlb() Vishal Moola (Oracle)
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Vishal Moola (Oracle) @ 2023-01-25 23:41 UTC (permalink / raw)
  To: linux-mm; +Cc: akpm, linux-kernel, Vishal Moola (Oracle)

This function now operates on folios associated with ptes instead of
pages.

This change is in preparation for the conversion of
queue_pages_required() to queue_folio_required() and migrate_page_add()
to migrate_folio_add().
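
As in the pmd walker, the essential change is the per-pte lookup; a
condensed view (not a literal excerpt of the diff below):

	/* before: page-based lookup */
	page = vm_normal_page(vma, addr, *pte);
	/* after: vm_normal_folio() hands back the folio directly, and the
	 * zone-device and reserved checks switch to folio_is_zone_device()
	 * and folio_test_reserved() */
	folio = vm_normal_folio(vma, addr, *pte);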

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 mm/mempolicy.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 00fffa93adae..ae9d16124f45 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -491,19 +491,19 @@ static int queue_folios_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
  * Scan through pages checking if pages follow certain conditions,
  * and move them to the pagelist if they do.
  *
- * queue_pages_pte_range() has three possible return values:
- * 0 - pages are placed on the right node or queued successfully, or
+ * queue_folios_pte_range() has three possible return values:
+ * 0 - folios are placed on the right node or queued successfully, or
  *     special page is met, i.e. zero page.
- * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
+ * 1 - there is unmovable folio, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
  *     specified.
- * -EIO - only MPOL_MF_STRICT was specified and an existing page was already
+ * -EIO - only MPOL_MF_STRICT was specified and an existing folio was already
  *        on a node that does not follow the policy.
  */
-static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
+static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
 			unsigned long end, struct mm_walk *walk)
 {
 	struct vm_area_struct *vma = walk->vma;
-	struct page *page;
+	struct folio *folio;
 	struct queue_pages *qp = walk->private;
 	unsigned long flags = qp->flags;
 	bool has_unmovable = false;
@@ -521,16 +521,16 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 	for (; addr != end; pte++, addr += PAGE_SIZE) {
 		if (!pte_present(*pte))
 			continue;
-		page = vm_normal_page(vma, addr, *pte);
-		if (!page || is_zone_device_page(page))
+		folio = vm_normal_folio(vma, addr, *pte);
+		if (!folio || folio_is_zone_device(folio))
 			continue;
 		/*
-		 * vm_normal_page() filters out zero pages, but there might
-		 * still be PageReserved pages to skip, perhaps in a VDSO.
+		 * vm_normal_folio() filters out zero pages, but there might
+		 * still be reserved folios to skip, perhaps in a VDSO.
 		 */
-		if (PageReserved(page))
+		if (folio_test_reserved(folio))
 			continue;
-		if (!queue_pages_required(page, qp))
+		if (!queue_pages_required(&folio->page, qp))
 			continue;
 		if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
 			/* MPOL_MF_STRICT must be specified if we get here */
@@ -544,7 +544,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 			 * temporary off LRU pages in the range.  Still
 			 * need migrate other LRU pages.
 			 */
-			if (migrate_page_add(page, qp->pagelist, flags))
+			if (migrate_page_add(&folio->page, qp->pagelist, flags))
 				has_unmovable = true;
 		} else
 			break;
@@ -703,7 +703,7 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end,
 
 static const struct mm_walk_ops queue_pages_walk_ops = {
 	.hugetlb_entry		= queue_pages_hugetlb,
-	.pmd_entry		= queue_pages_pte_range,
+	.pmd_entry		= queue_folios_pte_range,
 	.test_walk		= queue_pages_test_walk,
 };
 
-- 
2.38.1




* [PATCH mm-unstable v3 4/6] mm/mempolicy: Convert queue_pages_hugetlb() to queue_folios_hugetlb()
  2023-01-25 23:41 [PATCH mm-unstable v3 0/6] Convert various mempolicy.c functions to use folios Vishal Moola (Oracle)
                   ` (2 preceding siblings ...)
  2023-01-25 23:41 ` [PATCH mm-unstable v3 3/6] mm/mempolicy: Convert queue_pages_pte_range() to queue_folios_pte_range() Vishal Moola (Oracle)
@ 2023-01-25 23:41 ` Vishal Moola (Oracle)
  2023-01-25 23:41 ` [PATCH mm-unstable v3 5/6] mm/mempolicy: Convert queue_pages_required() to queue_folio_required() Vishal Moola (Oracle)
  2023-01-25 23:41 ` [PATCH mm-unstable v3 6/6] mm/mempolicy: Convert migrate_page_add() to migrate_folio_add() Vishal Moola (Oracle)
  5 siblings, 0 replies; 7+ messages in thread
From: Vishal Moola (Oracle) @ 2023-01-25 23:41 UTC (permalink / raw)
  To: linux-mm; +Cc: akpm, linux-kernel, Vishal Moola (Oracle)

This change is in preparation for the conversion of
queue_pages_required() to queue_folio_required() and migrate_page_add()
to migrate_folio_add().

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 mm/mempolicy.c | 29 ++++++++++++++++++-----------
 1 file changed, 18 insertions(+), 11 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index ae9d16124f45..ea8cac447e04 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -558,7 +558,7 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
 	return addr != end ? -EIO : 0;
 }
 
-static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
+static int queue_folios_hugetlb(pte_t *pte, unsigned long hmask,
 			       unsigned long addr, unsigned long end,
 			       struct mm_walk *walk)
 {
@@ -566,7 +566,7 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
 #ifdef CONFIG_HUGETLB_PAGE
 	struct queue_pages *qp = walk->private;
 	unsigned long flags = (qp->flags & MPOL_MF_VALID);
-	struct page *page;
+	struct folio *folio;
 	spinlock_t *ptl;
 	pte_t entry;
 
@@ -574,13 +574,13 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
 	entry = huge_ptep_get(pte);
 	if (!pte_present(entry))
 		goto unlock;
-	page = pte_page(entry);
-	if (!queue_pages_required(page, qp))
+	folio = pfn_folio(pte_pfn(entry));
+	if (!queue_pages_required(&folio->page, qp))
 		goto unlock;
 
 	if (flags == MPOL_MF_STRICT) {
 		/*
-		 * STRICT alone means only detecting misplaced page and no
+		 * STRICT alone means only detecting misplaced folio and no
 		 * need to further check other vma.
 		 */
 		ret = -EIO;
@@ -591,20 +591,27 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
 		/*
 		 * Must be STRICT with MOVE*, otherwise .test_walk() have
 		 * stopped walking current vma.
-		 * Detecting misplaced page but allow migrating pages which
+		 * Detecting misplaced folio but allow migrating folios which
 		 * have been queued.
 		 */
 		ret = 1;
 		goto unlock;
 	}
 
-	/* With MPOL_MF_MOVE, we migrate only unshared hugepage. */
+	/*
+	 * With MPOL_MF_MOVE, we try to migrate only unshared folios. If it
+	 * is shared it is likely not worth migrating.
+	 *
+	 * To check if the folio is shared, ideally we want to make sure
+	 * every page is mapped to the same process. Doing that is very
+	 * expensive, so check the estimated mapcount of the folio instead.
+	 */
 	if (flags & (MPOL_MF_MOVE_ALL) ||
-	    (flags & MPOL_MF_MOVE && page_mapcount(page) == 1)) {
-		if (isolate_hugetlb(page_folio(page), qp->pagelist) &&
+	    (flags & MPOL_MF_MOVE && folio_estimated_mapcount(folio) == 1)) {
+		if (isolate_hugetlb(folio, qp->pagelist) &&
 			(flags & MPOL_MF_STRICT))
 			/*
-			 * Failed to isolate page but allow migrating pages
+			 * Failed to isolate folio but allow migrating folios
 			 * which have been queued.
 			 */
 			ret = 1;
@@ -702,7 +709,7 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end,
 }
 
 static const struct mm_walk_ops queue_pages_walk_ops = {
-	.hugetlb_entry		= queue_pages_hugetlb,
+	.hugetlb_entry		= queue_folios_hugetlb,
 	.pmd_entry		= queue_folios_pte_range,
 	.test_walk		= queue_pages_test_walk,
 };
-- 
2.38.1




* [PATCH mm-unstable v3 5/6] mm/mempolicy: Convert queue_pages_required() to queue_folio_required()
  2023-01-25 23:41 [PATCH mm-unstable v3 0/6] Convert various mempolicy.c functions to use folios Vishal Moola (Oracle)
                   ` (3 preceding siblings ...)
  2023-01-25 23:41 ` [PATCH mm-unstable v3 4/6] mm/mempolicy: Convert queue_pages_hugetlb() to queue_folios_hugetlb() Vishal Moola (Oracle)
@ 2023-01-25 23:41 ` Vishal Moola (Oracle)
  2023-01-25 23:41 ` [PATCH mm-unstable v3 6/6] mm/mempolicy: Convert migrate_page_add() to migrate_folio_add() Vishal Moola (Oracle)
  5 siblings, 0 replies; 7+ messages in thread
From: Vishal Moola (Oracle) @ 2023-01-25 23:41 UTC (permalink / raw)
  To: linux-mm; +Cc: akpm, linux-kernel, Vishal Moola (Oracle)

Replace queue_pages_required() with queue_folio_required().
queue_folio_required() does the same as queue_pages_required(), except
it takes in a folio instead of a page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 mm/mempolicy.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index ea8cac447e04..da87644430e3 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -427,15 +427,15 @@ struct queue_pages {
 };
 
 /*
- * Check if the page's nid is in qp->nmask.
+ * Check if the folio's nid is in qp->nmask.
  *
  * If MPOL_MF_INVERT is set in qp->flags, check if the nid is
  * in the invert of qp->nmask.
  */
-static inline bool queue_pages_required(struct page *page,
+static inline bool queue_folio_required(struct folio *folio,
 					struct queue_pages *qp)
 {
-	int nid = page_to_nid(page);
+	int nid = folio_nid(folio);
 	unsigned long flags = qp->flags;
 
 	return node_isset(nid, *qp->nmask) == !(flags & MPOL_MF_INVERT);
@@ -469,7 +469,7 @@ static int queue_folios_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
 		walk->action = ACTION_CONTINUE;
 		goto unlock;
 	}
-	if (!queue_pages_required(&folio->page, qp))
+	if (!queue_folio_required(folio, qp))
 		goto unlock;
 
 	flags = qp->flags;
@@ -530,7 +530,7 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
 		 */
 		if (folio_test_reserved(folio))
 			continue;
-		if (!queue_pages_required(&folio->page, qp))
+		if (!queue_folio_required(folio, qp))
 			continue;
 		if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
 			/* MPOL_MF_STRICT must be specified if we get here */
@@ -575,7 +575,7 @@ static int queue_folios_hugetlb(pte_t *pte, unsigned long hmask,
 	if (!pte_present(entry))
 		goto unlock;
 	folio = pfn_folio(pte_pfn(entry));
-	if (!queue_pages_required(&folio->page, qp))
+	if (!queue_folio_required(folio, qp))
 		goto unlock;
 
 	if (flags == MPOL_MF_STRICT) {
-- 
2.38.1




* [PATCH mm-unstable v3 6/6] mm/mempolicy: Convert migrate_page_add() to migrate_folio_add()
  2023-01-25 23:41 [PATCH mm-unstable v3 0/6] Convert various mempolicy.c functions to use folios Vishal Moola (Oracle)
                   ` (4 preceding siblings ...)
  2023-01-25 23:41 ` [PATCH mm-unstable v3 5/6] mm/mempolicy: Convert queue_pages_required() to queue_folio_required() Vishal Moola (Oracle)
@ 2023-01-25 23:41 ` Vishal Moola (Oracle)
  5 siblings, 0 replies; 7+ messages in thread
From: Vishal Moola (Oracle) @ 2023-01-25 23:41 UTC (permalink / raw)
  To: linux-mm; +Cc: akpm, linux-kernel, Vishal Moola (Oracle)

Replace migrate_page_add() with migrate_folio_add().
migrate_folio_add() does the same as migrate_page_add(), but takes in a
folio instead of a page. This removes a couple of calls to
compound_head().
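
To make the compound_head() point concrete, a rough before/after of the
call site in queue_folios_pmd() (condensed, not a literal excerpt):

	/* before: the caller already had a folio but passed a page, and
	 * migrate_page_add() immediately looked the head page back up:
	 *	struct page *head = compound_head(page);
	 */
	migrate_page_add(&folio->page, qp->pagelist, flags);

	/* after: the folio is passed straight through, no lookup needed */
	migrate_folio_add(folio, qp->pagelist, flags);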

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 mm/mempolicy.c | 39 ++++++++++++++++++++-------------------
 1 file changed, 20 insertions(+), 19 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index da87644430e3..9bb4600c4294 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -414,7 +414,7 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = {
 	},
 };
 
-static int migrate_page_add(struct page *page, struct list_head *pagelist,
+static int migrate_folio_add(struct folio *folio, struct list_head *foliolist,
 				unsigned long flags);
 
 struct queue_pages {
@@ -476,7 +476,7 @@ static int queue_folios_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
 	/* go to folio migration */
 	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
 		if (!vma_migratable(walk->vma) ||
-		    migrate_page_add(&folio->page, qp->pagelist, flags)) {
+		    migrate_folio_add(folio, qp->pagelist, flags)) {
 			ret = 1;
 			goto unlock;
 		}
@@ -544,7 +544,7 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
 			 * temporary off LRU pages in the range.  Still
 			 * need migrate other LRU pages.
 			 */
-			if (migrate_page_add(&folio->page, qp->pagelist, flags))
+			if (migrate_folio_add(folio, qp->pagelist, flags))
 				has_unmovable = true;
 		} else
 			break;
@@ -1029,27 +1029,28 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask,
 }
 
 #ifdef CONFIG_MIGRATION
-/*
- * page migration, thp tail pages can be passed.
- */
-static int migrate_page_add(struct page *page, struct list_head *pagelist,
+static int migrate_folio_add(struct folio *folio, struct list_head *foliolist,
 				unsigned long flags)
 {
-	struct page *head = compound_head(page);
 	/*
-	 * Avoid migrating a page that is shared with others.
+	 * We try to migrate only unshared folios. If it is shared it
+	 * is likely not worth migrating.
+	 *
+	 * To check if the folio is shared, ideally we want to make sure
+	 * every page is mapped to the same process. Doing that is very
+	 * expensive, so check the estimated mapcount of the folio instead.
 	 */
-	if ((flags & MPOL_MF_MOVE_ALL) || page_mapcount(head) == 1) {
-		if (!isolate_lru_page(head)) {
-			list_add_tail(&head->lru, pagelist);
-			mod_node_page_state(page_pgdat(head),
-				NR_ISOLATED_ANON + page_is_file_lru(head),
-				thp_nr_pages(head));
+	if ((flags & MPOL_MF_MOVE_ALL) || folio_estimated_mapcount(folio) == 1) {
+		if (!folio_isolate_lru(folio)) {
+			list_add_tail(&folio->lru, foliolist);
+			node_stat_mod_folio(folio,
+				NR_ISOLATED_ANON + folio_is_file_lru(folio),
+				folio_nr_pages(folio));
 		} else if (flags & MPOL_MF_STRICT) {
 			/*
-			 * Non-movable page may reach here.  And, there may be
-			 * temporary off LRU pages or non-LRU movable pages.
-			 * Treat them as unmovable pages since they can't be
+			 * Non-movable folio may reach here.  And, there may be
+			 * temporary off LRU folios or non-LRU movable folios.
+			 * Treat them as unmovable folios since they can't be
 			 * isolated, so they can't be moved at the moment.  It
 			 * should return -EIO for this case too.
 			 */
@@ -1241,7 +1242,7 @@ static struct page *new_page(struct page *page, unsigned long start)
 }
 #else
 
-static int migrate_page_add(struct page *page, struct list_head *pagelist,
+static int migrate_folio_add(struct folio *folio, struct list_head *foliolist,
 				unsigned long flags)
 {
 	return -EIO;
-- 
2.38.1




