* [PATCH v2 0/3] Convert deactivate_page() to deactivate_folio()
@ 2022-12-07 23:01 Vishal Moola (Oracle)
2022-12-07 23:01 ` [PATCH v2 1/3] madvise: Convert madvise_cold_or_pageout_pte_range() to use folios Vishal Moola (Oracle)
` (2 more replies)
0 siblings, 3 replies; 7+ messages in thread
From: Vishal Moola (Oracle) @ 2022-12-07 23:01 UTC (permalink / raw)
To: linux-mm; +Cc: damon, linux-kernel, akpm, sj, Vishal Moola (Oracle)
deactivate_page() has already been converted internally to use folios. This
patch series modifies the callers of deactivate_page() to use folios as
well, then converts deactivate_page() to folio_deactivate(), which takes a
folio directly.
---
v2:
Fix a compilation issue
Some minor rewording of comments/descriptions
Vishal Moola (Oracle) (3):
madvise: Convert madvise_cold_or_pageout_pte_range() to use folios
mm/damon: Convert damon_pa_mark_accessed_or_deactivate() to use folios
mm/swap: Convert deactivate_page() to folio_deactivate()
include/linux/swap.h | 2 +-
mm/damon/paddr.c | 11 ++++--
mm/madvise.c | 88 ++++++++++++++++++++++----------------------
mm/swap.c | 14 +++----
4 files changed, 59 insertions(+), 56 deletions(-)
--
2.38.1
^ permalink raw reply [flat|nested] 7+ messages in thread
* [PATCH v2 1/3] madvise: Convert madvise_cold_or_pageout_pte_range() to use folios
2022-12-07 23:01 [PATCH v2 0/3] Convert deactivate_page() to deactivate_folio() Vishal Moola (Oracle)
@ 2022-12-07 23:01 ` Vishal Moola (Oracle)
2022-12-07 23:09 ` Matthew Wilcox
2022-12-07 23:01 ` [PATCH v2 2/3] mm/damon: Convert damon_pa_mark_accessed_or_deactivate() " Vishal Moola (Oracle)
2022-12-07 23:01 ` [PATCH v2 3/3] mm/swap: Convert deactivate_page() to folio_deactivate() Vishal Moola (Oracle)
2 siblings, 1 reply; 7+ messages in thread
From: Vishal Moola (Oracle) @ 2022-12-07 23:01 UTC (permalink / raw)
To: linux-mm; +Cc: damon, linux-kernel, akpm, sj, Vishal Moola (Oracle)
This change removes a number of calls to compound_head() and saves 1319
bytes of kernel text.
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
mm/madvise.c | 88 +++++++++++++++++++++++++++-------------------------
1 file changed, 45 insertions(+), 43 deletions(-)
diff --git a/mm/madvise.c b/mm/madvise.c
index 2baa93ca2310..b323672c969d 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -332,8 +332,9 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
struct vm_area_struct *vma = walk->vma;
pte_t *orig_pte, *pte, ptent;
spinlock_t *ptl;
+ struct folio *folio = NULL;
struct page *page = NULL;
- LIST_HEAD(page_list);
+ LIST_HEAD(folio_list);
if (fatal_signal_pending(current))
return -EINTR;
@@ -358,23 +359,23 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
goto huge_unlock;
}
- page = pmd_page(orig_pmd);
+ folio = pfn_folio(pmd_pfn(orig_pmd));
- /* Do not interfere with other mappings of this page */
- if (page_mapcount(page) != 1)
+ /* Do not interfere with other mappings of this folio */
+ if (folio_mapcount(folio) != 1)
goto huge_unlock;
if (next - addr != HPAGE_PMD_SIZE) {
int err;
- get_page(page);
+ folio_get(folio);
spin_unlock(ptl);
- lock_page(page);
- err = split_huge_page(page);
- unlock_page(page);
- put_page(page);
+ folio_lock(folio);
+ err = split_folio(folio);
+ folio_unlock(folio);
+ folio_put(folio);
if (!err)
- goto regular_page;
+ goto regular_folio;
return 0;
}
@@ -386,25 +387,25 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
}
- ClearPageReferenced(page);
- test_and_clear_page_young(page);
+ folio_clear_referenced(folio);
+ folio_test_clear_young(folio);
if (pageout) {
- if (!isolate_lru_page(page)) {
- if (PageUnevictable(page))
- putback_lru_page(page);
+ if (!folio_isolate_lru(folio)) {
+ if (folio_test_unevictable(folio))
+ folio_putback_lru(folio);
else
- list_add(&page->lru, &page_list);
+ list_add(&folio->lru, &folio_list);
}
} else
- deactivate_page(page);
+ deactivate_page(&folio->page);
huge_unlock:
spin_unlock(ptl);
if (pageout)
- reclaim_pages(&page_list);
+ reclaim_pages(&folio_list);
return 0;
}
-regular_page:
+regular_folio:
if (pmd_trans_unstable(pmd))
return 0;
#endif
@@ -424,28 +425,29 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
page = vm_normal_page(vma, addr, ptent);
if (!page || is_zone_device_page(page))
continue;
+ folio = page_folio(page);
/*
* Creating a THP page is expensive so split it only if we
* are sure it's worth. Split it if we are only owner.
*/
- if (PageTransCompound(page)) {
- if (page_mapcount(page) != 1)
+ if (folio_test_large(folio)) {
+ if (folio_mapcount(folio) != 1)
break;
- get_page(page);
- if (!trylock_page(page)) {
- put_page(page);
+ folio_get(folio);
+ if (!folio_trylock(folio)) {
+ folio_put(folio);
break;
}
pte_unmap_unlock(orig_pte, ptl);
- if (split_huge_page(page)) {
- unlock_page(page);
- put_page(page);
+ if (split_folio(folio)) {
+ folio_unlock(folio);
+ folio_put(folio);
orig_pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
break;
}
- unlock_page(page);
- put_page(page);
+ folio_unlock(folio);
+ folio_put(folio);
orig_pte = pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
pte--;
addr -= PAGE_SIZE;
@@ -453,13 +455,13 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
}
/*
- * Do not interfere with other mappings of this page and
- * non-LRU page.
+ * Do not interfere with other mappings of this folio and
+ * non-LRU folio.
*/
- if (!PageLRU(page) || page_mapcount(page) != 1)
+ if (!folio_test_lru(folio))
continue;
- VM_BUG_ON_PAGE(PageTransCompound(page), page);
+ VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
if (pte_young(ptent)) {
ptent = ptep_get_and_clear_full(mm, addr, pte,
@@ -470,28 +472,28 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
}
/*
- * We are deactivating a page for accelerating reclaiming.
- * VM couldn't reclaim the page unless we clear PG_young.
+ * We are deactivating a folio for accelerating reclaiming.
+ * VM couldn't reclaim the folio unless we clear PG_young.
* As a side effect, it makes confuse idle-page tracking
* because they will miss recent referenced history.
*/
- ClearPageReferenced(page);
- test_and_clear_page_young(page);
+ folio_clear_referenced(folio);
+ folio_test_clear_young(folio);
if (pageout) {
- if (!isolate_lru_page(page)) {
- if (PageUnevictable(page))
- putback_lru_page(page);
+ if (!folio_isolate_lru(folio)) {
+ if (folio_test_unevictable(folio))
+ folio_putback_lru(folio);
else
- list_add(&page->lru, &page_list);
+ list_add(&folio->lru, &folio_list);
}
} else
- deactivate_page(page);
+ deactivate_page(&folio->page);
}
arch_leave_lazy_mmu_mode();
pte_unmap_unlock(orig_pte, ptl);
if (pageout)
- reclaim_pages(&page_list);
+ reclaim_pages(&folio_list);
cond_resched();
return 0;
--
2.38.1
* [PATCH v2 2/3] mm/damon: Convert damon_pa_mark_accessed_or_deactivate() to use folios
2022-12-07 23:01 [PATCH v2 0/3] Convert deactivate_page() to deactivate_folio() Vishal Moola (Oracle)
2022-12-07 23:01 ` [PATCH v2 1/3] madvise: Convert madvise_cold_or_pageout_pte_range() to use folios Vishal Moola (Oracle)
@ 2022-12-07 23:01 ` Vishal Moola (Oracle)
2022-12-07 23:01 ` [PATCH v2 3/3] mm/swap: Convert deactivate_page() to folio_deactivate() Vishal Moola (Oracle)
2 siblings, 0 replies; 7+ messages in thread
From: Vishal Moola (Oracle) @ 2022-12-07 23:01 UTC (permalink / raw)
To: linux-mm; +Cc: damon, linux-kernel, akpm, sj, Vishal Moola (Oracle)
This change replaces 2 calls to compound_head() with one. This is in
preparation for the conversion of deactivate_page() to
folio_deactivate().
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
mm/damon/paddr.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index e1a4315c4be6..73548bc82297 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -238,15 +238,18 @@ static inline unsigned long damon_pa_mark_accessed_or_deactivate(
for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
struct page *page = damon_get_page(PHYS_PFN(addr));
+ struct folio *folio;
if (!page)
continue;
+ folio = page_folio(page);
+
if (mark_accessed)
- mark_page_accessed(page);
+ folio_mark_accessed(folio);
else
- deactivate_page(page);
- put_page(page);
- applied++;
+ deactivate_page(&folio->page);
+ folio_put(folio);
+ applied += folio_nr_pages(folio);
}
return applied * PAGE_SIZE;
}
--
2.38.1
* [PATCH v2 3/3] mm/swap: Convert deactivate_page() to folio_deactivate()
2022-12-07 23:01 [PATCH v2 0/3] Convert deactivate_page() to deactivate_folio() Vishal Moola (Oracle)
2022-12-07 23:01 ` [PATCH v2 1/3] madvise: Convert madvise_cold_or_pageout_pte_range() to use folios Vishal Moola (Oracle)
2022-12-07 23:01 ` [PATCH v2 2/3] mm/damon: Convert damon_pa_mark_accessed_or_deactivate() " Vishal Moola (Oracle)
@ 2022-12-07 23:01 ` Vishal Moola (Oracle)
2022-12-07 23:11 ` Matthew Wilcox
2 siblings, 1 reply; 7+ messages in thread
From: Vishal Moola (Oracle) @ 2022-12-07 23:01 UTC (permalink / raw)
To: linux-mm; +Cc: damon, linux-kernel, akpm, sj, Vishal Moola (Oracle)
deactivate_page() has already been converted internally to use folios, so
this change converts it to take a folio argument instead of calling
page_folio(). It also renames the function to folio_deactivate() to be
more consistent with other folio functions.
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
include/linux/swap.h | 2 +-
mm/damon/paddr.c | 2 +-
mm/madvise.c | 4 ++--
mm/swap.c | 14 ++++++--------
4 files changed, 10 insertions(+), 12 deletions(-)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index a18cf4b7c724..94aa709ed933 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -409,7 +409,7 @@ extern void lru_add_drain(void);
extern void lru_add_drain_cpu(int cpu);
extern void lru_add_drain_cpu_zone(struct zone *zone);
extern void lru_add_drain_all(void);
-extern void deactivate_page(struct page *page);
+extern void folio_deactivate(struct folio *folio);
extern void mark_page_lazyfree(struct page *page);
extern void swap_setup(void);
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index 73548bc82297..6b36de1396a4 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -247,7 +247,7 @@ static inline unsigned long damon_pa_mark_accessed_or_deactivate(
if (mark_accessed)
folio_mark_accessed(folio);
else
- deactivate_page(&folio->page);
+ folio_deactivate(folio);
folio_put(folio);
applied += folio_nr_pages(folio);
}
diff --git a/mm/madvise.c b/mm/madvise.c
index b323672c969d..9667d052b52f 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -397,7 +397,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
list_add(&folio->lru, &folio_list);
}
} else
- deactivate_page(&folio->page);
+ folio_deactivate(folio);
huge_unlock:
spin_unlock(ptl);
if (pageout)
@@ -487,7 +487,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
list_add(&folio->lru, &folio_list);
}
} else
- deactivate_page(&folio->page);
+ folio_deactivate(folio);
}
arch_leave_lazy_mmu_mode();
diff --git a/mm/swap.c b/mm/swap.c
index 955930f41d20..9cc8215acdbb 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -720,17 +720,15 @@ void deactivate_file_folio(struct folio *folio)
}
/*
- * deactivate_page - deactivate a page
- * @page: page to deactivate
+ * folio_deactivate - deactivate a folio
+ * @folio: folio to deactivate
*
- * deactivate_page() moves @page to the inactive list if @page was on the active
- * list and was not an unevictable page. This is done to accelerate the reclaim
- * of @page.
+ * folio_deactivate() moves @folio to the inactive list if @folio was on the
+ * active list and was not unevictable. This is done to accelerate the
+ * reclaim of @folio.
*/
-void deactivate_page(struct page *page)
+void folio_deactivate(struct folio *folio)
{
- struct folio *folio = page_folio(page);
-
if (folio_test_lru(folio) && !folio_test_unevictable(folio) &&
(folio_test_active(folio) || lru_gen_enabled())) {
struct folio_batch *fbatch;
--
2.38.1
* Re: [PATCH v2 1/3] madvise: Convert madvise_cold_or_pageout_pte_range() to use folios
2022-12-07 23:01 ` [PATCH v2 1/3] madvise: Convert madvise_cold_or_pageout_pte_range() to use folios Vishal Moola (Oracle)
@ 2022-12-07 23:09 ` Matthew Wilcox
2022-12-07 23:45 ` Vishal Moola
0 siblings, 1 reply; 7+ messages in thread
From: Matthew Wilcox @ 2022-12-07 23:09 UTC (permalink / raw)
To: Vishal Moola (Oracle); +Cc: linux-mm, damon, linux-kernel, akpm, sj
On Wed, Dec 07, 2022 at 03:01:50PM -0800, Vishal Moola (Oracle) wrote:
> @@ -424,28 +425,29 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> page = vm_normal_page(vma, addr, ptent);
> if (!page || is_zone_device_page(page))
> continue;
> + folio = page_folio(page);
Maybe we should add a vm_normal_folio() first? That way we could get
rid of the 'struct page' in this function entirely.
> @@ -453,13 +455,13 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> }
>
> /*
> - * Do not interfere with other mappings of this page and
> - * non-LRU page.
> + * Do not interfere with other mappings of this folio and
> + * non-LRU folio.
> */
> - if (!PageLRU(page) || page_mapcount(page) != 1)
> + if (!folio_test_lru(folio))
Why has the test for folio_mapcount() disappeared?
* Re: [PATCH v2 3/3] mm/swap: Convert deactivate_page() to folio_deactivate()
2022-12-07 23:01 ` [PATCH v2 3/3] mm/swap: Convert deactivate_page() to folio_deactivate() Vishal Moola (Oracle)
@ 2022-12-07 23:11 ` Matthew Wilcox
0 siblings, 0 replies; 7+ messages in thread
From: Matthew Wilcox @ 2022-12-07 23:11 UTC (permalink / raw)
To: Vishal Moola (Oracle); +Cc: linux-mm, damon, linux-kernel, akpm, sj
On Wed, Dec 07, 2022 at 03:01:52PM -0800, Vishal Moola (Oracle) wrote:
> +++ b/include/linux/swap.h
> @@ -409,7 +409,7 @@ extern void lru_add_drain(void);
> extern void lru_add_drain_cpu(int cpu);
> extern void lru_add_drain_cpu_zone(struct zone *zone);
> extern void lru_add_drain_all(void);
> -extern void deactivate_page(struct page *page);
> +extern void folio_deactivate(struct folio *folio);
> extern void mark_page_lazyfree(struct page *page);
> extern void swap_setup(void);
Using 'extern' on function prototypes is now unfashionable; you can drop
it if you respin this patch.
* Re: [PATCH v2 1/3] madvise: Convert madvise_cold_or_pageout_pte_range() to use folios
2022-12-07 23:09 ` Matthew Wilcox
@ 2022-12-07 23:45 ` Vishal Moola
0 siblings, 0 replies; 7+ messages in thread
From: Vishal Moola @ 2022-12-07 23:45 UTC (permalink / raw)
To: Matthew Wilcox; +Cc: linux-mm, damon, linux-kernel, akpm, sj
On Wed, Dec 7, 2022 at 3:09 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Wed, Dec 07, 2022 at 03:01:50PM -0800, Vishal Moola (Oracle) wrote:
> > @@ -424,28 +425,29 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> > page = vm_normal_page(vma, addr, ptent);
> > if (!page || is_zone_device_page(page))
> > continue;
> > + folio = page_folio(page);
>
> Maybe we should add a vm_normal_folio() first? That way we could get
> rid of the 'struct page' in this function entirely.
Yeah, I'll do that. Many other callers will benefit from it later as well.
> > @@ -453,13 +455,13 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> > }
> >
> > /*
> > - * Do not interfere with other mappings of this page and
> > - * non-LRU page.
> > + * Do not interfere with other mappings of this folio and
> > + * non-LRU folio.
> > */
> > - if (!PageLRU(page) || page_mapcount(page) != 1)
> > + if (!folio_test_lru(folio))
>
> Why has the test for folio_mapcount() disappeared?
Oops, that page_mapcount() should have been replaced
with a folio_mapcount(). It appears I accidentally removed it.