* [PATCH 0/4] Remove some lruvec page accounting functions
@ 2023-12-22 20:28 Matthew Wilcox (Oracle)
  2023-12-22 20:28 ` [PATCH 1/4] mm: Remove inc/dec lruvec page state functions Matthew Wilcox (Oracle)
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-12-22 20:28 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Matthew Wilcox (Oracle), linux-mm, Johannes Weiner, Vlastimil Babka

Some functions are now unused; remove them.  The rest of the series makes
__mod_lruvec_page_state() unused, then removes it.

Based on next-20231222.  Compile tested only.
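
For readers skimming the series, the correspondence between the page-based
helpers being removed and their folio replacements is roughly as follows
(a hedged sketch using only names that appear in the diffs below; the
non-underscore variants follow the same pattern):

	/* Old page-based helpers (removed in patch 1): */
	__inc_lruvec_page_state(page, idx);		/* val = +1 */
	__dec_lruvec_page_state(page, idx);		/* val = -1 */

	/* Folio-based replacements, already in vmstat.h; note that the
	 * add/sub variants scale by folio_nr_pages(), not by 1: */
	__lruvec_stat_add_folio(folio, idx);
	__lruvec_stat_sub_folio(folio, idx);
	__lruvec_stat_mod_folio(folio, idx, val);	/* explicit val */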

Matthew Wilcox (Oracle) (4):
  mm: Remove inc/dec lruvec page state functions
  slab: Convert __kmalloc_large_node() and free_large_kmalloc() to use
    folios
  mm/khugepaged: Use a folio more in collapse_file()
  mm/memcontrol: Remove __mod_lruvec_page_state()

 include/linux/gfp.h    |  9 +++++++
 include/linux/vmstat.h | 60 +++++++++++++-----------------------------
 mm/khugepaged.c        | 16 +++++------
 mm/memcontrol.c        |  9 +++----
 mm/slub.c              | 15 +++++------
 5 files changed, 46 insertions(+), 63 deletions(-)

-- 
2.43.0


* [PATCH 1/4] mm: Remove inc/dec lruvec page state functions
  2023-12-22 20:28 [PATCH 0/4] Remove some lruvec page accounting functions Matthew Wilcox (Oracle)
@ 2023-12-22 20:28 ` Matthew Wilcox (Oracle)
  2023-12-22 20:28 ` [PATCH 2/4] slab: Convert __kmalloc_large_node() and free_large_kmalloc() to use folios Matthew Wilcox (Oracle)
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 11+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-12-22 20:28 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Matthew Wilcox (Oracle), linux-mm, Johannes Weiner, Vlastimil Babka

All callers of these have been converted to their folio equivalents.
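
As an illustrative before/after of such a caller conversion (not taken from
this series; for an order-0 page the two forms are equivalent, since
__lruvec_stat_add_folio() adds folio_nr_pages() rather than 1):

	/* before: page API, accounts exactly one page */
	__inc_lruvec_page_state(page, NR_SHMEM);

	/* after: resolve the folio once, account the whole folio */
	__lruvec_stat_add_folio(page_folio(page), NR_SHMEM);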

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/vmstat.h | 24 ------------------------
 1 file changed, 24 deletions(-)

diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index fed855bae6d8..147ae73e0ee7 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -597,18 +597,6 @@ static inline void mod_lruvec_page_state(struct page *page,
 
 #endif /* CONFIG_MEMCG */
 
-static inline void __inc_lruvec_page_state(struct page *page,
-					   enum node_stat_item idx)
-{
-	__mod_lruvec_page_state(page, idx, 1);
-}
-
-static inline void __dec_lruvec_page_state(struct page *page,
-					   enum node_stat_item idx)
-{
-	__mod_lruvec_page_state(page, idx, -1);
-}
-
 static inline void __lruvec_stat_mod_folio(struct folio *folio,
 					   enum node_stat_item idx, int val)
 {
@@ -627,18 +615,6 @@ static inline void __lruvec_stat_sub_folio(struct folio *folio,
 	__lruvec_stat_mod_folio(folio, idx, -folio_nr_pages(folio));
 }
 
-static inline void inc_lruvec_page_state(struct page *page,
-					 enum node_stat_item idx)
-{
-	mod_lruvec_page_state(page, idx, 1);
-}
-
-static inline void dec_lruvec_page_state(struct page *page,
-					 enum node_stat_item idx)
-{
-	mod_lruvec_page_state(page, idx, -1);
-}
-
 static inline void lruvec_stat_mod_folio(struct folio *folio,
 					 enum node_stat_item idx, int val)
 {
-- 
2.43.0


* [PATCH 2/4] slab: Convert __kmalloc_large_node() and free_large_kmalloc() to use folios
  2023-12-22 20:28 [PATCH 0/4] Remove some lruvec page accounting functions Matthew Wilcox (Oracle)
  2023-12-22 20:28 ` [PATCH 1/4] mm: Remove inc/dec lruvec page state functions Matthew Wilcox (Oracle)
@ 2023-12-22 20:28 ` Matthew Wilcox (Oracle)
  2023-12-22 23:11   ` Hyeonggon Yoo
  2023-12-27 22:01   ` Andrew Morton
  2023-12-22 20:28 ` [PATCH 3/4] mm/khugepaged: Use a folio more in collapse_file() Matthew Wilcox (Oracle)
  2023-12-22 20:28 ` [PATCH 4/4] mm/memcontrol: Remove __mod_lruvec_page_state() Matthew Wilcox (Oracle)
  3 siblings, 2 replies; 11+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-12-22 20:28 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Matthew Wilcox (Oracle), linux-mm, Johannes Weiner, Vlastimil Babka

Add folio_alloc_node() to replace alloc_pages_node() and then use
folio APIs throughout instead of converting back to pages.
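
One subtlety in the diff below: the explicit __GFP_COMP disappears because
a folio is by definition a compound page, and in this era of the allocator
__folio_alloc() ORs the flag in itself.  A hedged sketch of the equivalence,
assuming the folio_alloc_node() introduced here:

	/* before: the caller must request a compound page explicitly */
	flags |= __GFP_COMP;
	page = alloc_pages_node(node, flags, order);

	/* after: __GFP_COMP is implied by the folio allocation path */
	folio = folio_alloc_node(flags, order, node);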

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/gfp.h |  9 +++++++++
 mm/slub.c           | 15 +++++++--------
 2 files changed, 16 insertions(+), 8 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index de292a007138..d56c1d7b5c5a 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -247,6 +247,15 @@ struct folio *__folio_alloc_node(gfp_t gfp, unsigned int order, int nid)
 	return __folio_alloc(gfp, order, nid, NULL);
 }
 
+static inline
+struct folio *folio_alloc_node(gfp_t gfp, unsigned int order, int nid)
+{
+	if (nid == NUMA_NO_NODE)
+		nid = numa_mem_id();
+
+	return __folio_alloc_node(gfp, order, nid);
+}
+
 /*
  * Allocate pages, preferring the node given as nid. When nid == NUMA_NO_NODE,
  * prefer the current CPU's closest node. Otherwise node must be valid and
diff --git a/mm/slub.c b/mm/slub.c
index 35aa706dc318..261f01915d9b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3919,18 +3919,17 @@ EXPORT_SYMBOL(kmem_cache_alloc_node);
  */
 static void *__kmalloc_large_node(size_t size, gfp_t flags, int node)
 {
-	struct page *page;
+	struct folio *folio;
 	void *ptr = NULL;
 	unsigned int order = get_order(size);
 
 	if (unlikely(flags & GFP_SLAB_BUG_MASK))
 		flags = kmalloc_fix_flags(flags);
 
-	flags |= __GFP_COMP;
-	page = alloc_pages_node(node, flags, order);
-	if (page) {
-		ptr = page_address(page);
-		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+	folio = folio_alloc_node(flags, order, node);
+	if (folio) {
+		ptr = folio_address(folio);
+		lruvec_stat_mod_folio(folio, NR_SLAB_UNRECLAIMABLE_B,
 				      PAGE_SIZE << order);
 	}
 
@@ -4379,9 +4378,9 @@ static void free_large_kmalloc(struct folio *folio, void *object)
 	kasan_kfree_large(object);
 	kmsan_kfree_large(object);
 
-	mod_lruvec_page_state(folio_page(folio, 0), NR_SLAB_UNRECLAIMABLE_B,
+	lruvec_stat_mod_folio(folio, NR_SLAB_UNRECLAIMABLE_B,
 			      -(PAGE_SIZE << order));
-	__free_pages(folio_page(folio, 0), order);
+	folio_put(folio);
 }
 
 /**
-- 
2.43.0


* [PATCH 3/4] mm/khugepaged: Use a folio more in collapse_file()
  2023-12-22 20:28 [PATCH 0/4] Remove some lruvec page accounting functions Matthew Wilcox (Oracle)
  2023-12-22 20:28 ` [PATCH 1/4] mm: Remove inc/dec lruvec page state functions Matthew Wilcox (Oracle)
  2023-12-22 20:28 ` [PATCH 2/4] slab: Convert __kmalloc_large_node() and free_large_kmalloc() to use folios Matthew Wilcox (Oracle)
@ 2023-12-22 20:28 ` Matthew Wilcox (Oracle)
  2023-12-22 20:28 ` [PATCH 4/4] mm/memcontrol: Remove __mod_lruvec_page_state() Matthew Wilcox (Oracle)
  3 siblings, 0 replies; 11+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-12-22 20:28 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Matthew Wilcox (Oracle), linux-mm, Johannes Weiner, Vlastimil Babka

collapse_file() is not yet fully converted to the folio API, but this
removes a few uses of the old page-based APIs.
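
The conversion is largely mechanical because page_folio() resolves to the
head page, so the folio is just a retyped view of hpage (a sketch, assuming
hpage is the head page of the freshly allocated THP, as it is here):

	struct folio *folio = page_folio(hpage);
	/* folio == (struct folio *)compound_head(hpage) == (void *)hpage,
	 * so xas_store(&xas, folio) stores the same pointer value that
	 * xas_store(&xas, hpage) did. */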

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/khugepaged.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 13c6eadbeda3..b9b0742e4d9a 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2126,23 +2126,23 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 		xas_lock_irq(&xas);
 	}
 
-	nr = thp_nr_pages(hpage);
+	folio = page_folio(hpage);
+	nr = folio_nr_pages(folio);
 	if (is_shmem)
-		__mod_lruvec_page_state(hpage, NR_SHMEM_THPS, nr);
+		__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, nr);
 	else
-		__mod_lruvec_page_state(hpage, NR_FILE_THPS, nr);
+		__lruvec_stat_mod_folio(folio, NR_FILE_THPS, nr);
 
 	if (nr_none) {
-		__mod_lruvec_page_state(hpage, NR_FILE_PAGES, nr_none);
+		__lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr_none);
 		/* nr_none is always 0 for non-shmem. */
-		__mod_lruvec_page_state(hpage, NR_SHMEM, nr_none);
+		__lruvec_stat_mod_folio(folio, NR_SHMEM, nr_none);
 	}
 
 	/*
 	 * Mark hpage as uptodate before inserting it into the page cache so
 	 * that it isn't mistaken for a fallocated but unwritten page.
 	 */
-	folio = page_folio(hpage);
 	folio_mark_uptodate(folio);
 	folio_ref_add(folio, HPAGE_PMD_NR - 1);
 
@@ -2152,7 +2152,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 
 	/* Join all the small entries into a single multi-index entry. */
 	xas_set_order(&xas, start, HPAGE_PMD_ORDER);
-	xas_store(&xas, hpage);
+	xas_store(&xas, folio);
 	WARN_ON_ONCE(xas_error(&xas));
 	xas_unlock_irq(&xas);
 
@@ -2163,7 +2163,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	retract_page_tables(mapping, start);
 	if (cc && !cc->is_khugepaged)
 		result = SCAN_PTE_MAPPED_HUGEPAGE;
-	unlock_page(hpage);
+	folio_unlock(folio);
 
 	/*
 	 * The collapse has succeeded, so free the old pages.
-- 
2.43.0


* [PATCH 4/4] mm/memcontrol: Remove __mod_lruvec_page_state()
  2023-12-22 20:28 [PATCH 0/4] Remove some lruvec page accounting functions Matthew Wilcox (Oracle)
                   ` (2 preceding siblings ...)
  2023-12-22 20:28 ` [PATCH 3/4] mm/khugepaged: Use a folio more in collapse_file() Matthew Wilcox (Oracle)
@ 2023-12-22 20:28 ` Matthew Wilcox (Oracle)
  3 siblings, 0 replies; 11+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-12-22 20:28 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Matthew Wilcox (Oracle), linux-mm, Johannes Weiner, Vlastimil Babka

There are no more callers of __mod_lruvec_page_state(), so convert
the implementation to __lruvec_stat_mod_folio(), removing two calls to
compound_head() (one explicit, one hidden inside page_memcg()).
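
For context, here is roughly where the two compound_head() calls came from;
in the memcontrol.h of this era, page_memcg() is approximately
folio_memcg(page_folio(page)), and page_folio() is built on compound_head().
A hedged sketch, not the verbatim sources:

	/* old body (abridged): */
	struct page *head = compound_head(page);	/* call #1, explicit */
	memcg = page_memcg(head);	/* call #2, hidden in page_folio() */

	/* new body: the caller hands us the folio, so neither is needed */
	memcg = folio_memcg(folio);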

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/vmstat.h | 36 ++++++++++++++++++------------------
 mm/memcontrol.c        |  9 ++++-----
 2 files changed, 22 insertions(+), 23 deletions(-)

diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index 147ae73e0ee7..343906a98d6e 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -556,19 +556,25 @@ static inline void mod_lruvec_state(struct lruvec *lruvec,
 	local_irq_restore(flags);
 }
 
-void __mod_lruvec_page_state(struct page *page,
+void __lruvec_stat_mod_folio(struct folio *folio,
 			     enum node_stat_item idx, int val);
 
-static inline void mod_lruvec_page_state(struct page *page,
+static inline void lruvec_stat_mod_folio(struct folio *folio,
 					 enum node_stat_item idx, int val)
 {
 	unsigned long flags;
 
 	local_irq_save(flags);
-	__mod_lruvec_page_state(page, idx, val);
+	__lruvec_stat_mod_folio(folio, idx, val);
 	local_irq_restore(flags);
 }
 
+static inline void mod_lruvec_page_state(struct page *page,
+					 enum node_stat_item idx, int val)
+{
+	lruvec_stat_mod_folio(page_folio(page), idx, val);
+}
+
 #else
 
 static inline void __mod_lruvec_state(struct lruvec *lruvec,
@@ -583,10 +589,16 @@ static inline void mod_lruvec_state(struct lruvec *lruvec,
 	mod_node_page_state(lruvec_pgdat(lruvec), idx, val);
 }
 
-static inline void __mod_lruvec_page_state(struct page *page,
-					   enum node_stat_item idx, int val)
+static inline void __lruvec_stat_mod_folio(struct folio *folio,
+					 enum node_stat_item idx, int val)
 {
-	__mod_node_page_state(page_pgdat(page), idx, val);
+	__mod_node_page_state(folio_pgdat(folio), idx, val);
+}
+
+static inline void lruvec_stat_mod_folio(struct folio *folio,
+					 enum node_stat_item idx, int val)
+{
+	mod_node_page_state(folio_pgdat(folio), idx, val);
 }
 
 static inline void mod_lruvec_page_state(struct page *page,
@@ -597,12 +609,6 @@ static inline void mod_lruvec_page_state(struct page *page,
 
 #endif /* CONFIG_MEMCG */
 
-static inline void __lruvec_stat_mod_folio(struct folio *folio,
-					   enum node_stat_item idx, int val)
-{
-	__mod_lruvec_page_state(&folio->page, idx, val);
-}
-
 static inline void __lruvec_stat_add_folio(struct folio *folio,
 					   enum node_stat_item idx)
 {
@@ -615,12 +621,6 @@ static inline void __lruvec_stat_sub_folio(struct folio *folio,
 	__lruvec_stat_mod_folio(folio, idx, -folio_nr_pages(folio));
 }
 
-static inline void lruvec_stat_mod_folio(struct folio *folio,
-					 enum node_stat_item idx, int val)
-{
-	mod_lruvec_page_state(&folio->page, idx, val);
-}
-
 static inline void lruvec_stat_add_folio(struct folio *folio,
 					 enum node_stat_item idx)
 {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 36bb18d7b397..7a759554bec6 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -891,16 +891,15 @@ void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 		__mod_memcg_lruvec_state(lruvec, idx, val);
 }
 
-void __mod_lruvec_page_state(struct page *page, enum node_stat_item idx,
+void __lruvec_stat_mod_folio(struct folio *folio, enum node_stat_item idx,
 			     int val)
 {
-	struct page *head = compound_head(page); /* rmap on tail pages */
 	struct mem_cgroup *memcg;
-	pg_data_t *pgdat = page_pgdat(page);
+	pg_data_t *pgdat = folio_pgdat(folio);
 	struct lruvec *lruvec;
 
 	rcu_read_lock();
-	memcg = page_memcg(head);
+	memcg = folio_memcg(folio);
 	/* Untracked pages have no memcg, no lruvec. Update only the node */
 	if (!memcg) {
 		rcu_read_unlock();
@@ -912,7 +911,7 @@ void __mod_lruvec_page_state(struct page *page, enum node_stat_item idx,
 	__mod_lruvec_state(lruvec, idx, val);
 	rcu_read_unlock();
 }
-EXPORT_SYMBOL(__mod_lruvec_page_state);
+EXPORT_SYMBOL(__lruvec_stat_mod_folio);
 
 void __mod_lruvec_kmem_state(void *p, enum node_stat_item idx, int val)
 {
-- 
2.43.0


* Re: [PATCH 2/4] slab: Convert __kmalloc_large_node() and free_large_kmalloc() to use folios
  2023-12-22 20:28 ` [PATCH 2/4] slab: Convert __kmalloc_large_node() and free_large_kmalloc() to use folios Matthew Wilcox (Oracle)
@ 2023-12-22 23:11   ` Hyeonggon Yoo
  2023-12-22 23:13     ` Matthew Wilcox
  2023-12-23  6:09     ` Matthew Wilcox
  2023-12-27 22:01   ` Andrew Morton
  1 sibling, 2 replies; 11+ messages in thread
From: Hyeonggon Yoo @ 2023-12-22 23:11 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle)
  Cc: Andrew Morton, linux-mm, Johannes Weiner, Vlastimil Babka

On Sat, Dec 23, 2023 at 5:28 AM Matthew Wilcox (Oracle)
<willy@infradead.org> wrote:
>
> Add folio_alloc_node() to replace alloc_pages_node() and then use
> folio APIs throughout instead of converting back to pages.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
[...]
> diff --git a/mm/slub.c b/mm/slub.c
> index 35aa706dc318..261f01915d9b 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3919,18 +3919,17 @@ EXPORT_SYMBOL(kmem_cache_alloc_node);
>   */
>  static void *__kmalloc_large_node(size_t size, gfp_t flags, int node)
>  {
> -       struct page *page;
> +       struct folio *folio;
>         void *ptr = NULL;
>         unsigned int order = get_order(size);
>
>         if (unlikely(flags & GFP_SLAB_BUG_MASK))
>                 flags = kmalloc_fix_flags(flags);
>
> -       flags |= __GFP_COMP;
> -       page = alloc_pages_node(node, flags, order);
> -       if (page) {
> -               ptr = page_address(page);
> -               mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
> +       folio = folio_alloc_node(flags, order, node);

folio_alloc_node()
->__folio_alloc_node()
->__folio_alloc()
->page_rmappable_folio()
->folio_prep_large_rmappable()

I think it's not intentional to call this?

> +       if (folio) {
> +               ptr = folio_address(folio);
> +               lruvec_stat_mod_folio(folio, NR_SLAB_UNRECLAIMABLE_B,
>                                       PAGE_SIZE << order);
>         }

Thanks,
Hyeonggon


* Re: [PATCH 2/4] slab: Convert __kmalloc_large_node() and free_large_kmalloc() to use folios
  2023-12-22 23:11   ` Hyeonggon Yoo
@ 2023-12-22 23:13     ` Matthew Wilcox
  2023-12-23  6:09     ` Matthew Wilcox
  1 sibling, 0 replies; 11+ messages in thread
From: Matthew Wilcox @ 2023-12-22 23:13 UTC (permalink / raw)
  To: Hyeonggon Yoo; +Cc: Andrew Morton, linux-mm, Johannes Weiner, Vlastimil Babka

On Sat, Dec 23, 2023 at 08:11:11AM +0900, Hyeonggon Yoo wrote:
> > -       flags |= __GFP_COMP;
> > -       page = alloc_pages_node(node, flags, order);
> > -       if (page) {
> > -               ptr = page_address(page);
> > -               mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
> > +       folio = folio_alloc_node(flags, order, node);
> 
> folio_alloc_node()
> ->__folio_alloc_node()
> ->__folio_alloc()
> ->page_rmappable_folio()
> ->folio_prep_large_rmappable()
> 
> I think it's not intentional to call this?

Oh, hm, you're right.

I withdraw this patch.  I need to think about this a little more.


* Re: [PATCH 2/4] slab: Convert __kmalloc_large_node() and free_large_kmalloc() to use folios
  2023-12-22 23:11   ` Hyeonggon Yoo
  2023-12-22 23:13     ` Matthew Wilcox
@ 2023-12-23  6:09     ` Matthew Wilcox
  1 sibling, 0 replies; 11+ messages in thread
From: Matthew Wilcox @ 2023-12-23  6:09 UTC (permalink / raw)
  To: Hyeonggon Yoo; +Cc: Andrew Morton, linux-mm, Johannes Weiner, Vlastimil Babka

On Sat, Dec 23, 2023 at 08:11:11AM +0900, Hyeonggon Yoo wrote:
> > +       folio = folio_alloc_node(flags, order, node);
> 
> folio_alloc_node()
> ->__folio_alloc_node()
> ->__folio_alloc()
> ->page_rmappable_folio()
> ->folio_prep_large_rmappable()
> 
> I think it's not intentional to call this?

I've been thinking about this, and obviously I got bitten by two of
the meanings of folio (both "not a tail page" and "mmapable memory").
And that leads me to thinking about how this will look when we allocate
memdescs separately from pages.

I don't think we should try to keep memcg_data at the same offset in
struct slab and struct folio (once we're out of our current transitional
period).  So we need to stop casting from folio to slab and vice versa.
Which means that slab can't call __lruvec_stat_mod_folio().
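
(For readers unfamiliar with the overlay: the folio <-> slab casts only work
because struct slab and struct folio are both views of struct page, with key
fields pinned to matching offsets.  mm/slab.h enforces this with checks
along these lines; a hedged sketch, not the verbatim source:)

	#define SLAB_MATCH(pg, sl)					\
		static_assert(offsetof(struct page, pg) ==		\
			      offsetof(struct slab, sl))
	SLAB_MATCH(flags, __page_flags);
	SLAB_MATCH(compound_head, slab_cache);	/* bit 0 must stay clear */
	#ifdef CONFIG_MEMCG
	SLAB_MATCH(memcg_data, memcg_data);
	#endif
	/* Separately allocated memdescs would break this overlay, hence
	 * the plan to stop casting between the two. */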

I'll try and get something together to support this, both for the
current layout and once memdescs are separated from struct page.


* Re: [PATCH 2/4] slab: Convert __kmalloc_large_node() and free_large_kmalloc() to use folios
  2023-12-22 20:28 ` [PATCH 2/4] slab: Convert __kmalloc_large_node() and free_large_kmalloc() to use folios Matthew Wilcox (Oracle)
  2023-12-22 23:11   ` Hyeonggon Yoo
@ 2023-12-27 22:01   ` Andrew Morton
  2023-12-28  4:15     ` Hyeonggon Yoo
  2024-01-02 15:58     ` Vlastimil Babka
  1 sibling, 2 replies; 11+ messages in thread
From: Andrew Morton @ 2023-12-27 22:01 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle)
  Cc: linux-mm, Johannes Weiner, Vlastimil Babka, Stephen Rothwell

On Fri, 22 Dec 2023 20:28:05 +0000 "Matthew Wilcox (Oracle)" <willy@infradead.org> wrote:

> Add folio_alloc_node() to replace alloc_pages_node() and then use
> folio APIs throughout instead of converting back to pages.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  include/linux/gfp.h |  9 +++++++++
>  mm/slub.c           | 15 +++++++--------

This depends on changes which are in Vlastimil's tree and linux-next. 
So I reworked it to not do that, which means there will be a resolution
for Linus to do, which Stephen will tell us about.  It's simple, just
from code motion.

Maybe mm.git should include the slab tree; I haven't really considered
what the implications of that would be.


 include/linux/gfp.h |    9 +++++++++
 mm/slab_common.c    |   15 +++++++--------
 2 files changed, 16 insertions(+), 8 deletions(-)

--- a/include/linux/gfp.h~slab-convert-__kmalloc_large_node-and-free_large_kmalloc-to-use-folios
+++ a/include/linux/gfp.h
@@ -247,6 +247,15 @@ struct folio *__folio_alloc_node(gfp_t g
 	return __folio_alloc(gfp, order, nid, NULL);
 }
 
+static inline
+struct folio *folio_alloc_node(gfp_t gfp, unsigned int order, int nid)
+{
+	if (nid == NUMA_NO_NODE)
+		nid = numa_mem_id();
+
+	return __folio_alloc_node(gfp, order, nid);
+}
+
 /*
  * Allocate pages, preferring the node given as nid. When nid == NUMA_NO_NODE,
  * prefer the current CPU's closest node. Otherwise node must be valid and
--- a/mm/slab_common.c~slab-convert-__kmalloc_large_node-and-free_large_kmalloc-to-use-folios
+++ a/mm/slab_common.c
@@ -979,9 +979,9 @@ void free_large_kmalloc(struct folio *fo
 	kasan_kfree_large(object);
 	kmsan_kfree_large(object);
 
-	mod_lruvec_page_state(folio_page(folio, 0), NR_SLAB_UNRECLAIMABLE_B,
+	lruvec_stat_mod_folio(folio, NR_SLAB_UNRECLAIMABLE_B,
 			      -(PAGE_SIZE << order));
-	__free_pages(folio_page(folio, 0), order);
+	folio_put(folio);
 }
 
 static void *__kmalloc_large_node(size_t size, gfp_t flags, int node);
@@ -1137,18 +1137,17 @@ gfp_t kmalloc_fix_flags(gfp_t flags)
 
 static void *__kmalloc_large_node(size_t size, gfp_t flags, int node)
 {
-	struct page *page;
+	struct folio *folio;
 	void *ptr = NULL;
 	unsigned int order = get_order(size);
 
 	if (unlikely(flags & GFP_SLAB_BUG_MASK))
 		flags = kmalloc_fix_flags(flags);
 
-	flags |= __GFP_COMP;
-	page = alloc_pages_node(node, flags, order);
-	if (page) {
-		ptr = page_address(page);
-		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+	folio = folio_alloc_node(flags, order, node);
+	if (folio) {
+		ptr = folio_address(folio);
+		lruvec_stat_mod_folio(folio, NR_SLAB_UNRECLAIMABLE_B,
 				      PAGE_SIZE << order);
 	}
 
_


* Re: [PATCH 2/4] slab: Convert __kmalloc_large_node() and free_large_kmalloc() to use folios
  2023-12-27 22:01   ` Andrew Morton
@ 2023-12-28  4:15     ` Hyeonggon Yoo
  2024-01-02 15:58     ` Vlastimil Babka
  1 sibling, 0 replies; 11+ messages in thread
From: Hyeonggon Yoo @ 2023-12-28  4:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Matthew Wilcox (Oracle),
	linux-mm, Johannes Weiner, Vlastimil Babka, Stephen Rothwell

On Thu, Dec 28, 2023 at 8:04 AM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Fri, 22 Dec 2023 20:28:05 +0000 "Matthew Wilcox (Oracle)" <willy@infradead.org> wrote:
>
> > Add folio_alloc_node() to replace alloc_pages_node() and then use
> > folio APIs throughout instead of converting back to pages.
> >
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> > ---
> >  include/linux/gfp.h |  9 +++++++++
> >  mm/slub.c           | 15 +++++++--------
>
> This depends on changes which are in Vlastimil's tree and linux-next.
> So I reworked it to not do that, which means there will be a resolution
> for Linus to do, which Stephen will tell us about.  It's simple, just
> from code motion.

I think you're missing that Matthew withdrew this patch?


* Re: [PATCH 2/4] slab: Convert __kmalloc_large_node() and free_large_kmalloc() to use folios
  2023-12-27 22:01   ` Andrew Morton
  2023-12-28  4:15     ` Hyeonggon Yoo
@ 2024-01-02 15:58     ` Vlastimil Babka
  1 sibling, 0 replies; 11+ messages in thread
From: Vlastimil Babka @ 2024-01-02 15:58 UTC (permalink / raw)
  To: Andrew Morton, Matthew Wilcox (Oracle)
  Cc: linux-mm, Johannes Weiner, Stephen Rothwell

On 12/27/23 23:01, Andrew Morton wrote:
> On Fri, 22 Dec 2023 20:28:05 +0000 "Matthew Wilcox (Oracle)" <willy@infradead.org> wrote:
> 
>> Add folio_alloc_node() to replace alloc_pages_node() and then use
>> folio APIs throughout instead of converting back to pages.
>> 
>> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
>> ---
>>  include/linux/gfp.h |  9 +++++++++
>>  mm/slub.c           | 15 +++++++--------
> 
> This depends on changes which are in Vlastimil's tree and linux-next. 
> So I reworked it to not do that, which means there will be a resolution
> for Linus to do, which Stephen will tell us about.  It's simple, just
> from code motion.

Basing the series on a specific tree (mm in this case) rather than on the
whole of linux-next would be the way to go.  But Matthew also said in v2
that he didn't expect the series to be picked up for 6.8 at this point, so
it was fair to use linux-next for review; a final posting for 6.9 could
simply use 6.8-rc1 as its base.

> Maybe mm.git should include the slab tree; I haven't really considered
> what the implications of that would be.

I think this cycle is exceptional in that the SLAB removal is unusually
large; normally there should be few to no conflicts.  We can revisit if
this becomes more common?
