linux-mm.kvack.org archive mirror
* [PATCH] mm/codetag: swap tags when migrate pages
@ 2024-11-28 10:26 David Wang
  2024-11-28 19:53 ` Suren Baghdasaryan
  2024-11-29  2:52 ` [PATCH v2] " David Wang
  0 siblings, 2 replies; 8+ messages in thread
From: David Wang @ 2024-11-28 10:26 UTC (permalink / raw)
  To: surenb, kent.overstreet, yuzhao, akpm; +Cc: linux-mm, linux-kernel, David Wang

The initial solution for codetag adjustment during page migration
uses three kinds of low-level plumbing. Those steps can be replaced
by swapping tags, which needs only one kind of low-level plumbing
and makes the code clearer.

Signed-off-by: David Wang <00107082@163.com>
Link: https://lore.kernel.org/lkml/20241124074318.399027-1-00107082@163.com/
---
 include/linux/pgalloc_tag.h |  4 ++--
 lib/alloc_tag.c             | 35 +++++++++++++++++++----------------
 mm/migrate.c                |  2 +-
 3 files changed, 22 insertions(+), 19 deletions(-)

diff --git a/include/linux/pgalloc_tag.h b/include/linux/pgalloc_tag.h
index 0e43ab653ab6..3469c4b20105 100644
--- a/include/linux/pgalloc_tag.h
+++ b/include/linux/pgalloc_tag.h
@@ -231,7 +231,7 @@ static inline void pgalloc_tag_sub_pages(struct alloc_tag *tag, unsigned int nr)
 }
 
 void pgalloc_tag_split(struct folio *folio, int old_order, int new_order);
-void pgalloc_tag_copy(struct folio *new, struct folio *old);
+void pgalloc_tag_swap(struct folio *new, struct folio *old);
 
 void __init alloc_tag_sec_init(void);
 
@@ -245,7 +245,7 @@ static inline struct alloc_tag *pgalloc_tag_get(struct page *page) { return NULL
 static inline void pgalloc_tag_sub_pages(struct alloc_tag *tag, unsigned int nr) {}
 static inline void alloc_tag_sec_init(void) {}
 static inline void pgalloc_tag_split(struct folio *folio, int old_order, int new_order) {}
-static inline void pgalloc_tag_copy(struct folio *new, struct folio *old) {}
+static inline void pgalloc_tag_swap(struct folio *new, struct folio *old) {}
 
 #endif /* CONFIG_MEM_ALLOC_PROFILING */
 
diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c
index 2414a7ee7ec7..b45efde50c40 100644
--- a/lib/alloc_tag.c
+++ b/lib/alloc_tag.c
@@ -189,26 +189,29 @@ void pgalloc_tag_split(struct folio *folio, int old_order, int new_order)
 	}
 }
 
-void pgalloc_tag_copy(struct folio *new, struct folio *old)
+void pgalloc_tag_swap(struct folio *new, struct folio *old)
 {
-	union pgtag_ref_handle handle;
-	union codetag_ref ref;
-	struct alloc_tag *tag;
+	union pgtag_ref_handle handles[2];
+	union codetag_ref refs[2];
+	struct alloc_tag *tags[2];
+	struct folio *folios[2] = {new, old};
+	int i;
 
-	tag = pgalloc_tag_get(&old->page);
-	if (!tag)
-		return;
+	for (i = 0; i < 2; i++) {
+		tags[i] = pgalloc_tag_get(&folios[i]->page);
+		if (!tags[i])
+			return;
+		if (!get_page_tag_ref(&folios[i]->page, &refs[i], &handles[i]))
+			return;
+	}
 
-	if (!get_page_tag_ref(&new->page, &ref, &handle))
-		return;
+	swap(tags[0], tags[1]);
 
-	/* Clear the old ref to the original allocation tag. */
-	clear_page_tag_ref(&old->page);
-	/* Decrement the counters of the tag on get_new_folio. */
-	alloc_tag_sub(&ref, folio_size(new));
-	__alloc_tag_ref_set(&ref, tag);
-	update_page_tag_ref(handle, &ref);
-	put_page_tag_ref(handle);
+	for (i = 0; i < 2; i++) {
+		__alloc_tag_ref_set(&refs[i], tags[i]);
+		update_page_tag_ref(handles[i], &refs[i]);
+		put_page_tag_ref(handles[i]);
+	}
 }
 
 static void shutdown_mem_profiling(bool remove_file)
diff --git a/mm/migrate.c b/mm/migrate.c
index 2ce6b4b814df..cc68583c86f9 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -745,7 +745,7 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
 		folio_set_readahead(newfolio);
 
 	folio_copy_owner(newfolio, folio);
-	pgalloc_tag_copy(newfolio, folio);
+	pgalloc_tag_swap(newfolio, folio);
 
 	mem_cgroup_migrate(folio, newfolio);
 }
-- 
2.39.2




* Re: [PATCH] mm/codetag: swap tags when migrate pages
  2024-11-28 10:26 [PATCH] mm/codetag: swap tags when migrate pages David Wang
@ 2024-11-28 19:53 ` Suren Baghdasaryan
  2024-11-28 19:54   ` Suren Baghdasaryan
  2024-11-29  1:13   ` David Wang
  2024-11-29  2:52 ` [PATCH v2] " David Wang
  1 sibling, 2 replies; 8+ messages in thread
From: Suren Baghdasaryan @ 2024-11-28 19:53 UTC (permalink / raw)
  To: David Wang; +Cc: kent.overstreet, yuzhao, akpm, linux-mm, linux-kernel

On Thu, Nov 28, 2024 at 2:26 AM David Wang <00107082@163.com> wrote:
>
> The initial solution for codetag adjustment during page migration
> uses three kinds of low-level plumbing. Those steps can be replaced
> by swapping tags, which needs only one kind of low-level plumbing
> and makes the code clearer.

This description does not explain the real issue. I would suggest
something like:

Current solution to adjust codetag references during page migration is
done in 3 steps:
1. sets the codetag reference of the old page as empty (not pointing
to any codetag);
2. subtracts counters of the new page to compensate for its own allocation;
3. sets codetag reference of the new page to point to the codetag of
the old page.
This does not work if CONFIG_MEM_ALLOC_PROFILING_DEBUG=n because
set_codetag_empty() becomes NOOP. Instead, let's simply swap codetag
references so that the new page is referencing the old codetag and the
old page is referencing the new codetag. This way accounting stays
valid and the logic makes more sense.

Fixes: e0a955bf7f61 ("mm/codetag: add pgalloc_tag_copy()")
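
For context on why step 1 silently disappears: with CONFIG_MEM_ALLOC_PROFILING_DEBUG=n
the helper is compiled out. A simplified sketch (paraphrased from
include/linux/alloc_tag.h of this period, not quoted verbatim, so details may
differ slightly):

#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
#define CODETAG_EMPTY	((void *)1)

static inline void set_codetag_empty(union codetag_ref *ref)
{
	/* Mark the ref as deliberately owner-less for the debug sanity checks. */
	if (ref)
		ref->ct = CODETAG_EMPTY;
}
#else
/* Debugging off: the helper is a NOOP, so the old page keeps its old ref. */
static inline void set_codetag_empty(union codetag_ref *ref) {}
#endif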

>
> Signed-off-by: David Wang <00107082@163.com>
> Link: https://lore.kernel.org/lkml/20241124074318.399027-1-00107082@163.com/
 This above Link: seems unusual. Maybe uses Closes instead like this:

Closed: https://lore.kernel.org/lkml/20241124074318.399027-1-00107082@163.com/

> ---
>  include/linux/pgalloc_tag.h |  4 ++--
>  lib/alloc_tag.c             | 35 +++++++++++++++++++----------------
>  mm/migrate.c                |  2 +-
>  3 files changed, 22 insertions(+), 19 deletions(-)
>
> diff --git a/include/linux/pgalloc_tag.h b/include/linux/pgalloc_tag.h
> index 0e43ab653ab6..3469c4b20105 100644
> --- a/include/linux/pgalloc_tag.h
> +++ b/include/linux/pgalloc_tag.h
> @@ -231,7 +231,7 @@ static inline void pgalloc_tag_sub_pages(struct alloc_tag *tag, unsigned int nr)
>  }
>
>  void pgalloc_tag_split(struct folio *folio, int old_order, int new_order);
> -void pgalloc_tag_copy(struct folio *new, struct folio *old);
> +void pgalloc_tag_swap(struct folio *new, struct folio *old);
>
>  void __init alloc_tag_sec_init(void);
>
> @@ -245,7 +245,7 @@ static inline struct alloc_tag *pgalloc_tag_get(struct page *page) { return NULL
>  static inline void pgalloc_tag_sub_pages(struct alloc_tag *tag, unsigned int nr) {}
>  static inline void alloc_tag_sec_init(void) {}
>  static inline void pgalloc_tag_split(struct folio *folio, int old_order, int new_order) {}
> -static inline void pgalloc_tag_copy(struct folio *new, struct folio *old) {}
> +static inline void pgalloc_tag_swap(struct folio *new, struct folio *old) {}
>
>  #endif /* CONFIG_MEM_ALLOC_PROFILING */
>
> diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c
> index 2414a7ee7ec7..b45efde50c40 100644
> --- a/lib/alloc_tag.c
> +++ b/lib/alloc_tag.c
> @@ -189,26 +189,29 @@ void pgalloc_tag_split(struct folio *folio, int old_order, int new_order)
>         }
>  }
>
> -void pgalloc_tag_copy(struct folio *new, struct folio *old)
> +void pgalloc_tag_swap(struct folio *new, struct folio *old)
>  {
> -       union pgtag_ref_handle handle;
> -       union codetag_ref ref;
> -       struct alloc_tag *tag;
> +       union pgtag_ref_handle handles[2];
> +       union codetag_ref refs[2];
> +       struct alloc_tag *tags[2];
> +       struct folio *folios[2] = {new, old};
> +       int i;
>
> -       tag = pgalloc_tag_get(&old->page);
> -       if (!tag)
> -               return;
> +       for (i = 0; i < 2; i++) {
> +               tags[i] = pgalloc_tag_get(&folios[i]->page);
> +               if (!tags[i])
> +                       return;
> +               if (!get_page_tag_ref(&folios[i]->page, &refs[i], &handles[i]))
> +                       return;

If any of the above getters fail on the second cycle, you will miss
calling put_page_tag_ref(handles[0]) and releasing the page_tag_ref
you obtained on the first cycle. It might be cleaner to drop the use
of arrays and use new/old.
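
In other words, something along these lines (only a sketch of the acquire/release
ordering being suggested; it is essentially what the v2 patch later in this
thread ends up doing):

	if (!get_page_tag_ref(&old->page, &ref_old, &handle_old))
		return;
	if (!get_page_tag_ref(&new->page, &ref_new, &handle_new)) {
		/* Drop the first ref so a failed second getter does not leak it. */
		put_page_tag_ref(handle_old);
		return;
	}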

> +       }
>
> -       if (!get_page_tag_ref(&new->page, &ref, &handle))
> -               return;
> +       swap(tags[0], tags[1]);
>
> -       /* Clear the old ref to the original allocation tag. */
> -       clear_page_tag_ref(&old->page);
> -       /* Decrement the counters of the tag on get_new_folio. */
> -       alloc_tag_sub(&ref, folio_size(new));
> -       __alloc_tag_ref_set(&ref, tag);
> -       update_page_tag_ref(handle, &ref);
> -       put_page_tag_ref(handle);
> +       for (i = 0; i < 2; i++) {
> +               __alloc_tag_ref_set(&refs[i], tags[i]);
> +               update_page_tag_ref(handles[i], &refs[i]);
> +               put_page_tag_ref(handles[i]);
> +       }
>  }
>
>  static void shutdown_mem_profiling(bool remove_file)
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 2ce6b4b814df..cc68583c86f9 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -745,7 +745,7 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
>                 folio_set_readahead(newfolio);
>
>         folio_copy_owner(newfolio, folio);
> -       pgalloc_tag_copy(newfolio, folio);
> +       pgalloc_tag_swap(newfolio, folio);
>
>         mem_cgroup_migrate(folio, newfolio);
>  }
> --
> 2.39.2
>



* Re: [PATCH] mm/codetag: swap tags when migrate pages
  2024-11-28 19:53 ` Suren Baghdasaryan
@ 2024-11-28 19:54   ` Suren Baghdasaryan
  2024-11-29  1:13   ` David Wang
  1 sibling, 0 replies; 8+ messages in thread
From: Suren Baghdasaryan @ 2024-11-28 19:54 UTC (permalink / raw)
  To: David Wang; +Cc: kent.overstreet, yuzhao, akpm, linux-mm, linux-kernel

On Thu, Nov 28, 2024 at 11:53 AM Suren Baghdasaryan <surenb@google.com> wrote:
>
> On Thu, Nov 28, 2024 at 2:26 AM David Wang <00107082@163.com> wrote:
> >
> > The initial solution for codetag adjustment during page migration
> > uses three kinds of low-level plumbing. Those steps can be replaced
> > by swapping tags, which needs only one kind of low-level plumbing
> > and makes the code clearer.
>
> This description does not explain the real issue. I would suggest
> something like:
>
> Current solution to adjust codetag references during page migration is
> done in 3 steps:
> 1. sets the codetag reference of the old page as empty (not pointing
> to any codetag);
> 2. subtracts counters of the new page to compensate for its own allocation;
> 3. sets codetag reference of the new page to point to the codetag of
> the old page.
> This does not work if CONFIG_MEM_ALLOC_PROFILING_DEBUG=n because
> set_codetag_empty() becomes NOOP. Instead, let's simply swap codetag
> references so that the new page is referencing the old codetag and the
> old page is referencing the new codetag. This way accounting stays
> valid and the logic makes more sense.
>
> Fixes: e0a955bf7f61 ("mm/codetag: add pgalloc_tag_copy()")
>
> >
> > Signed-off-by: David Wang <00107082@163.com>
> > Link: https://lore.kernel.org/lkml/20241124074318.399027-1-00107082@163.com/
>  This above Link: seems unusual. Maybe uses Closes instead like this:
>
> Closed: https://lore.kernel.org/lkml/20241124074318.399027-1-00107082@163.com/

Sorry, fat fingered. Should have been:

This above Link: seems unusual. Maybe use Closes instead like this:
Closes: https://lore.kernel.org/lkml/20241124074318.399027-1-00107082@163.com/

>
> > ---
> >  include/linux/pgalloc_tag.h |  4 ++--
> >  lib/alloc_tag.c             | 35 +++++++++++++++++++----------------
> >  mm/migrate.c                |  2 +-
> >  3 files changed, 22 insertions(+), 19 deletions(-)
> >
> > diff --git a/include/linux/pgalloc_tag.h b/include/linux/pgalloc_tag.h
> > index 0e43ab653ab6..3469c4b20105 100644
> > --- a/include/linux/pgalloc_tag.h
> > +++ b/include/linux/pgalloc_tag.h
> > @@ -231,7 +231,7 @@ static inline void pgalloc_tag_sub_pages(struct alloc_tag *tag, unsigned int nr)
> >  }
> >
> >  void pgalloc_tag_split(struct folio *folio, int old_order, int new_order);
> > -void pgalloc_tag_copy(struct folio *new, struct folio *old);
> > +void pgalloc_tag_swap(struct folio *new, struct folio *old);
> >
> >  void __init alloc_tag_sec_init(void);
> >
> > @@ -245,7 +245,7 @@ static inline struct alloc_tag *pgalloc_tag_get(struct page *page) { return NULL
> >  static inline void pgalloc_tag_sub_pages(struct alloc_tag *tag, unsigned int nr) {}
> >  static inline void alloc_tag_sec_init(void) {}
> >  static inline void pgalloc_tag_split(struct folio *folio, int old_order, int new_order) {}
> > -static inline void pgalloc_tag_copy(struct folio *new, struct folio *old) {}
> > +static inline void pgalloc_tag_swap(struct folio *new, struct folio *old) {}
> >
> >  #endif /* CONFIG_MEM_ALLOC_PROFILING */
> >
> > diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c
> > index 2414a7ee7ec7..b45efde50c40 100644
> > --- a/lib/alloc_tag.c
> > +++ b/lib/alloc_tag.c
> > @@ -189,26 +189,29 @@ void pgalloc_tag_split(struct folio *folio, int old_order, int new_order)
> >         }
> >  }
> >
> > -void pgalloc_tag_copy(struct folio *new, struct folio *old)
> > +void pgalloc_tag_swap(struct folio *new, struct folio *old)
> >  {
> > -       union pgtag_ref_handle handle;
> > -       union codetag_ref ref;
> > -       struct alloc_tag *tag;
> > +       union pgtag_ref_handle handles[2];
> > +       union codetag_ref refs[2];
> > +       struct alloc_tag *tags[2];
> > +       struct folio *folios[2] = {new, old};
> > +       int i;
> >
> > -       tag = pgalloc_tag_get(&old->page);
> > -       if (!tag)
> > -               return;
> > +       for (i = 0; i < 2; i++) {
> > +               tags[i] = pgalloc_tag_get(&folios[i]->page);
> > +               if (!tags[i])
> > +                       return;
> > +               if (!get_page_tag_ref(&folios[i]->page, &refs[i], &handles[i]))
> > +                       return;
>
> If any of the above getters fail on the second cycle, you will miss
> calling put_page_tag_ref(handles[0]) and releasing the page_tag_ref
> you obtained on the first cycle. It might be cleaner to drop the use
> of arrays and use new/old.
>
> > +       }
> >
> > -       if (!get_page_tag_ref(&new->page, &ref, &handle))
> > -               return;
> > +       swap(tags[0], tags[1]);
> >
> > -       /* Clear the old ref to the original allocation tag. */
> > -       clear_page_tag_ref(&old->page);
> > -       /* Decrement the counters of the tag on get_new_folio. */
> > -       alloc_tag_sub(&ref, folio_size(new));
> > -       __alloc_tag_ref_set(&ref, tag);
> > -       update_page_tag_ref(handle, &ref);
> > -       put_page_tag_ref(handle);
> > +       for (i = 0; i < 2; i++) {
> > +               __alloc_tag_ref_set(&refs[i], tags[i]);
> > +               update_page_tag_ref(handles[i], &refs[i]);
> > +               put_page_tag_ref(handles[i]);
> > +       }
> >  }
> >
> >  static void shutdown_mem_profiling(bool remove_file)
> > diff --git a/mm/migrate.c b/mm/migrate.c
> > index 2ce6b4b814df..cc68583c86f9 100644
> > --- a/mm/migrate.c
> > +++ b/mm/migrate.c
> > @@ -745,7 +745,7 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
> >                 folio_set_readahead(newfolio);
> >
> >         folio_copy_owner(newfolio, folio);
> > -       pgalloc_tag_copy(newfolio, folio);
> > +       pgalloc_tag_swap(newfolio, folio);
> >
> >         mem_cgroup_migrate(folio, newfolio);
> >  }
> > --
> > 2.39.2
> >



* Re: [PATCH] mm/codetag: swap tags when migrate pages
  2024-11-28 19:53 ` Suren Baghdasaryan
  2024-11-28 19:54   ` Suren Baghdasaryan
@ 2024-11-29  1:13   ` David Wang
  1 sibling, 0 replies; 8+ messages in thread
From: David Wang @ 2024-11-29  1:13 UTC (permalink / raw)
  To: Suren Baghdasaryan; +Cc: kent.overstreet, yuzhao, akpm, linux-mm, linux-kernel

Hi,

At 2024-11-29 03:53:28, "Suren Baghdasaryan" <surenb@google.com> wrote:
>On Thu, Nov 28, 2024 at 2:26 AM David Wang <00107082@163.com> wrote:
>>
>> The initial solution for codetag adjustment during page migration
>> uses three kinds of low-level plumbing. Those steps can be replaced
>> by swapping tags, which needs only one kind of low-level plumbing
>> and makes the code clearer.
>
>This description does not explain the real issue. I would suggest
>something like:
>
>Current solution to adjust codetag references during page migration is
>done in 3 steps:
>1. sets the codetag reference of the old page as empty (not pointing
>to any codetag);
>2. subtracts counters of the new page to compensate for its own allocation;
>3. sets codetag reference of the new page to point to the codetag of
>the old page.
>This does not work if CONFIG_MEM_ALLOC_PROFILING_DEBUG=n because
>set_codetag_empty() becomes NOOP. Instead, let's simply swap codetag
>references so that the new page is referencing the old codetag and the
>old page is referencing the new codetag. This way accounting stays
>valid and the logic makes more sense.
>
>Fixes: e0a955bf7f61 ("mm/codetag: add pgalloc_tag_copy()")
>
>>
>> Signed-off-by: David Wang <00107082@163.com>
>> Link: https://lore.kernel.org/lkml/20241124074318.399027-1-00107082@163.com/
> This above Link: seems unusual. Maybe uses Closes instead like this:
>
>Closed: https://lore.kernel.org/lkml/20241124074318.399027-1-00107082@163.com/

>

Copy that~!
 

>> ---
>>  include/linux/pgalloc_tag.h |  4 ++--
>>  lib/alloc_tag.c             | 35 +++++++++++++++++++----------------
>>  mm/migrate.c                |  2 +-
>>  3 files changed, 22 insertions(+), 19 deletions(-)
>>
>> diff --git a/include/linux/pgalloc_tag.h b/include/linux/pgalloc_tag.h
>> index 0e43ab653ab6..3469c4b20105 100644
>> --- a/include/linux/pgalloc_tag.h
>> +++ b/include/linux/pgalloc_tag.h
>> @@ -231,7 +231,7 @@ static inline void pgalloc_tag_sub_pages(struct alloc_tag *tag, unsigned int nr)
>>  }
>>
>>  void pgalloc_tag_split(struct folio *folio, int old_order, int new_order);
>> -void pgalloc_tag_copy(struct folio *new, struct folio *old);
>> +void pgalloc_tag_swap(struct folio *new, struct folio *old);
>>
>>  void __init alloc_tag_sec_init(void);
>>
>> @@ -245,7 +245,7 @@ static inline struct alloc_tag *pgalloc_tag_get(struct page *page) { return NULL
>>  static inline void pgalloc_tag_sub_pages(struct alloc_tag *tag, unsigned int nr) {}
>>  static inline void alloc_tag_sec_init(void) {}
>>  static inline void pgalloc_tag_split(struct folio *folio, int old_order, int new_order) {}
>> -static inline void pgalloc_tag_copy(struct folio *new, struct folio *old) {}
>> +static inline void pgalloc_tag_swap(struct folio *new, struct folio *old) {}
>>
>>  #endif /* CONFIG_MEM_ALLOC_PROFILING */
>>
>> diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c
>> index 2414a7ee7ec7..b45efde50c40 100644
>> --- a/lib/alloc_tag.c
>> +++ b/lib/alloc_tag.c
>> @@ -189,26 +189,29 @@ void pgalloc_tag_split(struct folio *folio, int old_order, int new_order)
>>         }
>>  }
>>
>> -void pgalloc_tag_copy(struct folio *new, struct folio *old)
>> +void pgalloc_tag_swap(struct folio *new, struct folio *old)
>>  {
>> -       union pgtag_ref_handle handle;
>> -       union codetag_ref ref;
>> -       struct alloc_tag *tag;
>> +       union pgtag_ref_handle handles[2];
>> +       union codetag_ref refs[2];
>> +       struct alloc_tag *tags[2];
>> +       struct folio *folios[2] = {new, old};
>> +       int i;
>>
>> -       tag = pgalloc_tag_get(&old->page);
>> -       if (!tag)
>> -               return;
>> +       for (i = 0; i < 2; i++) {
>> +               tags[i] = pgalloc_tag_get(&folios[i]->page);
>> +               if (!tags[i])
>> +                       return;
>> +               if (!get_page_tag_ref(&folios[i]->page, &refs[i], &handles[i]))
>> +                       return;
>
>If any of the above getters fail on the second cycle, you will miss
>calling put_page_tag_ref(handles[0]) and releasing the page_tag_ref
>you obtained on the first cycle. It might be cleaner to drop the use
>of arrays and use new/old.


Thanks for catching this. I will make the change.



>
>> +       }
>>
>> -       if (!get_page_tag_ref(&new->page, &ref, &handle))
>> -               return;
>> +       swap(tags[0], tags[1]);
>>
>> -       /* Clear the old ref to the original allocation tag. */
>> -       clear_page_tag_ref(&old->page);
>> -       /* Decrement the counters of the tag on get_new_folio. */
>> -       alloc_tag_sub(&ref, folio_size(new));
>> -       __alloc_tag_ref_set(&ref, tag);
>> -       update_page_tag_ref(handle, &ref);
>> -       put_page_tag_ref(handle);
>> +       for (i = 0; i < 2; i++) {
>> +               __alloc_tag_ref_set(&refs[i], tags[i]);
>> +               update_page_tag_ref(handles[i], &refs[i]);
>> +               put_page_tag_ref(handles[i]);
>> +       }
>>  }
>>
>>  static void shutdown_mem_profiling(bool remove_file)
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 2ce6b4b814df..cc68583c86f9 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -745,7 +745,7 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
>>                 folio_set_readahead(newfolio);
>>
>>         folio_copy_owner(newfolio, folio);
>> -       pgalloc_tag_copy(newfolio, folio);
>> +       pgalloc_tag_swap(newfolio, folio);
>>
>>         mem_cgroup_migrate(folio, newfolio);
>>  }
>> --
>> 2.39.2
>>


* [PATCH v2] mm/codetag: swap tags when migrate pages
  2024-11-28 10:26 [PATCH] mm/codetag: swap tags when migrate pages David Wang
  2024-11-28 19:53 ` Suren Baghdasaryan
@ 2024-11-29  2:52 ` David Wang
  2024-11-29 22:59   ` Suren Baghdasaryan
  2024-12-02 22:19   ` Yu Zhao
  1 sibling, 2 replies; 8+ messages in thread
From: David Wang @ 2024-11-29  2:52 UTC (permalink / raw)
  To: surenb, yuzhao, kent.overstreet, akpm; +Cc: linux-mm, linux-kernel, David Wang

Current solution to adjust codetag references during page migration is
done in 3 steps:
1. sets the codetag reference of the old page as empty (not pointing
to any codetag);
2. subtracts counters of the new page to compensate for its own allocation;
3. sets codetag reference of the new page to point to the codetag of
the old page.
This does not work if CONFIG_MEM_ALLOC_PROFILING_DEBUG=n because
set_codetag_empty() becomes NOOP. Instead, let's simply swap codetag
references so that the new page is referencing the old codetag and the
old page is referencing the new codetag. This way accounting stays
valid and the logic makes more sense.

Fixes: e0a955bf7f61 ("mm/codetag: add pgalloc_tag_copy()")
Signed-off-by: David Wang <00107082@163.com>
Closes: https://lore.kernel.org/lkml/20241124074318.399027-1-00107082@163.com/
Acked-by: Suren Baghdasaryan <surenb@google.com>
Suggested-by: Suren Baghdasaryan <surenb@google.com>
---
 include/linux/pgalloc_tag.h |  4 ++--
 lib/alloc_tag.c             | 36 ++++++++++++++++++++++--------------
 mm/migrate.c                |  2 +-
 3 files changed, 25 insertions(+), 17 deletions(-)

diff --git a/include/linux/pgalloc_tag.h b/include/linux/pgalloc_tag.h
index 0e43ab653ab6..3469c4b20105 100644
--- a/include/linux/pgalloc_tag.h
+++ b/include/linux/pgalloc_tag.h
@@ -231,7 +231,7 @@ static inline void pgalloc_tag_sub_pages(struct alloc_tag *tag, unsigned int nr)
 }
 
 void pgalloc_tag_split(struct folio *folio, int old_order, int new_order);
-void pgalloc_tag_copy(struct folio *new, struct folio *old);
+void pgalloc_tag_swap(struct folio *new, struct folio *old);
 
 void __init alloc_tag_sec_init(void);
 
@@ -245,7 +245,7 @@ static inline struct alloc_tag *pgalloc_tag_get(struct page *page) { return NULL
 static inline void pgalloc_tag_sub_pages(struct alloc_tag *tag, unsigned int nr) {}
 static inline void alloc_tag_sec_init(void) {}
 static inline void pgalloc_tag_split(struct folio *folio, int old_order, int new_order) {}
-static inline void pgalloc_tag_copy(struct folio *new, struct folio *old) {}
+static inline void pgalloc_tag_swap(struct folio *new, struct folio *old) {}
 
 #endif /* CONFIG_MEM_ALLOC_PROFILING */
 
diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c
index 2414a7ee7ec7..35f7560a309a 100644
--- a/lib/alloc_tag.c
+++ b/lib/alloc_tag.c
@@ -189,26 +189,34 @@ void pgalloc_tag_split(struct folio *folio, int old_order, int new_order)
 	}
 }
 
-void pgalloc_tag_copy(struct folio *new, struct folio *old)
+void pgalloc_tag_swap(struct folio *new, struct folio *old)
 {
-	union pgtag_ref_handle handle;
-	union codetag_ref ref;
-	struct alloc_tag *tag;
+	union pgtag_ref_handle handle_old, handle_new;
+	union codetag_ref ref_old, ref_new;
+	struct alloc_tag *tag_old, *tag_new;
 
-	tag = pgalloc_tag_get(&old->page);
-	if (!tag)
+	tag_old = pgalloc_tag_get(&old->page);
+	if (!tag_old)
+		return;
+	tag_new = pgalloc_tag_get(&new->page);
+	if (!tag_new)
 		return;
 
-	if (!get_page_tag_ref(&new->page, &ref, &handle))
+	if (!get_page_tag_ref(&old->page, &ref_old, &handle_old))
 		return;
+	if (!get_page_tag_ref(&new->page, &ref_new, &handle_new)) {
+		put_page_tag_ref(handle_old);
+		return;
+	}
+
+	/* swap tags */
+	__alloc_tag_ref_set(&ref_old, tag_new);
+	update_page_tag_ref(handle_old, &ref_old);
+	__alloc_tag_ref_set(&ref_new, tag_old);
+	update_page_tag_ref(handle_new, &ref_new);
 
-	/* Clear the old ref to the original allocation tag. */
-	clear_page_tag_ref(&old->page);
-	/* Decrement the counters of the tag on get_new_folio. */
-	alloc_tag_sub(&ref, folio_size(new));
-	__alloc_tag_ref_set(&ref, tag);
-	update_page_tag_ref(handle, &ref);
-	put_page_tag_ref(handle);
+	put_page_tag_ref(handle_old);
+	put_page_tag_ref(handle_new);
 }
 
 static void shutdown_mem_profiling(bool remove_file)
diff --git a/mm/migrate.c b/mm/migrate.c
index 2ce6b4b814df..cc68583c86f9 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -745,7 +745,7 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
 		folio_set_readahead(newfolio);
 
 	folio_copy_owner(newfolio, folio);
-	pgalloc_tag_copy(newfolio, folio);
+	pgalloc_tag_swap(newfolio, folio);
 
 	mem_cgroup_migrate(folio, newfolio);
 }
-- 
2.39.2




* Re: [PATCH v2] mm/codetag: swap tags when migrate pages
  2024-11-29  2:52 ` [PATCH v2] " David Wang
@ 2024-11-29 22:59   ` Suren Baghdasaryan
  2024-11-29 23:43     ` David Wang
  2024-12-02 22:19   ` Yu Zhao
  1 sibling, 1 reply; 8+ messages in thread
From: Suren Baghdasaryan @ 2024-11-29 22:59 UTC (permalink / raw)
  To: David Wang; +Cc: yuzhao, kent.overstreet, akpm, linux-mm, linux-kernel

On Thu, Nov 28, 2024 at 6:52 PM David Wang <00107082@163.com> wrote:
>
> Current solution to adjust codetag references during page migration is
> done in 3 steps:
> 1. sets the codetag reference of the old page as empty (not pointing
> to any codetag);
> 2. subtracts counters of the new page to compensate for its own allocation;
> 3. sets codetag reference of the new page to point to the codetag of
> the old page.
> This does not work if CONFIG_MEM_ALLOC_PROFILING_DEBUG=n because
> set_codetag_empty() becomes NOOP. Instead, let's simply swap codetag
> references so that the new page is referencing the old codetag and the
> old page is referencing the new codetag. This way accounting stays
> valid and the logic makes more sense.
>
> Fixes: e0a955bf7f61 ("mm/codetag: add pgalloc_tag_copy()")
> Signed-off-by: David Wang <00107082@163.com>
> Closes: https://lore.kernel.org/lkml/20241124074318.399027-1-00107082@163.com/
> Acked-by: Suren Baghdasaryan <surenb@google.com>
> Suggested-by: Suren Baghdasaryan <surenb@google.com>

Thanks! This looks fine to me but IIRC I did not Ack this patch
before. In the future please do not add Acked-by until you get an
explicit Ack. That said,

Acked-by: Suren Baghdasaryan <surenb@google.com>

> ---
>  include/linux/pgalloc_tag.h |  4 ++--
>  lib/alloc_tag.c             | 36 ++++++++++++++++++++++--------------
>  mm/migrate.c                |  2 +-
>  3 files changed, 25 insertions(+), 17 deletions(-)
>
> diff --git a/include/linux/pgalloc_tag.h b/include/linux/pgalloc_tag.h
> index 0e43ab653ab6..3469c4b20105 100644
> --- a/include/linux/pgalloc_tag.h
> +++ b/include/linux/pgalloc_tag.h
> @@ -231,7 +231,7 @@ static inline void pgalloc_tag_sub_pages(struct alloc_tag *tag, unsigned int nr)
>  }
>
>  void pgalloc_tag_split(struct folio *folio, int old_order, int new_order);
> -void pgalloc_tag_copy(struct folio *new, struct folio *old);
> +void pgalloc_tag_swap(struct folio *new, struct folio *old);
>
>  void __init alloc_tag_sec_init(void);
>
> @@ -245,7 +245,7 @@ static inline struct alloc_tag *pgalloc_tag_get(struct page *page) { return NULL
>  static inline void pgalloc_tag_sub_pages(struct alloc_tag *tag, unsigned int nr) {}
>  static inline void alloc_tag_sec_init(void) {}
>  static inline void pgalloc_tag_split(struct folio *folio, int old_order, int new_order) {}
> -static inline void pgalloc_tag_copy(struct folio *new, struct folio *old) {}
> +static inline void pgalloc_tag_swap(struct folio *new, struct folio *old) {}
>
>  #endif /* CONFIG_MEM_ALLOC_PROFILING */
>
> diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c
> index 2414a7ee7ec7..35f7560a309a 100644
> --- a/lib/alloc_tag.c
> +++ b/lib/alloc_tag.c
> @@ -189,26 +189,34 @@ void pgalloc_tag_split(struct folio *folio, int old_order, int new_order)
>         }
>  }
>
> -void pgalloc_tag_copy(struct folio *new, struct folio *old)
> +void pgalloc_tag_swap(struct folio *new, struct folio *old)
>  {
> -       union pgtag_ref_handle handle;
> -       union codetag_ref ref;
> -       struct alloc_tag *tag;
> +       union pgtag_ref_handle handle_old, handle_new;
> +       union codetag_ref ref_old, ref_new;
> +       struct alloc_tag *tag_old, *tag_new;
>
> -       tag = pgalloc_tag_get(&old->page);
> -       if (!tag)
> +       tag_old = pgalloc_tag_get(&old->page);
> +       if (!tag_old)
> +               return;
> +       tag_new = pgalloc_tag_get(&new->page);
> +       if (!tag_new)
>                 return;
>
> -       if (!get_page_tag_ref(&new->page, &ref, &handle))
> +       if (!get_page_tag_ref(&old->page, &ref_old, &handle_old))
>                 return;
> +       if (!get_page_tag_ref(&new->page, &ref_new, &handle_new)) {
> +               put_page_tag_ref(handle_old);
> +               return;
> +       }
> +
> +       /* swap tags */
> +       __alloc_tag_ref_set(&ref_old, tag_new);
> +       update_page_tag_ref(handle_old, &ref_old);
> +       __alloc_tag_ref_set(&ref_new, tag_old);
> +       update_page_tag_ref(handle_new, &ref_new);
>
> -       /* Clear the old ref to the original allocation tag. */
> -       clear_page_tag_ref(&old->page);
> -       /* Decrement the counters of the tag on get_new_folio. */
> -       alloc_tag_sub(&ref, folio_size(new));
> -       __alloc_tag_ref_set(&ref, tag);
> -       update_page_tag_ref(handle, &ref);
> -       put_page_tag_ref(handle);
> +       put_page_tag_ref(handle_old);
> +       put_page_tag_ref(handle_new);
>  }
>
>  static void shutdown_mem_profiling(bool remove_file)
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 2ce6b4b814df..cc68583c86f9 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -745,7 +745,7 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
>                 folio_set_readahead(newfolio);
>
>         folio_copy_owner(newfolio, folio);
> -       pgalloc_tag_copy(newfolio, folio);
> +       pgalloc_tag_swap(newfolio, folio);
>
>         mem_cgroup_migrate(folio, newfolio);
>  }
> --
> 2.39.2
>



* Re: [PATCH v2] mm/codetag: swap tags when migrate pages
  2024-11-29 22:59   ` Suren Baghdasaryan
@ 2024-11-29 23:43     ` David Wang
  0 siblings, 0 replies; 8+ messages in thread
From: David Wang @ 2024-11-29 23:43 UTC (permalink / raw)
  To: Suren Baghdasaryan; +Cc: yuzhao, kent.overstreet, akpm, linux-mm, linux-kernel


At 2024-11-30 06:59:26, "Suren Baghdasaryan" <surenb@google.com> wrote:
>On Thu, Nov 28, 2024 at 6:52 PM David Wang <00107082@163.com> wrote:
>>
>> Current solution to adjust codetag references during page migration is
>> done in 3 steps:
>> 1. sets the codetag reference of the old page as empty (not pointing
>> to any codetag);
>> 2. subtracts counters of the new page to compensate for its own allocation;
>> 3. sets codetag reference of the new page to point to the codetag of
>> the old page.
>> This does not work if CONFIG_MEM_ALLOC_PROFILING_DEBUG=n because
>> set_codetag_empty() becomes NOOP. Instead, let's simply swap codetag
>> references so that the new page is referencing the old codetag and the
>> old page is referencing the new codetag. This way accounting stays
>> valid and the logic makes more sense.
>>
>> Fixes: e0a955bf7f61 ("mm/codetag: add pgalloc_tag_copy()")
>> Signed-off-by: David Wang <00107082@163.com>
>> Closes: https://lore.kernel.org/lkml/20241124074318.399027-1-00107082@163.com/
>> Acked-by: Suren Baghdasaryan <surenb@google.com>
>> Suggested-by: Suren Baghdasaryan <surenb@google.com>
>
>Thanks! This looks fine to me but IIRC I did not Ack this patch

Sorry, I am kind of new to the kernel development process, still catching up. 

>before. In the future please do not add Acked-by until you get an
>explicit Ack. That said,

Roger that~
I was wondering when "Acked-by" or "Reviewed-by" were added in other commits. Now I understand
that I am not supposed to do that.


Thanks
David

>



* Re: [PATCH v2] mm/codetag: swap tags when migrate pages
  2024-11-29  2:52 ` [PATCH v2] " David Wang
  2024-11-29 22:59   ` Suren Baghdasaryan
@ 2024-12-02 22:19   ` Yu Zhao
  1 sibling, 0 replies; 8+ messages in thread
From: Yu Zhao @ 2024-12-02 22:19 UTC (permalink / raw)
  To: David Wang; +Cc: surenb, kent.overstreet, akpm, linux-mm, linux-kernel

On Thu, Nov 28, 2024 at 7:52 PM David Wang <00107082@163.com> wrote:
>
> Current solution to adjust codetag references during page migration is
> done in 3 steps:
> 1. sets the codetag reference of the old page as empty (not pointing
> to any codetag);
> 2. subtracts counters of the new page to compensate for its own allocation;
> 3. sets codetag reference of the new page to point to the codetag of
> the old page.
> This does not work if CONFIG_MEM_ALLOC_PROFILING_DEBUG=n because
> set_codetag_empty() becomes NOOP. Instead, let's simply swap codetag
> references so that the new page is referencing the old codetag and the
> old page is referencing the new codetag. This way accounting stays
> valid and the logic makes more sense.
>
> Fixes: e0a955bf7f61 ("mm/codetag: add pgalloc_tag_copy()")
> Signed-off-by: David Wang <00107082@163.com>
> Closes: https://lore.kernel.org/lkml/20241124074318.399027-1-00107082@163.com/
> Acked-by: Suren Baghdasaryan <surenb@google.com>
> Suggested-by: Suren Baghdasaryan <surenb@google.com>

Acked-by: Yu Zhao <yuzhao@google.com>


