linux-mm.kvack.org archive mirror
* [PATCH v2 0/1] mm: Xen PV regression after THP PTE optimization
@ 2025-05-02 21:50 Petr Vaněk
  2025-05-02 21:50 ` [PATCH v2 1/1] mm: fix folio_pte_batch() on XEN PV Petr Vaněk
  0 siblings, 1 reply; 6+ messages in thread
From: Petr Vaněk @ 2025-05-02 21:50 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: David Hildenbrand, Andrew Morton, Ryan Roberts, xen-devel, x86,
	stable, Petr Vaněk

Hi all,

I recently discovered an mm regression, introduced in kernel version 6.9,
that affects systems running as a Xen PV domain [1]. The original fix
proposal wasn't ideal, but it sparked a discussion that helped us fully
understand the root cause.

The new v2 patch contains changes based on David Hildenbrand's proposal
to cap max_nr to the number of PFNs that actually remain in the folio
and to clean up the loop.

Thanks,
Petr

[1] https://lore.kernel.org/lkml/20250429142237.22138-1-arkamar@atlas.cz

Petr Vaněk (1):
  mm: fix folio_pte_batch() on XEN PV

 mm/internal.h | 27 +++++++++++----------------
 1 file changed, 11 insertions(+), 16 deletions(-)

-- 
2.48.1




* [PATCH v2 1/1] mm: fix folio_pte_batch() on XEN PV
  2025-05-02 21:50 [PATCH v2 0/1] mm: Xen PV regression after THP PTE optimization Petr Vaněk
@ 2025-05-02 21:50 ` Petr Vaněk
  2025-05-04  1:28   ` Andrew Morton
  0 siblings, 1 reply; 6+ messages in thread
From: Petr Vaněk @ 2025-05-02 21:50 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: David Hildenbrand, Andrew Morton, Ryan Roberts, xen-devel, x86,
	stable, Petr Vaněk

On XEN PV, folio_pte_batch() can incorrectly batch beyond the end of a
folio due to a corner case in pte_advance_pfn(). Specifically, when the
PFN following the folio maps to an invalidated MFN,

	expected_pte = pte_advance_pfn(expected_pte, nr);

produces a pte_none(). If the actual next PTE in memory is also
pte_none(), the pte_same() succeeds,

	if (!pte_same(pte, expected_pte))
		break;

the loop is not broken, and batching continues into unrelated memory.

For example, with a 4-page folio, the PTE layout might look like this:

[   53.465673] [ T2552] folio_pte_batch: printing PTE values at addr=0x7f1ac9dc5000
[   53.465674] [ T2552]   PTE[453] = 000000010085c125
[   53.465679] [ T2552]   PTE[454] = 000000010085d125
[   53.465682] [ T2552]   PTE[455] = 000000010085e125
[   53.465684] [ T2552]   PTE[456] = 000000010085f125
[   53.465686] [ T2552]   PTE[457] = 0000000000000000 <-- not present
[   53.465689] [ T2552]   PTE[458] = 0000000101da7125

pte_advance_pfn(PTE[456]) returns a pte_none() due to invalid PFN->MFN
mapping. The next actual PTE (PTE[457]) is also pte_none(), so the loop
continues and includes PTE[457] in the batch, resulting in 5 batched
entries for a 4-page folio. This triggers the following warning:

[   53.465751] [ T2552] page: refcount:85 mapcount:20 mapping:ffff88813ff4f6a8 index:0x110 pfn:0x10085c
[   53.465754] [ T2552] head: order:2 mapcount:80 entire_mapcount:0 nr_pages_mapped:4 pincount:0
[   53.465756] [ T2552] memcg:ffff888003573000
[   53.465758] [ T2552] aops:0xffffffff8226fd20 ino:82467c dentry name(?):"libc.so.6"
[   53.465761] [ T2552] flags: 0x2000000000416c(referenced|uptodate|lru|active|private|head|node=0|zone=2)
[   53.465764] [ T2552] raw: 002000000000416c ffffea0004021f08 ffffea0004021908 ffff88813ff4f6a8
[   53.465767] [ T2552] raw: 0000000000000110 ffff888133d8bd40 0000005500000013 ffff888003573000
[   53.465768] [ T2552] head: 002000000000416c ffffea0004021f08 ffffea0004021908 ffff88813ff4f6a8
[   53.465770] [ T2552] head: 0000000000000110 ffff888133d8bd40 0000005500000013 ffff888003573000
[   53.465772] [ T2552] head: 0020000000000202 ffffea0004021701 000000040000004f 00000000ffffffff
[   53.465774] [ T2552] head: 0000000300000003 8000000300000002 0000000000000013 0000000000000004
[   53.465775] [ T2552] page dumped because: VM_WARN_ON_FOLIO((_Generic((page + nr_pages - 1), const struct page *: (const struct folio *)_compound_head(page + nr_pages - 1), struct page *: (struct folio *)_compound_head(page + nr_pages - 1))) != folio)

The original code works as expected everywhere except on XEN PV, where
pte_advance_pfn() can yield a pte_none() after balloon inflation, due to
MFN invalidation. On XEN PV, pte_advance_pfn() ends up calling
__pte()->xen_make_pte()->pte_pfn_to_mfn(), which returns pte_none() when
mfn == INVALID_P2M_ENTRY.

A comment in pte_pfn_to_mfn() documents that nastiness:

	If there's no mfn for the pfn, then just create an
	empty non-present pte.  Unfortunately this loses
	information about the original pfn, so
	pte_mfn_to_pfn is asymmetric.
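
For reference, a condensed sketch of that logic (simplified from
arch/x86/xen/mmu_pv.c; the foreign/identity frame-bit handling is omitted
here):

	static pteval_t pte_pfn_to_mfn(pteval_t val)
	{
		if (val & _PAGE_PRESENT) {
			unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
			pteval_t flags = val & PTE_FLAGS_MASK;
			unsigned long mfn = __pfn_to_mfn(pfn);

			/* No MFN for this PFN (e.g. ballooned out): create
			 * an empty non-present PTE, losing the PFN. */
			if (unlikely(mfn == INVALID_P2M_ENTRY)) {
				mfn = 0;
				flags = 0;
			}
			val = ((pteval_t)mfn << PAGE_SHIFT) | flags;
		}
		return val;
	}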

While such hacks should certainly be removed, we can do better in
folio_pte_batch() and simply check ahead of time how many PTEs we can
possibly batch in our folio.

This way, we not only fix the issue but also clean up the code: the
pte_pfn() check inside the loop body is removed, and the end_ptep
comparison + arithmetic are avoided.
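
Applied to the example above (numbers taken from the PTE dump, purely for
illustration), the new cap stops the batch at the folio boundary even
though PTE[457] compares equal to the bogus expected_pte:

	/* folio_pfn(folio) == 0x10085c, folio_nr_pages(folio) == 4,
	 * pte_pfn(pte) == 0x10085c at the first batched entry (PTE[453]) */
	max_nr = min_t(unsigned long, max_nr,
		       0x10085c + 4 - 0x10085c);	/* capped at 4 */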

Fixes: f8d937761d65 ("mm/memory: optimize fork() with PTE-mapped THP")
Cc: stable@vger.kernel.org
Co-developed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Petr Vaněk <arkamar@atlas.cz>
---
 mm/internal.h | 27 +++++++++++----------------
 1 file changed, 11 insertions(+), 16 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index e9695baa5922..25a29872c634 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -248,11 +248,9 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 		pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
 		bool *any_writable, bool *any_young, bool *any_dirty)
 {
-	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
-	const pte_t *end_ptep = start_ptep + max_nr;
 	pte_t expected_pte, *ptep;
 	bool writable, young, dirty;
-	int nr;
+	int nr, cur_nr;
 
 	if (any_writable)
 		*any_writable = false;
@@ -265,11 +263,15 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 	VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
 	VM_WARN_ON_FOLIO(page_folio(pfn_to_page(pte_pfn(pte))) != folio, folio);
 
+	/* Limit max_nr to the actual remaining PFNs in the folio we could batch. */
+	max_nr = min_t(unsigned long, max_nr,
+		       folio_pfn(folio) + folio_nr_pages(folio) - pte_pfn(pte));
+
 	nr = pte_batch_hint(start_ptep, pte);
 	expected_pte = __pte_batch_clear_ignored(pte_advance_pfn(pte, nr), flags);
 	ptep = start_ptep + nr;
 
-	while (ptep < end_ptep) {
+	while (nr < max_nr) {
 		pte = ptep_get(ptep);
 		if (any_writable)
 			writable = !!pte_write(pte);
@@ -282,14 +284,6 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 		if (!pte_same(pte, expected_pte))
 			break;
 
-		/*
-		 * Stop immediately once we reached the end of the folio. In
-		 * corner cases the next PFN might fall into a different
-		 * folio.
-		 */
-		if (pte_pfn(pte) >= folio_end_pfn)
-			break;
-
 		if (any_writable)
 			*any_writable |= writable;
 		if (any_young)
@@ -297,12 +291,13 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 		if (any_dirty)
 			*any_dirty |= dirty;
 
-		nr = pte_batch_hint(ptep, pte);
-		expected_pte = pte_advance_pfn(expected_pte, nr);
-		ptep += nr;
+		cur_nr = pte_batch_hint(ptep, pte);
+		expected_pte = pte_advance_pfn(expected_pte, cur_nr);
+		ptep += cur_nr;
+		nr += cur_nr;
 	}
 
-	return min(ptep - start_ptep, max_nr);
+	return min(nr, max_nr);
 }
 
 /**
-- 
2.48.1




* Re: [PATCH v2 1/1] mm: fix folio_pte_batch() on XEN PV
  2025-05-02 21:50 ` [PATCH v2 1/1] mm: fix folio_pte_batch() on XEN PV Petr Vaněk
@ 2025-05-04  1:28   ` Andrew Morton
  2025-05-04  6:47     ` David Hildenbrand
  0 siblings, 1 reply; 6+ messages in thread
From: Andrew Morton @ 2025-05-04  1:28 UTC (permalink / raw)
  To: Petr Vaněk
  Cc: linux-mm, linux-kernel, David Hildenbrand, Ryan Roberts,
	xen-devel, x86, stable

On Fri,  2 May 2025 23:50:19 +0200 Petr Vaněk <arkamar@atlas.cz> wrote:

> On XEN PV, folio_pte_batch() can incorrectly batch beyond the end of a
> folio due to a corner case in pte_advance_pfn(). Specifically, when the
> PFN following the folio maps to an invalidated MFN,
> 
> 	expected_pte = pte_advance_pfn(expected_pte, nr);
> 
> produces a pte_none(). If the actual next PTE in memory is also
> pte_none(), the pte_same() succeeds,
> 
> 	if (!pte_same(pte, expected_pte))
> 		break;
> 
> the loop is not broken, and batching continues into unrelated memory.
> 
> ...

Looks OK for now I guess but it looks like we should pay some attention
to what types we're using.

> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -248,11 +248,9 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
>  		pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
>  		bool *any_writable, bool *any_young, bool *any_dirty)
>  {
> -	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
> -	const pte_t *end_ptep = start_ptep + max_nr;
>  	pte_t expected_pte, *ptep;
>  	bool writable, young, dirty;
> -	int nr;
> +	int nr, cur_nr;
>  
>  	if (any_writable)
>  		*any_writable = false;
> @@ -265,11 +263,15 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
>  	VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
>  	VM_WARN_ON_FOLIO(page_folio(pfn_to_page(pte_pfn(pte))) != folio, folio);
>  
> +	/* Limit max_nr to the actual remaining PFNs in the folio we could batch. */
> +	max_nr = min_t(unsigned long, max_nr,
> +		       folio_pfn(folio) + folio_nr_pages(folio) - pte_pfn(pte));
> +

Methinks max_nr really wants to be unsigned long.  That will permit the
cleanup of quite a bit of truncation, extension, signedness conversion
and general type chaos in folio_pte_batch()'s various callers.

And...

Why does folio_nr_pages() return a signed quantity?  It's a count.

And why the heck is folio_pte_batch() inlined?  It's larger than my
first hard disk and it has five callsites!




* Re: [PATCH v2 1/1] mm: fix folio_pte_batch() on XEN PV
  2025-05-04  1:28   ` Andrew Morton
@ 2025-05-04  6:47     ` David Hildenbrand
  2025-05-04  7:15       ` Andrew Morton
  0 siblings, 1 reply; 6+ messages in thread
From: David Hildenbrand @ 2025-05-04  6:47 UTC (permalink / raw)
  To: Andrew Morton, Petr Vaněk
  Cc: linux-mm, linux-kernel, Ryan Roberts, xen-devel, x86, stable

On 04.05.25 03:28, Andrew Morton wrote:
> On Fri,  2 May 2025 23:50:19 +0200 Petr Vaněk <arkamar@atlas.cz> wrote:
> 
>> On XEN PV, folio_pte_batch() can incorrectly batch beyond the end of a
>> folio due to a corner case in pte_advance_pfn(). Specifically, when the
>> PFN following the folio maps to an invalidated MFN,
>>
>> 	expected_pte = pte_advance_pfn(expected_pte, nr);
>>
>> produces a pte_none(). If the actual next PTE in memory is also
>> pte_none(), the pte_same() succeeds,
>>
>> 	if (!pte_same(pte, expected_pte))
>> 		break;
>>
>> the loop is not broken, and batching continues into unrelated memory.
>>
>> ...
> 
> Looks OK for now I guess but it looks like we should pay some attention
> to what types we're using.
> 
>> --- a/mm/internal.h
>> +++ b/mm/internal.h
>> @@ -248,11 +248,9 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
>>   		pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
>>   		bool *any_writable, bool *any_young, bool *any_dirty)
>>   {
>> -	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
>> -	const pte_t *end_ptep = start_ptep + max_nr;
>>   	pte_t expected_pte, *ptep;
>>   	bool writable, young, dirty;
>> -	int nr;
>> +	int nr, cur_nr;
>>   
>>   	if (any_writable)
>>   		*any_writable = false;
>> @@ -265,11 +263,15 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
>>   	VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
>>   	VM_WARN_ON_FOLIO(page_folio(pfn_to_page(pte_pfn(pte))) != folio, folio);
>>   
>> +	/* Limit max_nr to the actual remaining PFNs in the folio we could batch. */
>> +	max_nr = min_t(unsigned long, max_nr,
>> +		       folio_pfn(folio) + folio_nr_pages(folio) - pte_pfn(pte));
>> +
> 
> Methinks max_nr really wants to be unsigned long. 

We only batch within a single PTE table, so an integer was sufficient.

The signed value is the result of a discussion with Ryan regarding similar/related
(rmap) functions:

"
Personally I'd go with signed int (since
that's what all the counters in struct folio that we are manipulating are,
underneath the atomic_t) then check that nr_pages > 0 in
__folio_rmap_sanity_checks().
"

https://lore.kernel.org/linux-mm/20231204142146.91437-14-david@redhat.com/T/#ma0bfff0102f0f2391dfa94aa22a8b7219b92c957

As soon as we let "max_nr" be an "unsigned long", the return value should
be an "unsigned long" as well, and so should everybody calling that
function.

In this case here, we should likely just use whatever type "max_nr" is.

Not sure myself if we should change that here to unsigned long or long. Some
callers also operate with negative values IIRC (e.g., adjust the RSS by doing -= nr).

> That will permit the
> cleanup of quite a bit of truncation, extension, signedness conversion
> and general type chaos in folio_pte_batch()'s various callers.
> 
> And...
> 
> Why does folio_nr_pages() return a signed quantity?  It's a count.

A partial answer is in 1ea5212aed068 ("mm: factor out large folio handling
from folio_nr_pages() into folio_large_nr_pages()"), where I stumbled over the
reason for a signed value myself and at least made the other
functions be consistent with folio_nr_pages():

"
     While at it, let's consistently return a "long" value from all these
     similar functions.  Note that we cannot use "unsigned int" (even though
     _folio_nr_pages is of that type), because it would break some callers that
     do stuff like "-folio_nr_pages()".  Both "int" or "unsigned long" would
     work as well.

"

Note that folio_nr_pages() has returned a "long" since the very beginning,
probably using a signed value for consistency, because mapcounts /
refcounts are all signed as well.


> 
> And why the heck is folio_pte_batch() inlined?  It's larger than my
> first hard disk and it has five callsites!

:)

In case of fork/zap we really want it inlined because

(1) We want to optimize out all of the unnecessary checks we added for other users

(2) Zap/fork code is very sensitive to function call overhead

Probably, as that function sees more widespread use, we might want a
non-inlined variant that can be used in places where performance doesn't
matter all that much (although I am not sure there will be that many).

-- 
Cheers,

David / dhildenb




* Re: [PATCH v2 1/1] mm: fix folio_pte_batch() on XEN PV
  2025-05-04  6:47     ` David Hildenbrand
@ 2025-05-04  7:15       ` Andrew Morton
  2025-05-04  8:58         ` David Hildenbrand
  0 siblings, 1 reply; 6+ messages in thread
From: Andrew Morton @ 2025-05-04  7:15 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Petr Vaněk, linux-mm, linux-kernel, Ryan Roberts, xen-devel,
	x86, stable

On Sun, 4 May 2025 08:47:45 +0200 David Hildenbrand <david@redhat.com> wrote:

> > 
> > Methinks max_nr really wants to be unsigned long. 
> 
> We only batch within a single PTE table, so an integer was sufficient.
> 
> The signed value is the result of a discussion with Ryan regarding similar/related
> (rmap) functions:
> 
> "
> Personally I'd go with signed int (since
> that's what all the counters in struct folio that we are manipulating are,
> underneath the atomic_t) then check that nr_pages > 0 in
> __folio_rmap_sanity_checks().
> "
> 
> https://lore.kernel.org/linux-mm/20231204142146.91437-14-david@redhat.com/T/#ma0bfff0102f0f2391dfa94aa22a8b7219b92c957
> 
> As soon as we let "max_nr" be an "unsigned long", the return value should
> be an "unsigned long" as well, and so should everybody calling that
> function.
> 
> In this case here, we should likely just use whatever type "max_nr" is.
> 
> Not sure myself if we should change that here to unsigned long or long. Some
> callers also operate with negative values IIRC (e.g., adjust the RSS by doing -= nr).

"rss -= nr" doesn't require, expect or anticipate that `nr' can be negative!

> 
> > That will permit the
> > cleanup of quite a bit of truncation, extension, signedness conversion
> > and general type chaos in folio_pte_batch()'s various callers.
> > 
> > And...
> > 
> > Why does folio_nr_pages() return a signed quantity?  It's a count.
> 
> A partial answer is in 1ea5212aed068 ("mm: factor out large folio handling
> from folio_nr_pages() into folio_large_nr_pages()"), where I stumbled over the
> reason for a signed value myself and at least made the other
> functions be consistent with folio_nr_pages():
> 
> "
>      While at it, let's consistently return a "long" value from all these
>      similar functions.  Note that we cannot use "unsigned int" (even though
>      _folio_nr_pages is of that type), because it would break some callers that
>      do stuff like "-folio_nr_pages()".  Both "int" or "unsigned long" would
>      work as well.
> 
> "
> 
> Note that folio_nr_pages() has returned a "long" since the very beginning,
> probably using a signed value for consistency, because mapcounts /
> refcounts are all signed as well.

Geeze.

Can we step back and look at what we're doing?  Anything which counts
something (eg, has "nr" in the identifier) cannot be negative.

It's that damn "int" thing.  I think it was always a mistake that the C
language's go-to type is a signed one.  It's a system programming
language and system software rarely deals with negative scalars. 
Signed scalars are the rare case.

I do expect that the code in and around here would be cleaner and more
reliable if we were to do a careful expunging of inappropriately signed
variables.

> 
> > 
> > And why the heck is folio_pte_batch() inlined?  It's larger than my
> > first hard disk and it has five callsites!
> 
> :)
> 
> In case of fork/zap we really want it inlined because
> 
> (1) We want to optimize out all of the unnecessary checks we added for other users
> 
> (2) Zap/fork code is very sensitive to function call overhead
> 
> Probably, as that function sees more widespread use, we might want a
> non-inlined variant that can be used in places where performance doesn't
> matter all that much (although I am not sure there will be that many).

A quick test.

before:
   text	   data	    bss	    dec	    hex	filename
  12380	    470	      0	  12850	   3232	mm/madvise.o
  52975	   2689	     24	  55688	   d988	mm/memory.o
  25305	   1448	   2096	  28849	   70b1	mm/mempolicy.o
   8573	    924	      4	   9501	   251d	mm/mlock.o
  20950	   5864	     16	  26830	   68ce	mm/rmap.o

 (120183)

after:

   text	   data	    bss	    dec	    hex	filename
  11916	    470	      0	  12386	   3062	mm/madvise.o
  52990	   2697	     24	  55711	   d99f	mm/memory.o
  25161	   1448	   2096	  28705	   7021	mm/mempolicy.o
   8381	    924	      4	   9309	   245d	mm/mlock.o
  20806	   5864	     16	  26686	   683e	mm/rmap.o

 (119254)

so uninlining saves a kilobyte of text - less than I expected but
almost 1%.

Quite a lot of the inlines in internal.h could do with having a
critical eye upon them.



* Re: [PATCH v2 1/1] mm: fix folio_pte_batch() on XEN PV
  2025-05-04  7:15       ` Andrew Morton
@ 2025-05-04  8:58         ` David Hildenbrand
  0 siblings, 0 replies; 6+ messages in thread
From: David Hildenbrand @ 2025-05-04  8:58 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Petr Vaněk, linux-mm, linux-kernel, Ryan Roberts, xen-devel,
	x86, stable

On 04.05.25 09:15, Andrew Morton wrote:
> On Sun, 4 May 2025 08:47:45 +0200 David Hildenbrand <david@redhat.com> wrote:
> 
>>>
>>> Methinks max_nr really wants to be unsigned long.
>>
>> We only batch within a single PTE table, so an integer was sufficient.
>>
>> The signed value is the result of a discussion with Ryan regarding similar/related
>> (rmap) functions:
>>
>> "
>> Personally I'd go with signed int (since
>> that's what all the counters in struct folio that we are manipulating are,
>> underneath the atomic_t) then check that nr_pages > 0 in
>> __folio_rmap_sanity_checks().
>> "
>>
>> https://lore.kernel.org/linux-mm/20231204142146.91437-14-david@redhat.com/T/#ma0bfff0102f0f2391dfa94aa22a8b7219b92c957
>>
>> As soon as we let "max_nr" be an "unsigned long", the return value should
>> be an "unsigned long" as well, and so should everybody calling that
>> function.
>> 
>> In this case here, we should likely just use whatever type "max_nr" is.
>> 
>> Not sure myself if we should change that here to unsigned long or long. Some
>> callers also operate with negative values IIRC (e.g., adjust the RSS by doing -= nr).
> 
> "rss -= nr" doesn't require, expect or anticipate that `nr' can be negative!

The one thing I ran into with "unsigned int" around folio_nr_pages()
was that if you pass

-folio_nr_pages()

into a function that expects a "long" (adding vs. removing a value from a
counter), then the result might not be what one would expect when glancing
at the code:

#include <stdio.h>

static __attribute__((noinline)) void print(long diff)
{
         printf("%ld\n", diff);
}

static int value_int()
{
         return 12345;
}

static unsigned int value_unsigned_int()
{
         return 12345;
}

static long value_long()
{
         return 12345;
}

static unsigned long value_unsigned_long()
{
         return 12345;
}

int main(void)
{
         print(-value_int());
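         /* -value_unsigned_int() wraps in unsigned int arithmetic and is
          * then zero-extended to long, hence 4294954951 below. */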
         print(-value_unsigned_int());
         print(-value_long());
         print(-value_unsigned_long());
         return 0;
}


$ ./tmp
-12345
4294954951
-12345
-12345

So, I am fine with using "unsigned long" (as stated in that commit description below).

> 
>>
>>> That will permit the
>>> cleanup of quite a bit of truncation, extension, signedness conversion
>>> and general type chaos in folio_pte_batch()'s various callers.
>>> 
>>> And...
>>>
>>> Why does folio_nr_pages() return a signed quantity?  It's a count.
>>
>> A partial answer is in 1ea5212aed068 ("mm: factor out large folio handling
>> from folio_nr_pages() into folio_large_nr_pages()"), where I stumbled over the
>> reason for a signed value myself and at least made the other
>> functions be consistent with folio_nr_pages():
>>
>> "
>>       While at it, let's consistently return a "long" value from all these
>>       similar functions.  Note that we cannot use "unsigned int" (even though
>>       _folio_nr_pages is of that type), because it would break some callers that
>>       do stuff like "-folio_nr_pages()".  Both "int" or "unsigned long" would
>>       work as well.
>>
>> "
>>
>> Note that folio_nr_pages() has returned a "long" since the very beginning,
>> probably using a signed value for consistency, because mapcounts /
>> refcounts are all signed as well.
> 
> Geeze.
> 
> Can we step back and look at what we're doing?  Anything which counts
> something (eg, has "nr" in the identifier) cannot be negative.

Yes. Unless we want to catch underflows (e.g., mapcount / refcount). For "nr_pages" I agree.
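
(A sketch with a hypothetical counter: a signed count makes a buggy extra
decrement easy to assert on.)

	atomic_t mapcount = ATOMIC_INIT(0);

	atomic_dec(&mapcount);			/* buggy extra unmap */
	VM_WARN_ON(atomic_read(&mapcount) < 0);	/* fires: reads -1 */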

> 
> It's that damn "int" thing.  I think it was always a mistake that the C
> language's go-to type is a signed one. 

Yeah. But see above that "unsigned int" in combination with long can also cause pain.

> It's a system programming
> language and system software rarely deals with negative scalars.
> Signed scalars are the rare case.
> 
> I do expect that the code in and around here would be cleaner and more
> reliable if we were to do a careful expunging of inappropriately signed
> variables.

Maybe, but it would mostly be an "int -> unsigned long" conversion, probably not
much more. I'm not against cleaning that up at all.

> 
>>
>>>
>>> And why the heck is folio_pte_batch() inlined?  It's larger than my
>>> first hard disk and it has five callsites!
>>
>> :)
>>
>> In case of fork/zap we really want it inlined because
>>
>> (1) We want to optimize out all of the unnecessary checks we added for other users
>>
>> (2) Zap/fork code is very sensitive to function call overhead
>>
>> Probably, as that function sees more widespread use, we might want a
>> non-inlined variant that can be used in places where performance doesn't
>> matter all that much (although I am not sure there will be that many).
> 
> a quick test.
> 
> before:
>     text	   data	    bss	    dec	    hex	filename
>    12380	    470	      0	  12850	   3232	mm/madvise.o
>    52975	   2689	     24	  55688	   d988	mm/memory.o
>    25305	   1448	   2096	  28849	   70b1	mm/mempolicy.o
>     8573	    924	      4	   9501	   251d	mm/mlock.o
>    20950	   5864	     16	  26830	   68ce	mm/rmap.o
> 
>   (120183)
> 
> after:
> 
>     text	   data	    bss	    dec	    hex	filename
>    11916	    470	      0	  12386	   3062	mm/madvise.o
>    52990	   2697	     24	  55711	   d99f	mm/memory.o
>    25161	   1448	   2096	  28705	   7021	mm/mempolicy.o
>     8381	    924	      4	   9309	   245d	mm/mlock.o
>    20806	   5864	     16	  26686	   683e	mm/rmap.o
> 
>   (119254)
> 
> so uninlining saves a kilobyte of text - less than I expected but
> almost 1%.

As I said, for fork+zap/unmap we really want to inline -- they were the
first two users of that function, back when it was still simpler and
resided in mm/memory.c. For the other users, it is probably okay to have
a non-inlined variant in mm/util.c.
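
One possible shape of that split (a hypothetical sketch; the names and the
exact parameter list are assumptions, not merged code):

	/* mm/internal.h: the hot version stays inlined for fork/zap. */
	static __always_inline int __folio_pte_batch(struct folio *folio,
		unsigned long addr, pte_t *start_ptep, pte_t pte, int max_nr,
		fpb_t flags, bool *any_writable, bool *any_young,
		bool *any_dirty)
	{
		/* ... body as in the patch above ... */
	}

	/* mm/util.c: out-of-line wrapper for the colder callers. */
	int folio_pte_batch(struct folio *folio, unsigned long addr,
		pte_t *start_ptep, pte_t pte, int max_nr)
	{
		return __folio_pte_batch(folio, addr, start_ptep, pte,
					 max_nr, 0, NULL, NULL, NULL);
	}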

-- 
Cheers,

David / dhildenb



