linux-mm.kvack.org archive mirror
* [PATCH] mm/mm_init.c: use round_up() to align movable range
@ 2025-02-07 10:04 Wei Yang
  2025-02-11 18:13 ` Shivank Garg
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Wei Yang @ 2025-02-07 10:04 UTC (permalink / raw)
  To: rppt, akpm; +Cc: linux-mm, Wei Yang

Since MAX_ORDER_NR_PAGES is a power of 2, let's use the faster round_up() version.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 mm/mm_init.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/mm_init.c b/mm/mm_init.c
index dec4084fe15a..99ef70a8b63c 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -438,7 +438,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
 		 * was requested by the user
 		 */
 		required_movablecore =
-			roundup(required_movablecore, MAX_ORDER_NR_PAGES);
+			round_up(required_movablecore, MAX_ORDER_NR_PAGES);
 		required_movablecore = min(totalpages, required_movablecore);
 		corepages = totalpages - required_movablecore;
 
@@ -549,7 +549,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
 		unsigned long start_pfn, end_pfn;
 
 		zone_movable_pfn[nid] =
-			roundup(zone_movable_pfn[nid], MAX_ORDER_NR_PAGES);
+			round_up(zone_movable_pfn[nid], MAX_ORDER_NR_PAGES);
 
 		get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
 		if (zone_movable_pfn[nid] >= end_pfn)
-- 
2.34.1

* Re: [PATCH] mm/mm_init.c: use round_up() to align movable range
  2025-02-07 10:04 [PATCH] mm/mm_init.c: use round_up() to align movable range Wei Yang
@ 2025-02-11 18:13 ` Shivank Garg
  2025-02-12  0:24   ` Wei Yang
  2025-02-13  6:29 ` Mike Rapoport
  2025-02-13  6:37 ` Anshuman Khandual
  2 siblings, 1 reply; 5+ messages in thread
From: Shivank Garg @ 2025-02-11 18:13 UTC (permalink / raw)
  To: Wei Yang, rppt, akpm; +Cc: linux-mm



On 2/7/2025 3:34 PM, Wei Yang wrote:
> Since MAX_ORDER_NR_PAGES is a power of 2, let's use the faster round_up() version.

Makes sense to me.

Reviewed-by: Shivank Garg <shivankg@amd.com>


I noticed two similar instances in the same file
where round_up() might also be applicable:

  mm_init.c (usemap_size):
    usemapsize = roundup(zonesize, pageblock_nr_pages);
    usemapsize = roundup(usemapsize, BITS_PER_LONG);

Since both pageblock_nr_pages (1UL << pageblock_order) and BITS_PER_LONG (32 or 64)
are powers of 2, these could potentially use round_up() as well. Perhaps 
worth considering in a follow-up patch?

Thanks,
Shivank



> 
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> ---
>  mm/mm_init.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index dec4084fe15a..99ef70a8b63c 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -438,7 +438,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
>  		 * was requested by the user
>  		 */
>  		required_movablecore =
> -			roundup(required_movablecore, MAX_ORDER_NR_PAGES);
> +			round_up(required_movablecore, MAX_ORDER_NR_PAGES);
>  		required_movablecore = min(totalpages, required_movablecore);
>  		corepages = totalpages - required_movablecore;
>  
> @@ -549,7 +549,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
>  		unsigned long start_pfn, end_pfn;
>  
>  		zone_movable_pfn[nid] =
> -			roundup(zone_movable_pfn[nid], MAX_ORDER_NR_PAGES);
> +			round_up(zone_movable_pfn[nid], MAX_ORDER_NR_PAGES);
>  
>  		get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
>  		if (zone_movable_pfn[nid] >= end_pfn)


* Re: [PATCH] mm/mm_init.c: use round_up() to align movable range
  2025-02-11 18:13 ` Shivank Garg
@ 2025-02-12  0:24   ` Wei Yang
  0 siblings, 0 replies; 5+ messages in thread
From: Wei Yang @ 2025-02-12  0:24 UTC (permalink / raw)
  To: Shivank Garg; +Cc: Wei Yang, rppt, akpm, linux-mm

On Tue, Feb 11, 2025 at 11:43:52PM +0530, Shivank Garg wrote:
>
>
>On 2/7/2025 3:34 PM, Wei Yang wrote:
>> Since MAX_ORDER_NR_PAGES is a power of 2, let's use the faster round_up() version.
>
>Makes sense to me.
>
>Reviewed-by: Shivank Garg <shivankg@amd.com>
>

Thanks for taking a look.

>
>I noticed two similar instances in the same file
>where round_up() might also be applicable:
>
>  mm_init.c (usemap_size):
>    usemapsize = roundup(zonesize, pageblock_nr_pages);
>    usemapsize = roundup(usemapsize, BITS_PER_LONG);
>
>Since both pageblock_nr_pages (1UL << pageblock_order) and BITS_PER_LONG (32 or 64)
>are powers of 2, these could potentially use round_up() as well. Perhaps 
>worth considering in a follow-up patch?

Looks reasonable to me. I will prepare a follow-up patch.

Thanks.

>
>Thanks,
>Shivank
>
>
>
>> 
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> ---
>>  mm/mm_init.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>> 
>> diff --git a/mm/mm_init.c b/mm/mm_init.c
>> index dec4084fe15a..99ef70a8b63c 100644
>> --- a/mm/mm_init.c
>> +++ b/mm/mm_init.c
>> @@ -438,7 +438,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
>>  		 * was requested by the user
>>  		 */
>>  		required_movablecore =
>> -			roundup(required_movablecore, MAX_ORDER_NR_PAGES);
>> +			round_up(required_movablecore, MAX_ORDER_NR_PAGES);
>>  		required_movablecore = min(totalpages, required_movablecore);
>>  		corepages = totalpages - required_movablecore;
>>  
>> @@ -549,7 +549,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
>>  		unsigned long start_pfn, end_pfn;
>>  
>>  		zone_movable_pfn[nid] =
>> -			roundup(zone_movable_pfn[nid], MAX_ORDER_NR_PAGES);
>> +			round_up(zone_movable_pfn[nid], MAX_ORDER_NR_PAGES);
>>  
>>  		get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
>>  		if (zone_movable_pfn[nid] >= end_pfn)

-- 
Wei Yang
Help you, Help me


* Re: [PATCH] mm/mm_init.c: use round_up() to align movable range
  2025-02-07 10:04 [PATCH] mm/mm_init.c: use round_up() to align movable range Wei Yang
  2025-02-11 18:13 ` Shivank Garg
@ 2025-02-13  6:29 ` Mike Rapoport
  2025-02-13  6:37 ` Anshuman Khandual
  2 siblings, 0 replies; 5+ messages in thread
From: Mike Rapoport @ 2025-02-13  6:29 UTC (permalink / raw)
  To: Wei Yang; +Cc: akpm, linux-mm

On Fri, Feb 07, 2025 at 10:04:53AM +0000, Wei Yang wrote:
> Since MAX_ORDER_NR_PAGES is a power of 2, let's use the faster round_up() version.
> 
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>

Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>

> ---
>  mm/mm_init.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index dec4084fe15a..99ef70a8b63c 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -438,7 +438,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
>  		 * was requested by the user
>  		 */
>  		required_movablecore =
> -			roundup(required_movablecore, MAX_ORDER_NR_PAGES);
> +			round_up(required_movablecore, MAX_ORDER_NR_PAGES);
>  		required_movablecore = min(totalpages, required_movablecore);
>  		corepages = totalpages - required_movablecore;
>  
> @@ -549,7 +549,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
>  		unsigned long start_pfn, end_pfn;
>  
>  		zone_movable_pfn[nid] =
> -			roundup(zone_movable_pfn[nid], MAX_ORDER_NR_PAGES);
> +			round_up(zone_movable_pfn[nid], MAX_ORDER_NR_PAGES);
>  
>  		get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
>  		if (zone_movable_pfn[nid] >= end_pfn)
> -- 
> 2.34.1
> 

-- 
Sincerely yours,
Mike.


* Re: [PATCH] mm/mm_init.c: use round_up() to align movable range
  2025-02-07 10:04 [PATCH] mm/mm_init.c: use round_up() to align movable range Wei Yang
  2025-02-11 18:13 ` Shivank Garg
  2025-02-13  6:29 ` Mike Rapoport
@ 2025-02-13  6:37 ` Anshuman Khandual
  2 siblings, 0 replies; 5+ messages in thread
From: Anshuman Khandual @ 2025-02-13  6:37 UTC (permalink / raw)
  To: Wei Yang, rppt, akpm; +Cc: linux-mm



On 2/7/25 15:34, Wei Yang wrote:
> Since MAX_ORDER_NR_PAGES is a power of 2, let's use the faster round_up() version.
> 
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> ---
>  mm/mm_init.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index dec4084fe15a..99ef70a8b63c 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -438,7 +438,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
>  		 * was requested by the user
>  		 */
>  		required_movablecore =
> -			roundup(required_movablecore, MAX_ORDER_NR_PAGES);
> +			round_up(required_movablecore, MAX_ORDER_NR_PAGES);
>  		required_movablecore = min(totalpages, required_movablecore);
>  		corepages = totalpages - required_movablecore;
>  
> @@ -549,7 +549,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
>  		unsigned long start_pfn, end_pfn;
>  
>  		zone_movable_pfn[nid] =
> -			roundup(zone_movable_pfn[nid], MAX_ORDER_NR_PAGES);
> +			round_up(zone_movable_pfn[nid], MAX_ORDER_NR_PAGES);
>  
>  		get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
>  		if (zone_movable_pfn[nid] >= end_pfn)

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>

