linux-mm.kvack.org archive mirror
* [PATCH v3 0/3] selftests/mm: hugetlb cgroup charging: robustness fixes
@ 2025-12-21 12:26 Li Wang
  2025-12-21 12:26 ` [PATCH v3 1/3] selftests/mm/write_to_hugetlbfs: parse -s as size_t Li Wang
                   ` (2 more replies)
  0 siblings, 3 replies; 21+ messages in thread
From: Li Wang @ 2025-12-21 12:26 UTC (permalink / raw)
  To: akpm, linux-kselftest, linux-kernel, linux-mm

Changes in v3:
  - 1/3: no changes.
  - 2/3: reorder with 3/3, and drop the 'size=' mount args.
  - 3/3: add $path check, improve variable declaration, sleep 1s for 60 tries.

Changes in v2:
  - 1/3: Parse -s using sscanf("%zu", ...) instead of strtoull().
  - 2/3: Fix typo in charge_reserved_hugetlb.sh ("reseravation" -> "reservation").
  - 3/3: No changes.

This series fixes a few issues in the hugetlb cgroup charging selftests
(write_to_hugetlbfs.c + charge_reserved_hugetlb.sh) that show up on systems
with large hugepages (e.g. 512MB) and when failures cause the test to wait
indefinitely.

On an aarch64 64k page kernel with 512MB hugepages, the test consistently
fails in write_to_hugetlbfs with ENOMEM and then hangs waiting for the
expected usage values. The root cause is that charge_reserved_hugetlb.sh
mounts hugetlbfs with a fixed size=256M, which is smaller than a single
hugepage, resulting in a mount with size=0 capacity.

In addition, write_to_hugetlbfs previously parsed -s with atoi() into an
int, which can truncate large sizes and print them as negative values.

Reproducer / environment:
  - Kernel: 6.12.0-xxx.el10.aarch64+64k
  - Hugepagesize: 524288 kB (512MB)
  - ./charge_reserved_hugetlb.sh -cgroup-v2
  - Observed mount: pagesize=512M,size=0 before this series

After applying the series, the test completes successfully on the above setup.

Li Wang (3):
  selftests/mm/write_to_hugetlbfs: parse -s as size_t
  selftests/mm/charge_reserved_hugetlb: drop mount size for hugetlbfs
  selftests/mm/charge_reserved_hugetlb.sh: add waits with timeout helper

 .../selftests/mm/charge_reserved_hugetlb.sh   | 55 +++++++++++--------
 .../testing/selftests/mm/write_to_hugetlbfs.c |  9 ++-
 2 files changed, 38 insertions(+), 26 deletions(-)

-- 
2.49.0



^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v3 1/3] selftests/mm/write_to_hugetlbfs: parse -s as size_t
  2025-12-21 12:26 [PATCH v3 0/3] selftests/mm: hugetlb cgroup charging: robustness fixes Li Wang
@ 2025-12-21 12:26 ` Li Wang
  2025-12-21 20:23   ` Waiman Long
  2025-12-21 22:10   ` David Laight
  2025-12-21 12:26 ` [PATCH v3 2/3] selftests/mm/charge_reserved_hugetlb: drop mount size for hugetlbfs Li Wang
  2025-12-21 12:26 ` [PATCH v3 3/3] selftests/mm/charge_reserved_hugetlb.sh: add waits with timeout helper Li Wang
  2 siblings, 2 replies; 21+ messages in thread
From: Li Wang @ 2025-12-21 12:26 UTC (permalink / raw)
  To: akpm, linux-kselftest, linux-kernel, linux-mm
  Cc: David Hildenbrand, Mark Brown, Shuah Khan, Waiman Long

write_to_hugetlbfs currently parses the -s size argument with atoi()
into an int. This silently accepts malformed input, cannot report overflow,
and can truncate large sizes.

--- Error log ---
 # uname -r
 6.12.0-xxx.el10.aarch64+64k

 # ls /sys/kernel/mm/hugepages/hugepages-*
 hugepages-16777216kB/  hugepages-2048kB/  hugepages-524288kB/

 #./charge_reserved_hugetlb.sh -cgroup-v2
 # -----------------------------------------
 ...
 # nr hugepages = 10
 # writing cgroup limit: 5368709120
 # writing reseravation limit: 5368709120
 ...
 # Writing to this path: /mnt/huge/test
 # Writing this size: -1610612736        <--------

Switch the size variable to size_t and parse -s with sscanf("%zu", ...).
Also print the size using %zu.

This avoids incorrect behavior with large -s values and makes the utility
more robust.

Signed-off-by: Li Wang <liwang@redhat.com>
Cc: David Hildenbrand <david@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
---
 tools/testing/selftests/mm/write_to_hugetlbfs.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/mm/write_to_hugetlbfs.c b/tools/testing/selftests/mm/write_to_hugetlbfs.c
index 34c91f7e6128..ecb5f7619960 100644
--- a/tools/testing/selftests/mm/write_to_hugetlbfs.c
+++ b/tools/testing/selftests/mm/write_to_hugetlbfs.c
@@ -68,7 +68,7 @@ int main(int argc, char **argv)
 	int key = 0;
 	int *ptr = NULL;
 	int c = 0;
-	int size = 0;
+	size_t size = 0;
 	char path[256] = "";
 	enum method method = MAX_METHOD;
 	int want_sleep = 0, private = 0;
@@ -86,7 +86,10 @@ int main(int argc, char **argv)
 	while ((c = getopt(argc, argv, "s:p:m:owlrn")) != -1) {
 		switch (c) {
 		case 's':
-			size = atoi(optarg);
+			if (sscanf(optarg, "%zu", &size) != 1) {
+				perror("Invalid -s.");
+				exit_usage();
+			}
 			break;
 		case 'p':
 			strncpy(path, optarg, sizeof(path) - 1);
@@ -131,7 +134,7 @@ int main(int argc, char **argv)
 	}
 
 	if (size != 0) {
-		printf("Writing this size: %d\n", size);
+		printf("Writing this size: %zu\n", size);
 	} else {
 		errno = EINVAL;
 		perror("size not found");
-- 
2.49.0




* [PATCH v3 2/3] selftests/mm/charge_reserved_hugetlb: drop mount size for hugetlbfs
  2025-12-21 12:26 [PATCH v3 0/3] selftests/mm: hugetlb cgroup charging: robustness fixes Li Wang
  2025-12-21 12:26 ` [PATCH v3 1/3] selftests/mm/write_to_hugetlbfs: parse -s as size_t Li Wang
@ 2025-12-21 12:26 ` Li Wang
  2025-12-21 20:24   ` Waiman Long
  2025-12-22 10:01   ` David Hildenbrand (Red Hat)
  2025-12-21 12:26 ` [PATCH v3 3/3] selftests/mm/charge_reserved_hugetlb.sh: add waits with timeout helper Li Wang
  2 siblings, 2 replies; 21+ messages in thread
From: Li Wang @ 2025-12-21 12:26 UTC (permalink / raw)
  To: akpm, linux-kselftest, linux-kernel, linux-mm
  Cc: David Hildenbrand, Mark Brown, Shuah Khan, Waiman Long

charge_reserved_hugetlb.sh mounts a hugetlbfs instance at /mnt/huge with
a fixed size of 256M. On systems with large base hugepages (e.g. 512MB),
this is smaller than a single hugepage, so the hugetlbfs mount ends up
with zero capacity (often visible as size=0 in mount output).

As a result, write_to_hugetlbfs fails with ENOMEM and the test can hang
waiting for progress.

--- Error log ---
  # uname -r
  6.12.0-xxx.el10.aarch64+64k

  #./charge_reserved_hugetlb.sh -cgroup-v2
  # -----------------------------------------
  ...
  # nr hugepages = 10
  # writing cgroup limit: 5368709120
  # writing reseravation limit: 5368709120
  ...
  # write_to_hugetlbfs: Error mapping the file: Cannot allocate memory
  # Waiting for hugetlb memory reservation to reach size 2684354560.
  # 0
  # Waiting for hugetlb memory reservation to reach size 2684354560.
  # 0
  ...

  # mount |grep /mnt/huge
  none on /mnt/huge type hugetlbfs (rw,relatime,seclabel,pagesize=512M,size=0)

  # grep -i huge /proc/meminfo
  ...
  HugePages_Total:      10
  HugePages_Free:       10
  HugePages_Rsvd:        0
  HugePages_Surp:        0
  Hugepagesize:     524288 kB
  Hugetlb:         5242880 kB

Drop the 'size=256M' mount option so the filesystem capacity is sufficient
regardless of the HugeTLB page size.

Fixes: 29750f71a9 ("hugetlb_cgroup: add hugetlb_cgroup reservation tests")
Signed-off-by: Li Wang <liwang@redhat.com>
Cc: David Hildenbrand <david@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Waiman Long <longman@redhat.com>
---
 tools/testing/selftests/mm/charge_reserved_hugetlb.sh | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
index e1fe16bcbbe8..fa6713892d82 100755
--- a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
+++ b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
@@ -290,7 +290,7 @@ function run_test() {
   setup_cgroup "hugetlb_cgroup_test" "$cgroup_limit" "$reservation_limit"
 
   mkdir -p /mnt/huge
-  mount -t hugetlbfs -o pagesize=${MB}M,size=256M none /mnt/huge
+  mount -t hugetlbfs -o pagesize=${MB}M none /mnt/huge
 
   write_hugetlbfs_and_get_usage "hugetlb_cgroup_test" "$size" "$populate" \
     "$write" "/mnt/huge/test" "$method" "$private" "$expect_failure" \
@@ -344,7 +344,7 @@ function run_multiple_cgroup_test() {
   setup_cgroup "hugetlb_cgroup_test2" "$cgroup_limit2" "$reservation_limit2"
 
   mkdir -p /mnt/huge
-  mount -t hugetlbfs -o pagesize=${MB}M,size=256M none /mnt/huge
+  mount -t hugetlbfs -o pagesize=${MB}M none /mnt/huge
 
   write_hugetlbfs_and_get_usage "hugetlb_cgroup_test1" "$size1" \
     "$populate1" "$write1" "/mnt/huge/test1" "$method" "$private" \
-- 
2.49.0




* [PATCH v3 3/3] selftests/mm/charge_reserved_hugetlb.sh: add waits with timeout helper
  2025-12-21 12:26 [PATCH v3 0/3] selftests/mm: hugetlb cgroup charging: robustness fixes Li Wang
  2025-12-21 12:26 ` [PATCH v3 1/3] selftests/mm/write_to_hugetlbfs: parse -s as size_t Li Wang
  2025-12-21 12:26 ` [PATCH v3 2/3] selftests/mm/charge_reserved_hugetlb: drop mount size for hugetlbfs Li Wang
@ 2025-12-21 12:26 ` Li Wang
  2025-12-21 20:30   ` Waiman Long
  2025-12-22 10:06   ` David Hildenbrand (Red Hat)
  2 siblings, 2 replies; 21+ messages in thread
From: Li Wang @ 2025-12-21 12:26 UTC (permalink / raw)
  To: akpm, linux-kselftest, linux-kernel, linux-mm
  Cc: David Hildenbrand, Mark Brown, Shuah Khan, Waiman Long

The hugetlb cgroup usage wait loops in charge_reserved_hugetlb.sh are
unbounded and can hang forever if the expected cgroup file value never
appears (e.g. when write_to_hugetlbfs fails to map the file).

--- Error log ---
  # uname -r
  6.12.0-xxx.el10.aarch64+64k

  # ls /sys/kernel/mm/hugepages/hugepages-*
  hugepages-16777216kB/  hugepages-2048kB/  hugepages-524288kB/

  #./charge_reserved_hugetlb.sh -cgroup-v2
  # -----------------------------------------
  ...
  # nr hugepages = 10
  # writing cgroup limit: 5368709120
  # writing reseravation limit: 5368709120
  ...
  # write_to_hugetlbfs: Error mapping the file: Cannot allocate memory
  # Waiting for hugetlb memory reservation to reach size 2684354560.
  # 0
  # Waiting for hugetlb memory reservation to reach size 2684354560.
  # 0
  # Waiting for hugetlb memory reservation to reach size 2684354560.
  # 0
  # Waiting for hugetlb memory reservation to reach size 2684354560.
  # 0
  # Waiting for hugetlb memory reservation to reach size 2684354560.
  # 0
  # Waiting for hugetlb memory reservation to reach size 2684354560.
  # 0
  ...

Introduce a small helper, wait_for_file_value(), and use it for:
  - waiting for reservation usage to drop to 0,
  - waiting for reservation usage to reach a given size,
  - waiting for fault usage to reach a given size.

This makes the waits consistent and adds a hard timeout (60 tries with
1s sleep) so the test fails instead of stalling indefinitely.

Signed-off-by: Li Wang <liwang@redhat.com>
Cc: David Hildenbrand <david@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Waiman Long <longman@redhat.com>
---
 .../selftests/mm/charge_reserved_hugetlb.sh   | 51 +++++++++++--------
 1 file changed, 30 insertions(+), 21 deletions(-)

diff --git a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
index fa6713892d82..447769657634 100755
--- a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
+++ b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
@@ -100,7 +100,7 @@ function setup_cgroup() {
   echo writing cgroup limit: "$cgroup_limit"
   echo "$cgroup_limit" >$cgroup_path/$name/hugetlb.${MB}MB.$fault_limit_file
 
-  echo writing reseravation limit: "$reservation_limit"
+  echo writing reservation limit: "$reservation_limit"
   echo "$reservation_limit" > \
     $cgroup_path/$name/hugetlb.${MB}MB.$reservation_limit_file
 
@@ -112,41 +112,50 @@ function setup_cgroup() {
   fi
 }
 
+function wait_for_file_value() {
+  local path="$1"
+  local expect="$2"
+  local max_tries=60
+
+  if [[ ! -r "$path" ]]; then
+    echo "ERROR: cannot read '$path', missing or permission denied"
+    return 1
+  fi
+
+  for ((i=1; i<=max_tries; i++)); do
+    local cur="$(cat "$path")"
+    if [[ "$cur" == "$expect" ]]; then
+      return 0
+    fi
+    echo "Waiting for $path to become '$expect' (current: '$cur') (try $i/$max_tries)"
+    sleep 1
+  done
+
+  echo "ERROR: timeout waiting for $path to become '$expect'"
+  return 1
+}
+
 function wait_for_hugetlb_memory_to_get_depleted() {
   local cgroup="$1"
   local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
-  # Wait for hugetlbfs memory to get depleted.
-  while [ $(cat $path) != 0 ]; do
-    echo Waiting for hugetlb memory to get depleted.
-    cat $path
-    sleep 0.5
-  done
+
+  wait_for_file_value "$path" "0"
 }
 
 function wait_for_hugetlb_memory_to_get_reserved() {
   local cgroup="$1"
   local size="$2"
-
   local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
-  # Wait for hugetlbfs memory to get written.
-  while [ $(cat $path) != $size ]; do
-    echo Waiting for hugetlb memory reservation to reach size $size.
-    cat $path
-    sleep 0.5
-  done
+
+  wait_for_file_value "$path" "$size"
 }
 
 function wait_for_hugetlb_memory_to_get_written() {
   local cgroup="$1"
   local size="$2"
-
   local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$fault_usage_file"
-  # Wait for hugetlbfs memory to get written.
-  while [ $(cat $path) != $size ]; do
-    echo Waiting for hugetlb memory to reach size $size.
-    cat $path
-    sleep 0.5
-  done
+
+  wait_for_file_value "$path" "$size"
 }
 
 function write_hugetlbfs_and_get_usage() {
-- 
2.49.0




* Re: [PATCH v3 1/3] selftests/mm/write_to_hugetlbfs: parse -s as size_t
  2025-12-21 12:26 ` [PATCH v3 1/3] selftests/mm/write_to_hugetlbfs: parse -s as size_t Li Wang
@ 2025-12-21 20:23   ` Waiman Long
  2025-12-21 22:10   ` David Laight
  1 sibling, 0 replies; 21+ messages in thread
From: Waiman Long @ 2025-12-21 20:23 UTC (permalink / raw)
  To: Li Wang, akpm, linux-kselftest, linux-kernel, linux-mm
  Cc: David Hildenbrand, Mark Brown, Shuah Khan

On 12/21/25 7:26 AM, Li Wang wrote:
> write_to_hugetlbfs currently parses the -s size argument with atoi()
> into an int. This silently accepts malformed input, cannot report overflow,
> and can truncate large sizes.
>
> --- Error log ---
>   # uname -r
>   6.12.0-xxx.el10.aarch64+64k
>
>   # ls /sys/kernel/mm/hugepages/hugepages-*
>   hugepages-16777216kB/  hugepages-2048kB/  hugepages-524288kB/
>
>   #./charge_reserved_hugetlb.sh -cgroup-v2
>   # -----------------------------------------
>   ...
>   # nr hugepages = 10
>   # writing cgroup limit: 5368709120
>   # writing reseravation limit: 5368709120
>   ...
>   # Writing to this path: /mnt/huge/test
>   # Writing this size: -1610612736        <--------
>
> Switch the size variable to size_t and parse -s with sscanf("%zu", ...).
> Also print the size using %zu.
>
> This avoids incorrect behavior with large -s values and makes the utility
> more robust.
>
> Signed-off-by: Li Wang <liwang@redhat.com>
> Cc: David Hildenbrand <david@kernel.org>
> Cc: Mark Brown <broonie@kernel.org>
> Cc: Shuah Khan <shuah@kernel.org>
> Cc: Waiman Long <longman@redhat.com>
> Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
> ---
>   tools/testing/selftests/mm/write_to_hugetlbfs.c | 9 ++++++---
>   1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/tools/testing/selftests/mm/write_to_hugetlbfs.c b/tools/testing/selftests/mm/write_to_hugetlbfs.c
> index 34c91f7e6128..ecb5f7619960 100644
> --- a/tools/testing/selftests/mm/write_to_hugetlbfs.c
> +++ b/tools/testing/selftests/mm/write_to_hugetlbfs.c
> @@ -68,7 +68,7 @@ int main(int argc, char **argv)
>   	int key = 0;
>   	int *ptr = NULL;
>   	int c = 0;
> -	int size = 0;
> +	size_t size = 0;
>   	char path[256] = "";
>   	enum method method = MAX_METHOD;
>   	int want_sleep = 0, private = 0;
> @@ -86,7 +86,10 @@ int main(int argc, char **argv)
>   	while ((c = getopt(argc, argv, "s:p:m:owlrn")) != -1) {
>   		switch (c) {
>   		case 's':
> -			size = atoi(optarg);
> +			if (sscanf(optarg, "%zu", &size) != 1) {
> +				perror("Invalid -s.");
> +				exit_usage();
> +			}
>   			break;
>   		case 'p':
>   			strncpy(path, optarg, sizeof(path) - 1);
> @@ -131,7 +134,7 @@ int main(int argc, char **argv)
>   	}
>   
>   	if (size != 0) {
> -		printf("Writing this size: %d\n", size);
> +		printf("Writing this size: %zu\n", size);
>   	} else {
>   		errno = EINVAL;
>   		perror("size not found");

LGTM

Acked-by: Waiman Long <longman@redhat.com>




* Re: [PATCH v3 2/3] selftests/mm/charge_reserved_hugetlb: drop mount size for hugetlbfs
  2025-12-21 12:26 ` [PATCH v3 2/3] selftests/mm/charge_reserved_hugetlb: drop mount size for hugetlbfs Li Wang
@ 2025-12-21 20:24   ` Waiman Long
  2025-12-22 10:01   ` David Hildenbrand (Red Hat)
  1 sibling, 0 replies; 21+ messages in thread
From: Waiman Long @ 2025-12-21 20:24 UTC (permalink / raw)
  To: Li Wang, akpm, linux-kselftest, linux-kernel, linux-mm
  Cc: David Hildenbrand, Mark Brown, Shuah Khan

On 12/21/25 7:26 AM, Li Wang wrote:
> charge_reserved_hugetlb.sh mounts a hugetlbfs instance at /mnt/huge with
> a fixed size of 256M. On systems with large base hugepages (e.g. 512MB),
> this is smaller than a single hugepage, so the hugetlbfs mount ends up
> with zero capacity (often visible as size=0 in mount output).
>
> As a result, write_to_hugetlbfs fails with ENOMEM and the test can hang
> waiting for progress.
>
> --- Error log ---
>    # uname -r
>    6.12.0-xxx.el10.aarch64+64k
>
>    #./charge_reserved_hugetlb.sh -cgroup-v2
>    # -----------------------------------------
>    ...
>    # nr hugepages = 10
>    # writing cgroup limit: 5368709120
>    # writing reseravation limit: 5368709120
>    ...
>    # write_to_hugetlbfs: Error mapping the file: Cannot allocate memory
>    # Waiting for hugetlb memory reservation to reach size 2684354560.
>    # 0
>    # Waiting for hugetlb memory reservation to reach size 2684354560.
>    # 0
>    ...
>
>    # mount |grep /mnt/huge
>    none on /mnt/huge type hugetlbfs (rw,relatime,seclabel,pagesize=512M,size=0)
>
>    # grep -i huge /proc/meminfo
>    ...
>    HugePages_Total:      10
>    HugePages_Free:       10
>    HugePages_Rsvd:        0
>    HugePages_Surp:        0
>    Hugepagesize:     524288 kB
>    Hugetlb:         5242880 kB
>
> Drop the mount args with 'size=256M', so the filesystem capacity is sufficient
> regardless of HugeTLB page size.
>
> Fixes: 29750f71a9 ("hugetlb_cgroup: add hugetlb_cgroup reservation tests")
> Signed-off-by: Li Wang <liwang@redhat.com>
> Cc: David Hildenbrand <david@kernel.org>
> Cc: Mark Brown <broonie@kernel.org>
> Cc: Shuah Khan <shuah@kernel.org>
> Cc: Waiman Long <longman@redhat.com>
> ---
>   tools/testing/selftests/mm/charge_reserved_hugetlb.sh | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> index e1fe16bcbbe8..fa6713892d82 100755
> --- a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> +++ b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> @@ -290,7 +290,7 @@ function run_test() {
>     setup_cgroup "hugetlb_cgroup_test" "$cgroup_limit" "$reservation_limit"
>   
>     mkdir -p /mnt/huge
> -  mount -t hugetlbfs -o pagesize=${MB}M,size=256M none /mnt/huge
> +  mount -t hugetlbfs -o pagesize=${MB}M none /mnt/huge
>   
>     write_hugetlbfs_and_get_usage "hugetlb_cgroup_test" "$size" "$populate" \
>       "$write" "/mnt/huge/test" "$method" "$private" "$expect_failure" \
> @@ -344,7 +344,7 @@ function run_multiple_cgroup_test() {
>     setup_cgroup "hugetlb_cgroup_test2" "$cgroup_limit2" "$reservation_limit2"
>   
>     mkdir -p /mnt/huge
> -  mount -t hugetlbfs -o pagesize=${MB}M,size=256M none /mnt/huge
> +  mount -t hugetlbfs -o pagesize=${MB}M none /mnt/huge
>   
>     write_hugetlbfs_and_get_usage "hugetlb_cgroup_test1" "$size1" \
>       "$populate1" "$write1" "/mnt/huge/test1" "$method" "$private" \
Acked-by: Waiman Long <longman@redhat.com>




* Re: [PATCH v3 3/3] selftests/mm/charge_reserved_hugetlb.sh: add waits with timeout helper
  2025-12-21 12:26 ` [PATCH v3 3/3] selftests/mm/charge_reserved_hugetlb.sh: add waits with timeout helper Li Wang
@ 2025-12-21 20:30   ` Waiman Long
  2025-12-22  0:56     ` Li Wang
  2025-12-22 10:06   ` David Hildenbrand (Red Hat)
  1 sibling, 1 reply; 21+ messages in thread
From: Waiman Long @ 2025-12-21 20:30 UTC (permalink / raw)
  To: Li Wang, akpm, linux-kselftest, linux-kernel, linux-mm
  Cc: David Hildenbrand, Mark Brown, Shuah Khan


On 12/21/25 7:26 AM, Li Wang wrote:
> The hugetlb cgroup usage wait loops in charge_reserved_hugetlb.sh were
> unbounded and could hang forever if the expected cgroup file value never
> appears (e.g. due to write_to_hugetlbfs in Error mapping).
>
> --- Error log ---
>    # uname -r
>    6.12.0-xxx.el10.aarch64+64k
>
>    # ls /sys/kernel/mm/hugepages/hugepages-*
>    hugepages-16777216kB/  hugepages-2048kB/  hugepages-524288kB/
>
>    #./charge_reserved_hugetlb.sh -cgroup-v2
>    # -----------------------------------------
>    ...
>    # nr hugepages = 10
>    # writing cgroup limit: 5368709120
>    # writing reseravation limit: 5368709120
>    ...
>    # write_to_hugetlbfs: Error mapping the file: Cannot allocate memory
>    # Waiting for hugetlb memory reservation to reach size 2684354560.
>    # 0
>    # Waiting for hugetlb memory reservation to reach size 2684354560.
>    # 0
>    # Waiting for hugetlb memory reservation to reach size 2684354560.
>    # 0
>    # Waiting for hugetlb memory reservation to reach size 2684354560.
>    # 0
>    # Waiting for hugetlb memory reservation to reach size 2684354560.
>    # 0
>    # Waiting for hugetlb memory reservation to reach size 2684354560.
>    # 0
>    ...
>
> Introduce a small helper, wait_for_file_value(), and use it for:
>    - waiting for reservation usage to drop to 0,
>    - waiting for reservation usage to reach a given size,
>    - waiting for fault usage to reach a given size.
>
> This makes the waits consistent and adds a hard timeout (60 tries with
> 1s sleep) so the test fails instead of stalling indefinitely.
>
> Signed-off-by: Li Wang <liwang@redhat.com>
> Cc: David Hildenbrand <david@kernel.org>
> Cc: Mark Brown <broonie@kernel.org>
> Cc: Shuah Khan <shuah@kernel.org>
> Cc: Waiman Long <longman@redhat.com>
> ---
>   .../selftests/mm/charge_reserved_hugetlb.sh   | 51 +++++++++++--------
>   1 file changed, 30 insertions(+), 21 deletions(-)
>
> diff --git a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> index fa6713892d82..447769657634 100755
> --- a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> +++ b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> @@ -100,7 +100,7 @@ function setup_cgroup() {
>     echo writing cgroup limit: "$cgroup_limit"
>     echo "$cgroup_limit" >$cgroup_path/$name/hugetlb.${MB}MB.$fault_limit_file
>   
> -  echo writing reseravation limit: "$reservation_limit"
> +  echo writing reservation limit: "$reservation_limit"
>     echo "$reservation_limit" > \
>       $cgroup_path/$name/hugetlb.${MB}MB.$reservation_limit_file
>   
> @@ -112,41 +112,50 @@ function setup_cgroup() {
>     fi
>   }
>   
> +function wait_for_file_value() {
> +  local path="$1"
> +  local expect="$2"
> +  local max_tries=60
> +
> +  if [[ ! -r "$path" ]]; then
> +    echo "ERROR: cannot read '$path', missing or permission denied"
> +    return 1
> +  fi
> +
> +  for ((i=1; i<=max_tries; i++)); do
> +    local cur="$(cat "$path")"
> +    if [[ "$cur" == "$expect" ]]; then
> +      return 0
> +    fi
> +    echo "Waiting for $path to become '$expect' (current: '$cur') (try $i/$max_tries)"
> +    sleep 1
> +  done
> +
> +  echo "ERROR: timeout waiting for $path to become '$expect'"
> +  return 1
> +}
> +
>   function wait_for_hugetlb_memory_to_get_depleted() {
>     local cgroup="$1"
>     local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
> -  # Wait for hugetlbfs memory to get depleted.
> -  while [ $(cat $path) != 0 ]; do
> -    echo Waiting for hugetlb memory to get depleted.
> -    cat $path
> -    sleep 0.5
> -  done
> +
> +  wait_for_file_value "$path" "0"
>   }
>   
>   function wait_for_hugetlb_memory_to_get_reserved() {
>     local cgroup="$1"
>     local size="$2"
> -
>     local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
> -  # Wait for hugetlbfs memory to get written.
> -  while [ $(cat $path) != $size ]; do
> -    echo Waiting for hugetlb memory reservation to reach size $size.
> -    cat $path
> -    sleep 0.5
> -  done
> +
> +  wait_for_file_value "$path" "$size"
>   }
>   
>   function wait_for_hugetlb_memory_to_get_written() {
>     local cgroup="$1"
>     local size="$2"
> -
>     local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$fault_usage_file"
> -  # Wait for hugetlbfs memory to get written.
> -  while [ $(cat $path) != $size ]; do
> -    echo Waiting for hugetlb memory to reach size $size.
> -    cat $path
> -    sleep 0.5
> -  done
> +
> +  wait_for_file_value "$path" "$size"
>   }
>   
>   function write_hugetlbfs_and_get_usage() {

wait_for_file_value() now returns 0 on success and 1 on timeout.
However, none of the callers of the wait_for_hugetlb_memory_* helpers
check the return value and act accordingly. Are we expecting that the
test will show failure because the wait didn't complete, or should we
explicitly exit with the ksft_fail (1) value?

Cheers,
Longman




* Re: [PATCH v3 1/3] selftests/mm/write_to_hugetlbfs: parse -s as size_t
  2025-12-21 12:26 ` [PATCH v3 1/3] selftests/mm/write_to_hugetlbfs: parse -s as size_t Li Wang
  2025-12-21 20:23   ` Waiman Long
@ 2025-12-21 22:10   ` David Laight
  2025-12-22  1:45     ` Li Wang
  1 sibling, 1 reply; 21+ messages in thread
From: David Laight @ 2025-12-21 22:10 UTC (permalink / raw)
  To: Li Wang
  Cc: akpm, linux-kselftest, linux-kernel, linux-mm, David Hildenbrand,
	Mark Brown, Shuah Khan, Waiman Long

On Sun, 21 Dec 2025 20:26:37 +0800
Li Wang <liwang@redhat.com> wrote:

> write_to_hugetlbfs currently parses the -s size argument with atoi()
> into an int. This silently accepts malformed input, cannot report overflow,
> and can truncate large sizes.

And sscanf() will just ignore invalid trailing characters.
Probably much the same as atoi() apart from a leading '-'.

Maybe you could use "%zu%c" and check the count is 1 - but I bet
some static checker won't like that.

Using strtoul() and checking the terminating character is 'reasonable',
but won't detect overflow.

	David

> 
> --- Error log ---
>  # uname -r
>  6.12.0-xxx.el10.aarch64+64k
> 
>  # ls /sys/kernel/mm/hugepages/hugepages-*
>  hugepages-16777216kB/  hugepages-2048kB/  hugepages-524288kB/
> 
>  #./charge_reserved_hugetlb.sh -cgroup-v2
>  # -----------------------------------------
>  ...
>  # nr hugepages = 10
>  # writing cgroup limit: 5368709120
>  # writing reseravation limit: 5368709120
>  ...
>  # Writing to this path: /mnt/huge/test
>  # Writing this size: -1610612736        <--------
> 
> Switch the size variable to size_t and parse -s with sscanf("%zu", ...).
> Also print the size using %zu.
> 
> This avoids incorrect behavior with large -s values and makes the utility
> more robust.
> 
> Signed-off-by: Li Wang <liwang@redhat.com>
> Cc: David Hildenbrand <david@kernel.org>
> Cc: Mark Brown <broonie@kernel.org>
> Cc: Shuah Khan <shuah@kernel.org>
> Cc: Waiman Long <longman@redhat.com>
> Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
> ---
>  tools/testing/selftests/mm/write_to_hugetlbfs.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/tools/testing/selftests/mm/write_to_hugetlbfs.c b/tools/testing/selftests/mm/write_to_hugetlbfs.c
> index 34c91f7e6128..ecb5f7619960 100644
> --- a/tools/testing/selftests/mm/write_to_hugetlbfs.c
> +++ b/tools/testing/selftests/mm/write_to_hugetlbfs.c
> @@ -68,7 +68,7 @@ int main(int argc, char **argv)
>  	int key = 0;
>  	int *ptr = NULL;
>  	int c = 0;
> -	int size = 0;
> +	size_t size = 0;
>  	char path[256] = "";
>  	enum method method = MAX_METHOD;
>  	int want_sleep = 0, private = 0;
> @@ -86,7 +86,10 @@ int main(int argc, char **argv)
>  	while ((c = getopt(argc, argv, "s:p:m:owlrn")) != -1) {
>  		switch (c) {
>  		case 's':
> -			size = atoi(optarg);
> +			if (sscanf(optarg, "%zu", &size) != 1) {
> +				perror("Invalid -s.");
> +				exit_usage();
> +			}
>  			break;
>  		case 'p':
>  			strncpy(path, optarg, sizeof(path) - 1);
> @@ -131,7 +134,7 @@ int main(int argc, char **argv)
>  	}
>  
>  	if (size != 0) {
> -		printf("Writing this size: %d\n", size);
> +		printf("Writing this size: %zu\n", size);
>  	} else {
>  		errno = EINVAL;
>  		perror("size not found");




* Re: [PATCH v3 3/3] selftests/mm/charge_reserved_hugetlb.sh: add waits with timeout helper
  2025-12-21 20:30   ` Waiman Long
@ 2025-12-22  0:56     ` Li Wang
  2025-12-22  3:54       ` Waiman Long
  0 siblings, 1 reply; 21+ messages in thread
From: Li Wang @ 2025-12-22  0:56 UTC (permalink / raw)
  To: Waiman Long
  Cc: akpm, linux-kselftest, linux-kernel, linux-mm, David Hildenbrand,
	Mark Brown, Shuah Khan

On Mon, Dec 22, 2025 at 4:30 AM Waiman Long <llong@redhat.com> wrote:
>
>
> On 12/21/25 7:26 AM, Li Wang wrote:
> > The hugetlb cgroup usage wait loops in charge_reserved_hugetlb.sh were
> > unbounded and could hang forever if the expected cgroup file value never
> > appears (e.g. due to write_to_hugetlbfs in Error mapping).
> >
> > --- Error log ---
> >    # uname -r
> >    6.12.0-xxx.el10.aarch64+64k
> >
> >    # ls /sys/kernel/mm/hugepages/hugepages-*
> >    hugepages-16777216kB/  hugepages-2048kB/  hugepages-524288kB/
> >
> >    #./charge_reserved_hugetlb.sh -cgroup-v2
> >    # -----------------------------------------
> >    ...
> >    # nr hugepages = 10
> >    # writing cgroup limit: 5368709120
> >    # writing reseravation limit: 5368709120
> >    ...
> >    # write_to_hugetlbfs: Error mapping the file: Cannot allocate memory
> >    # Waiting for hugetlb memory reservation to reach size 2684354560.
> >    # 0
> >    # Waiting for hugetlb memory reservation to reach size 2684354560.
> >    # 0
> >    # Waiting for hugetlb memory reservation to reach size 2684354560.
> >    # 0
> >    # Waiting for hugetlb memory reservation to reach size 2684354560.
> >    # 0
> >    # Waiting for hugetlb memory reservation to reach size 2684354560.
> >    # 0
> >    # Waiting for hugetlb memory reservation to reach size 2684354560.
> >    # 0
> >    ...
> >
> > Introduce a small helper, wait_for_file_value(), and use it for:
> >    - waiting for reservation usage to drop to 0,
> >    - waiting for reservation usage to reach a given size,
> >    - waiting for fault usage to reach a given size.
> >
> > This makes the waits consistent and adds a hard timeout (60 tries with
> > 1s sleep) so the test fails instead of stalling indefinitely.
> >
> > Signed-off-by: Li Wang <liwang@redhat.com>
> > Cc: David Hildenbrand <david@kernel.org>
> > Cc: Mark Brown <broonie@kernel.org>
> > Cc: Shuah Khan <shuah@kernel.org>
> > Cc: Waiman Long <longman@redhat.com>
> > ---
> >   .../selftests/mm/charge_reserved_hugetlb.sh   | 51 +++++++++++--------
> >   1 file changed, 30 insertions(+), 21 deletions(-)
> >
> > diff --git a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> > index fa6713892d82..447769657634 100755
> > --- a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> > +++ b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> > @@ -100,7 +100,7 @@ function setup_cgroup() {
> >     echo writing cgroup limit: "$cgroup_limit"
> >     echo "$cgroup_limit" >$cgroup_path/$name/hugetlb.${MB}MB.$fault_limit_file
> >
> > -  echo writing reseravation limit: "$reservation_limit"
> > +  echo writing reservation limit: "$reservation_limit"
> >     echo "$reservation_limit" > \
> >       $cgroup_path/$name/hugetlb.${MB}MB.$reservation_limit_file
> >
> > @@ -112,41 +112,50 @@ function setup_cgroup() {
> >     fi
> >   }
> >
> > +function wait_for_file_value() {
> > +  local path="$1"
> > +  local expect="$2"
> > +  local max_tries=60
> > +
> > +  if [[ ! -r "$path" ]]; then
> > +    echo "ERROR: cannot read '$path', missing or permission denied"
> > +    return 1
> > +  fi
> > +
> > +  for ((i=1; i<=max_tries; i++)); do
> > +    local cur="$(cat "$path")"
> > +    if [[ "$cur" == "$expect" ]]; then
> > +      return 0
> > +    fi
> > +    echo "Waiting for $path to become '$expect' (current: '$cur') (try $i/$max_tries)"
> > +    sleep 1
> > +  done
> > +
> > +  echo "ERROR: timeout waiting for $path to become '$expect'"
> > +  return 1
> > +}
> > +
> >   function wait_for_hugetlb_memory_to_get_depleted() {
> >     local cgroup="$1"
> >     local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
> > -  # Wait for hugetlbfs memory to get depleted.
> > -  while [ $(cat $path) != 0 ]; do
> > -    echo Waiting for hugetlb memory to get depleted.
> > -    cat $path
> > -    sleep 0.5
> > -  done
> > +
> > +  wait_for_file_value "$path" "0"
> >   }
> >
> >   function wait_for_hugetlb_memory_to_get_reserved() {
> >     local cgroup="$1"
> >     local size="$2"
> > -
> >     local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
> > -  # Wait for hugetlbfs memory to get written.
> > -  while [ $(cat $path) != $size ]; do
> > -    echo Waiting for hugetlb memory reservation to reach size $size.
> > -    cat $path
> > -    sleep 0.5
> > -  done
> > +
> > +  wait_for_file_value "$path" "$size"
> >   }
> >
> >   function wait_for_hugetlb_memory_to_get_written() {
> >     local cgroup="$1"
> >     local size="$2"
> > -
> >     local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$fault_usage_file"
> > -  # Wait for hugetlbfs memory to get written.
> > -  while [ $(cat $path) != $size ]; do
> > -    echo Waiting for hugetlb memory to reach size $size.
> > -    cat $path
> > -    sleep 0.5
> > -  done
> > +
> > +  wait_for_file_value "$path" "$size"
> >   }
> >
> >   function write_hugetlbfs_and_get_usage() {
>
> wait_for_file_value() now returns 0 on success and 1 on timeout.
> However, none of the callers of the wait_for_hugetlb_memory* helpers
> are checking their return values and acting accordingly. Are we
> expecting that the test will show failure because the waiting isn't
> completed, or should we explicitly exit with the ksft_fail (1) value?

Hmm, it seems the test shouldn't exit too early.

Since the wait_for_hugetlb_memory* helpers only poll the file value for
up to 60s, on a timeout we still need to keep going, because the test
has to run its CLEANUP work and exit/report from there.

The key point of each subtest is to save the '$write_result' value and
examine it afterwards; that is what controls when the whole test exits.

e.g.

This is an intentional error test:

# ./charge_reserved_hugetlb.sh -cgroup-v2
CLEANUP DONE
...
Writing to this path: /mnt/huge/test
Writing this size: 2684354560
Not populating.
Not writing to memory.
Using method=0
Shared mapping.
RESERVE mapping.
Allocating using HUGETLBFS.
write_to_hugetlbfs: Error mapping the file: Cannot allocate memory
Waiting for /sys/fs/cgroup/hugetlb_cgroup_test/hugetlb.512MB.rsvd.current
to become '2684354560' (current: '0') (try 1/60)
Waiting for /sys/fs/cgroup/hugetlb_cgroup_test/hugetlb.512MB.rsvd.current
to become '2684354560' (current: '0') (try 2/60)
Waiting for /sys/fs/cgroup/hugetlb_cgroup_test/hugetlb.512MB.rsvd.current
to become '2684354560' (current: '0') (try 3/60)
Waiting for /sys/fs/cgroup/hugetlb_cgroup_test/hugetlb.512MB.rsvd.current
to become '2684354560' (current: '0') (try 4/60)
...
Waiting for /sys/fs/cgroup/hugetlb_cgroup_test/hugetlb.512MB.rsvd.current
to become '2684354560' (current: '0') (try 60/60)
ERROR: timeout waiting for
/sys/fs/cgroup/hugetlb_cgroup_test/hugetlb.512MB.rsvd.current to
become '2684354560'
After write:
hugetlb_usage=0
reserved_usage=0
0
0
Memory charged to hugtlb=0
Memory charged to reservation=0
expected (2684354560) != actual (0): Reserved memory not charged to
reservation usage.
CLEANUP DONE
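For illustration, the wait_for_file_value() pattern from the patch can be
exercised standalone against an ordinary temp file (a sketch with a
shortened timeout and made-up values, not the real cgroup paths):

```shell
#!/bin/bash
# Standalone sketch of the wait-with-timeout pattern, polling an
# ordinary temp file instead of a hugetlb cgroup usage file.
wait_for_file_value() {
  local path="$1" expect="$2" max_tries=5   # 60 in the real test

  for ((i = 1; i <= max_tries; i++)); do
    [[ "$(cat "$path")" == "$expect" ]] && return 0
    echo "Waiting for $path to become '$expect' (try $i/$max_tries)"
    sleep 1
  done
  echo "ERROR: timeout waiting for $path to become '$expect'"
  return 1   # the test deliberately keeps going after this
}

tmp=$(mktemp)
echo 0 > "$tmp"
( sleep 2; echo 42 > "$tmp" ) &   # the value appears asynchronously
wait_for_file_value "$tmp" 42
status=$?
wait                              # reap the background writer
rm -f "$tmp"
echo "wait status: $status"
```

On a timeout the function prints an error and returns 1, but nothing
exits, which matches the behavior described above: the failure is
ultimately reported by the later '$write_result' comparison.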


-- 
Regards,
Li Wang



^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 1/3] selftests/mm/write_to_hugetlbfs: parse -s as size_t
  2025-12-21 22:10   ` David Laight
@ 2025-12-22  1:45     ` Li Wang
  2025-12-22  9:48       ` David Laight
  0 siblings, 1 reply; 21+ messages in thread
From: Li Wang @ 2025-12-22  1:45 UTC (permalink / raw)
  To: David Laight
  Cc: akpm, linux-kselftest, linux-kernel, linux-mm, David Hildenbrand,
	Mark Brown, Shuah Khan, Waiman Long

[-- Attachment #1: Type: text/plain, Size: 1025 bytes --]

On Mon, Dec 22, 2025 at 6:11 AM David Laight <david.laight.linux@gmail.com>
wrote:

> On Sun, 21 Dec 2025 20:26:37 +0800
> Li Wang <liwang@redhat.com> wrote:
>
> > write_to_hugetlbfs currently parses the -s size argument with atoi()
> > into an int. This silently accepts malformed input, cannot report
> overflow,
> > and can truncate large sizes.
>
> And sscanf() will just ignore invalid trailing characters.
> Probably much the same as atoi() apart from a leading '-'.
>
> Maybe you could use "%zu%c" and check the count is 1 - but I bet
> some static checker won't like that.
>

Yes, that would be stronger, since it would reject trailing garbage.
But for a selftest this is probably sufficient: switching to size_t and
parsing with "%zu" already avoids the int truncation issue.

@Andrew Morton <akpm@linux-foundation.org>,

Hi Andrew, I noticed you have added the patches to your mm-new branch.
Let me know if you prefer the "%zu%c" enhancement in a new version.


-- 
Regards,
Li Wang


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 3/3] selftests/mm/charge_reserved_hugetlb.sh: add waits with timeout helper
  2025-12-22  0:56     ` Li Wang
@ 2025-12-22  3:54       ` Waiman Long
  0 siblings, 0 replies; 21+ messages in thread
From: Waiman Long @ 2025-12-22  3:54 UTC (permalink / raw)
  To: Li Wang, Waiman Long
  Cc: akpm, linux-kselftest, linux-kernel, linux-mm, David Hildenbrand,
	Mark Brown, Shuah Khan

[-- Attachment #1: Type: text/plain, Size: 7349 bytes --]


On 12/21/25 7:56 PM, Li Wang wrote:
> On Mon, Dec 22, 2025 at 4:30 AM Waiman Long<llong@redhat.com> wrote:
>>
>> On 12/21/25 7:26 AM, Li Wang wrote:
>>> The hugetlb cgroup usage wait loops in charge_reserved_hugetlb.sh were
>>> unbounded and could hang forever if the expected cgroup file value never
>>> appears (e.g. due to write_to_hugetlbfs in Error mapping).
>>>
>>> --- Error log ---
>>>     # uname -r
>>>     6.12.0-xxx.el10.aarch64+64k
>>>
>>>     # ls /sys/kernel/mm/hugepages/hugepages-*
>>>     hugepages-16777216kB/  hugepages-2048kB/  hugepages-524288kB/
>>>
>>>     #./charge_reserved_hugetlb.sh -cgroup-v2
>>>     # -----------------------------------------
>>>     ...
>>>     # nr hugepages = 10
>>>     # writing cgroup limit: 5368709120
>>>     # writing reseravation limit: 5368709120
>>>     ...
>>>     # write_to_hugetlbfs: Error mapping the file: Cannot allocate memory
>>>     # Waiting for hugetlb memory reservation to reach size 2684354560.
>>>     # 0
>>>     # Waiting for hugetlb memory reservation to reach size 2684354560.
>>>     # 0
>>>     # Waiting for hugetlb memory reservation to reach size 2684354560.
>>>     # 0
>>>     # Waiting for hugetlb memory reservation to reach size 2684354560.
>>>     # 0
>>>     # Waiting for hugetlb memory reservation to reach size 2684354560.
>>>     # 0
>>>     # Waiting for hugetlb memory reservation to reach size 2684354560.
>>>     # 0
>>>     ...
>>>
>>> Introduce a small helper, wait_for_file_value(), and use it for:
>>>     - waiting for reservation usage to drop to 0,
>>>     - waiting for reservation usage to reach a given size,
>>>     - waiting for fault usage to reach a given size.
>>>
>>> This makes the waits consistent and adds a hard timeout (60 tries with
>>> 1s sleep) so the test fails instead of stalling indefinitely.
>>>
>>> Signed-off-by: Li Wang<liwang@redhat.com>
>>> Cc: David Hildenbrand<david@kernel.org>
>>> Cc: Mark Brown<broonie@kernel.org>
>>> Cc: Shuah Khan<shuah@kernel.org>
>>> Cc: Waiman Long<longman@redhat.com>
>>> ---
>>>    .../selftests/mm/charge_reserved_hugetlb.sh   | 51 +++++++++++--------
>>>    1 file changed, 30 insertions(+), 21 deletions(-)
>>>
>>> diff --git a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
>>> index fa6713892d82..447769657634 100755
>>> --- a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
>>> +++ b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
>>> @@ -100,7 +100,7 @@ function setup_cgroup() {
>>>      echo writing cgroup limit: "$cgroup_limit"
>>>      echo "$cgroup_limit" >$cgroup_path/$name/hugetlb.${MB}MB.$fault_limit_file
>>>
>>> -  echo writing reseravation limit: "$reservation_limit"
>>> +  echo writing reservation limit: "$reservation_limit"
>>>      echo "$reservation_limit" > \
>>>        $cgroup_path/$name/hugetlb.${MB}MB.$reservation_limit_file
>>>
>>> @@ -112,41 +112,50 @@ function setup_cgroup() {
>>>      fi
>>>    }
>>>
>>> +function wait_for_file_value() {
>>> +  local path="$1"
>>> +  local expect="$2"
>>> +  local max_tries=60
>>> +
>>> +  if [[ ! -r "$path" ]]; then
>>> +    echo "ERROR: cannot read '$path', missing or permission denied"
>>> +    return 1
>>> +  fi
>>> +
>>> +  for ((i=1; i<=max_tries; i++)); do
>>> +    local cur="$(cat "$path")"
>>> +    if [[ "$cur" == "$expect" ]]; then
>>> +      return 0
>>> +    fi
>>> +    echo "Waiting for $path to become '$expect' (current: '$cur') (try $i/$max_tries)"
>>> +    sleep 1
>>> +  done
>>> +
>>> +  echo "ERROR: timeout waiting for $path to become '$expect'"
>>> +  return 1
>>> +}
>>> +
>>>    function wait_for_hugetlb_memory_to_get_depleted() {
>>>      local cgroup="$1"
>>>      local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
>>> -  # Wait for hugetlbfs memory to get depleted.
>>> -  while [ $(cat $path) != 0 ]; do
>>> -    echo Waiting for hugetlb memory to get depleted.
>>> -    cat $path
>>> -    sleep 0.5
>>> -  done
>>> +
>>> +  wait_for_file_value "$path" "0"
>>>    }
>>>
>>>    function wait_for_hugetlb_memory_to_get_reserved() {
>>>      local cgroup="$1"
>>>      local size="$2"
>>> -
>>>      local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
>>> -  # Wait for hugetlbfs memory to get written.
>>> -  while [ $(cat $path) != $size ]; do
>>> -    echo Waiting for hugetlb memory reservation to reach size $size.
>>> -    cat $path
>>> -    sleep 0.5
>>> -  done
>>> +
>>> +  wait_for_file_value "$path" "$size"
>>>    }
>>>
>>>    function wait_for_hugetlb_memory_to_get_written() {
>>>      local cgroup="$1"
>>>      local size="$2"
>>> -
>>>      local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$fault_usage_file"
>>> -  # Wait for hugetlbfs memory to get written.
>>> -  while [ $(cat $path) != $size ]; do
>>> -    echo Waiting for hugetlb memory to reach size $size.
>>> -    cat $path
>>> -    sleep 0.5
>>> -  done
>>> +
>>> +  wait_for_file_value "$path" "$size"
>>>    }
>>>
>>>    function write_hugetlbfs_and_get_usage() {
>> wait_for_file_value() now returns 0 on success and 1 on timeout.
>> However, none of the callers of the wait_for_hugetlb_memory* helpers
>> are checking their return values and acting accordingly. Are we
>> expecting that the test will show failure because the waiting isn't
>> completed, or should we explicitly exit with the ksft_fail (1) value?
> Hmm, it seems the test shouldn't exit too early.
>
> Since the wait_for_hugetlb_memory* helpers only poll the file value for
> up to 60s, on a timeout we still need to keep going, because the test
> has to run its CLEANUP work and exit/report from there.
>
> The key point of each subtest is to save the '$write_result' value and
> examine it afterwards; that is what controls when the whole test exits.
>
> e.g.
>
> This is an intentional error test:
>
> # ./charge_reserved_hugetlb.sh -cgroup-v2
> CLEANUP DONE
> ...
> Writing to this path: /mnt/huge/test
> Writing this size: 2684354560
> Not populating.
> Not writing to memory.
> Using method=0
> Shared mapping.
> RESERVE mapping.
> Allocating using HUGETLBFS.
> write_to_hugetlbfs: Error mapping the file: Cannot allocate memory
> Waiting for /sys/fs/cgroup/hugetlb_cgroup_test/hugetlb.512MB.rsvd.current
> to become '2684354560' (current: '0') (try 1/60)
> Waiting for /sys/fs/cgroup/hugetlb_cgroup_test/hugetlb.512MB.rsvd.current
> to become '2684354560' (current: '0') (try 2/60)
> Waiting for /sys/fs/cgroup/hugetlb_cgroup_test/hugetlb.512MB.rsvd.current
> to become '2684354560' (current: '0') (try 3/60)
> Waiting for /sys/fs/cgroup/hugetlb_cgroup_test/hugetlb.512MB.rsvd.current
> to become '2684354560' (current: '0') (try 4/60)
> ...
> Waiting for /sys/fs/cgroup/hugetlb_cgroup_test/hugetlb.512MB.rsvd.current
> to become '2684354560' (current: '0') (try 60/60)
> ERROR: timeout waiting for
> /sys/fs/cgroup/hugetlb_cgroup_test/hugetlb.512MB.rsvd.current to
> become '2684354560'
> After write:
> hugetlb_usage=0
> reserved_usage=0
> 0
> 0
> Memory charged to hugtlb=0
> Memory charged to reservation=0
> expected (2684354560) != actual (0): Reserved memory not charged to
> reservation usage.
> CLEANUP DONE

Thanks for running a test case. As long as the test still reports a
failure, it is fine with me. I just want to note that the return
value of wait_for_file_value() isn't currently used at all.

Cheers, Longman


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 1/3] selftests/mm/write_to_hugetlbfs: parse -s as size_t
  2025-12-22  1:45     ` Li Wang
@ 2025-12-22  9:48       ` David Laight
  2025-12-22 10:56         ` Li Wang
  0 siblings, 1 reply; 21+ messages in thread
From: David Laight @ 2025-12-22  9:48 UTC (permalink / raw)
  To: Li Wang
  Cc: akpm, linux-kselftest, linux-kernel, linux-mm, David Hildenbrand,
	Mark Brown, Shuah Khan, Waiman Long

On Mon, 22 Dec 2025 09:45:41 +0800
Li Wang <liwang@redhat.com> wrote:

> On Mon, Dec 22, 2025 at 6:11 AM David Laight <david.laight.linux@gmail.com>
> wrote:
> 
> > On Sun, 21 Dec 2025 20:26:37 +0800
> > Li Wang <liwang@redhat.com> wrote:
> >  
> > > write_to_hugetlbfs currently parses the -s size argument with atoi()
> > > into an int. This silently accepts malformed input, cannot report  
> > overflow,  
> > > and can truncate large sizes.  
> >
> > And sscanf() will just ignore invalid trailing characters.
> > Probably much the same as atoi() apart from a leading '-'.
> >
> > Maybe you could use "%zu%c" and check the count is 1 - but I bet
> > some static checker won't like that.
> >  
> 
> Yes, that would be stronger, since it would reject trailing garbage.
> But for a selftest this is probably sufficient: switching to size_t and
> parsing with "%zu" already avoids the int truncation issue.

Have you checked what sscanf() does with an overlong digit string?
I'd guess that it just processes all the digits and then masks the result
to fit (like the kernel one does).

In reality scanf() is 'not the function you are looking for'.

IIRC the 'SUS' (used to) say that this was absolutely fine for command
line parsing for 'standard utilities'.

It is best to use strtoul() and check the 'end' character is '\0'.

	David

> 
> @Andrew Morton <akpm@linux-foundation.org>,
> 
> Hi Andrew, I noticed you have addedthe patches to your mm-new branch,
> Let me know if you prefer the "%zu%c" enhancement in a new version.
> 
> 



^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 2/3] selftests/mm/charge_reserved_hugetlb: drop mount size for hugetlbfs
  2025-12-21 12:26 ` [PATCH v3 2/3] selftests/mm/charge_reserved_hugetlb: drop mount size for hugetlbfs Li Wang
  2025-12-21 20:24   ` Waiman Long
@ 2025-12-22 10:01   ` David Hildenbrand (Red Hat)
  2025-12-22 19:08     ` Andrew Morton
  1 sibling, 1 reply; 21+ messages in thread
From: David Hildenbrand (Red Hat) @ 2025-12-22 10:01 UTC (permalink / raw)
  To: Li Wang, akpm, linux-kselftest, linux-kernel, linux-mm
  Cc: Mark Brown, Shuah Khan, Waiman Long

On 12/21/25 13:26, Li Wang wrote:
> charge_reserved_hugetlb.sh mounts a hugetlbfs instance at /mnt/huge with
> a fixed size of 256M. On systems with large base hugepages (e.g. 512MB),
> this is smaller than a single hugepage, so the hugetlbfs mount ends up
> with zero capacity (often visible as size=0 in mount output).
> 
> As a result, write_to_hugetlbfs fails with ENOMEM and the test can hang
> waiting for progress.
> 
> --- Error log ---
>    # uname -r
>    6.12.0-xxx.el10.aarch64+64k
> 
>    #./charge_reserved_hugetlb.sh -cgroup-v2
>    # -----------------------------------------
>    ...
>    # nr hugepages = 10
>    # writing cgroup limit: 5368709120
>    # writing reseravation limit: 5368709120
>    ...
>    # write_to_hugetlbfs: Error mapping the file: Cannot allocate memory
>    # Waiting for hugetlb memory reservation to reach size 2684354560.
>    # 0
>    # Waiting for hugetlb memory reservation to reach size 2684354560.
>    # 0
>    ...
> 
>    # mount |grep /mnt/huge
>    none on /mnt/huge type hugetlbfs (rw,relatime,seclabel,pagesize=512M,size=0)
> 
>    # grep -i huge /proc/meminfo
>    ...
>    HugePages_Total:      10
>    HugePages_Free:       10
>    HugePages_Rsvd:        0
>    HugePages_Surp:        0
>    Hugepagesize:     524288 kB
>    Hugetlb:         5242880 kB
> 
> Drop the mount args with 'size=256M', so the filesystem capacity is sufficient
> regardless of HugeTLB page size.
> 
> Fixes: 29750f71a9 ("hugetlb_cgroup: add hugetlb_cgroup reservation tests")

Likely Andrew should add a CC of stable

Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>

-- 
Cheers

David


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 3/3] selftests/mm/charge_reserved_hugetlb.sh: add waits with timeout helper
  2025-12-21 12:26 ` [PATCH v3 3/3] selftests/mm/charge_reserved_hugetlb.sh: add waits with timeout helper Li Wang
  2025-12-21 20:30   ` Waiman Long
@ 2025-12-22 10:06   ` David Hildenbrand (Red Hat)
  1 sibling, 0 replies; 21+ messages in thread
From: David Hildenbrand (Red Hat) @ 2025-12-22 10:06 UTC (permalink / raw)
  To: Li Wang, akpm, linux-kselftest, linux-kernel, linux-mm
  Cc: Mark Brown, Shuah Khan, Waiman Long

On 12/21/25 13:26, Li Wang wrote:
> The hugetlb cgroup usage wait loops in charge_reserved_hugetlb.sh were
> unbounded and could hang forever if the expected cgroup file value never
> appears (e.g. due to write_to_hugetlbfs in Error mapping).
> 
> --- Error log ---
>    # uname -r
>    6.12.0-xxx.el10.aarch64+64k
> 
>    # ls /sys/kernel/mm/hugepages/hugepages-*
>    hugepages-16777216kB/  hugepages-2048kB/  hugepages-524288kB/
> 
>    #./charge_reserved_hugetlb.sh -cgroup-v2
>    # -----------------------------------------
>    ...
>    # nr hugepages = 10
>    # writing cgroup limit: 5368709120
>    # writing reseravation limit: 5368709120
>    ...
>    # write_to_hugetlbfs: Error mapping the file: Cannot allocate memory
>    # Waiting for hugetlb memory reservation to reach size 2684354560.
>    # 0
>    # Waiting for hugetlb memory reservation to reach size 2684354560.
>    # 0
>    # Waiting for hugetlb memory reservation to reach size 2684354560.
>    # 0
>    # Waiting for hugetlb memory reservation to reach size 2684354560.
>    # 0
>    # Waiting for hugetlb memory reservation to reach size 2684354560.
>    # 0
>    # Waiting for hugetlb memory reservation to reach size 2684354560.
>    # 0
>    ...
> 
> Introduce a small helper, wait_for_file_value(), and use it for:
>    - waiting for reservation usage to drop to 0,
>    - waiting for reservation usage to reach a given size,
>    - waiting for fault usage to reach a given size.
> 
> This makes the waits consistent and adds a hard timeout (60 tries with
> 1s sleep) so the test fails instead of stalling indefinitely.
> 
> Signed-off-by: Li Wang <liwang@redhat.com>
> Cc: David Hildenbrand <david@kernel.org>
> Cc: Mark Brown <broonie@kernel.org>
> Cc: Shuah Khan <shuah@kernel.org>
> Cc: Waiman Long <longman@redhat.com>
> ---
>   .../selftests/mm/charge_reserved_hugetlb.sh   | 51 +++++++++++--------
>   1 file changed, 30 insertions(+), 21 deletions(-)
> 
> diff --git a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> index fa6713892d82..447769657634 100755
> --- a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> +++ b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> @@ -100,7 +100,7 @@ function setup_cgroup() {
>     echo writing cgroup limit: "$cgroup_limit"
>     echo "$cgroup_limit" >$cgroup_path/$name/hugetlb.${MB}MB.$fault_limit_file
>   
> -  echo writing reseravation limit: "$reservation_limit"
> +  echo writing reservation limit: "$reservation_limit"
>     echo "$reservation_limit" > \
>       $cgroup_path/$name/hugetlb.${MB}MB.$reservation_limit_file
>   
> @@ -112,41 +112,50 @@ function setup_cgroup() {
>     fi
>   }
>   
> +function wait_for_file_value() {
> +  local path="$1"
> +  local expect="$2"
> +  local max_tries=60
> +
> +  if [[ ! -r "$path" ]]; then
> +    echo "ERROR: cannot read '$path', missing or permission denied"
> +    return 1
> +  fi
> +
> +  for ((i=1; i<=max_tries; i++)); do
> +    local cur="$(cat "$path")"
> +    if [[ "$cur" == "$expect" ]]; then
> +      return 0
> +    fi
> +    echo "Waiting for $path to become '$expect' (current: '$cur') (try $i/$max_tries)"
> +    sleep 1
> +  done
> +
> +  echo "ERROR: timeout waiting for $path to become '$expect'"
> +  return 1
> +}
> +
>   function wait_for_hugetlb_memory_to_get_depleted() {
>     local cgroup="$1"
>     local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
> -  # Wait for hugetlbfs memory to get depleted.
> -  while [ $(cat $path) != 0 ]; do
> -    echo Waiting for hugetlb memory to get depleted.
> -    cat $path
> -    sleep 0.5
> -  done
> +
> +  wait_for_file_value "$path" "0"
>   }
>   
>   function wait_for_hugetlb_memory_to_get_reserved() {
>     local cgroup="$1"
>     local size="$2"
> -
>     local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
> -  # Wait for hugetlbfs memory to get written.
> -  while [ $(cat $path) != $size ]; do
> -    echo Waiting for hugetlb memory reservation to reach size $size.
> -    cat $path
> -    sleep 0.5
> -  done
> +
> +  wait_for_file_value "$path" "$size"
>   }
>   
>   function wait_for_hugetlb_memory_to_get_written() {
>     local cgroup="$1"
>     local size="$2"
> -
>     local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$fault_usage_file"
> -  # Wait for hugetlbfs memory to get written.
> -  while [ $(cat $path) != $size ]; do
> -    echo Waiting for hugetlb memory to reach size $size.
> -    cat $path
> -    sleep 0.5
> -  done
> +
> +  wait_for_file_value "$path" "$size"
>   }
>   
>   function write_hugetlbfs_and_get_usage() {

It would indeed be cleaner to propagate the error and make it clearer 
how we then cleanly abort the test.

But if it keeps working for the time being, fine with me

Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>

-- 
Cheers

David


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 1/3] selftests/mm/write_to_hugetlbfs: parse -s as size_t
  2025-12-22  9:48       ` David Laight
@ 2025-12-22 10:56         ` Li Wang
  2025-12-23  2:05           ` Andrew Morton
  0 siblings, 1 reply; 21+ messages in thread
From: Li Wang @ 2025-12-22 10:56 UTC (permalink / raw)
  To: David Laight, Andrew Morton
  Cc: linux-kselftest, linux-kernel, linux-mm, David Hildenbrand,
	Mark Brown, Shuah Khan, Waiman Long

David Laight <david.laight.linux@gmail.com> wrote:

> > > Maybe you could use "%zu%c" and check the count is 1 - but I bet
> > > some static checker won't like that.
> > >
> >
> > Yes, that would be stronger, since it would reject trailing garbage.
> > But for a selftest this is probably sufficient: switching to size_t and
> > parsing with "%zu" already avoids the int truncation issue.
>
> Have you checked what sscanf() does with an overlong digit string?
> I'd guess that it just processes all the digits and then masks the result
> to fit (like the kernel one does).

It clamps the value to SIZE_MAX.

From my test:

# ./write_to_hugetlbfs -m 0 -s 99999999999999999999999999 -p /mnt/huge/test -o
Writing to this path: /mnt/huge/test
Writing this size: 18446744073709551615    <---------- SIZE_MAX
Populating.
Not writing to memory.
Using method=0
Shared mapping.
RESERVE mapping.
Allocating using HUGETLBFS.
write_to_hugetlbfs: Error mapping the file: Invalid argument


> In reality scanf() is 'not the function you are looking for'.
>
> IIRC the 'SUS' (used to) say that this was absolutely fine for command
> line parsing for 'standard utilities'.
>
> It is best to use strtoul() and check the 'end' character is '\0'.

Hmm, that sounds like we need to go back to the patch V1 [1] method.
But I am not sure; @Andrew Morton, what do you think?

--- a/tools/testing/selftests/mm/write_to_hugetlbfs.c
+++ b/tools/testing/selftests/mm/write_to_hugetlbfs.c
@@ -86,10 +86,17 @@ int main(int argc, char **argv)
        while ((c = getopt(argc, argv, "s:p:m:owlrn")) != -1) {
                switch (c) {
                case 's':
-                       if (sscanf(optarg, "%zu", &size) != 1) {
-                               perror("Invalid -s.");
+                       char *end = NULL;
+                       unsigned long tmp = strtoul(optarg, &end, 10);
+                       if (errno || end == optarg || *end != '\0') {
+                               perror("Invalid -s size");
                                exit_usage();
                        }
+                       if (tmp == 0) {
+                               perror("size not found");
+                               exit_usage();
+                       }
+                       size = (size_t)tmp;
                        break;
                case 'p':


[1] https://lore.kernel.org/linux-kselftest/20251220111645.2246009-3-liwang@redhat.com/T/#m5a5765349dedfda1ed66c54b8cab7af184bff371



--
Regards,
Li Wang



^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 2/3] selftests/mm/charge_reserved_hugetlb: drop mount size for hugetlbfs
  2025-12-22 10:01   ` David Hildenbrand (Red Hat)
@ 2025-12-22 19:08     ` Andrew Morton
  0 siblings, 0 replies; 21+ messages in thread
From: Andrew Morton @ 2025-12-22 19:08 UTC (permalink / raw)
  To: David Hildenbrand (Red Hat)
  Cc: Li Wang, linux-kselftest, linux-kernel, linux-mm, Mark Brown,
	Shuah Khan, Waiman Long

On Mon, 22 Dec 2025 11:01:17 +0100 "David Hildenbrand (Red Hat)" <david@kernel.org> wrote:

> > Fixes: 29750f71a9 ("hugetlb_cgroup: add hugetlb_cgroup reservation tests")
> 
> Likely Andrew should add a CC of stable
> 

yep, thanks.


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 1/3] selftests/mm/write_to_hugetlbfs: parse -s as size_t
  2025-12-22 10:56         ` Li Wang
@ 2025-12-23  2:05           ` Andrew Morton
  2025-12-23  2:41             ` Li Wang
  0 siblings, 1 reply; 21+ messages in thread
From: Andrew Morton @ 2025-12-23  2:05 UTC (permalink / raw)
  To: Li Wang
  Cc: David Laight, linux-kselftest, linux-kernel, linux-mm,
	David Hildenbrand, Mark Brown, Shuah Khan, Waiman Long

On Mon, 22 Dec 2025 18:56:49 +0800 Li Wang <liwang@redhat.com> wrote:

> > In reality scanf() is 'not the function you are looking for'.
> >
> > IIRC the 'SUS' (used to) say that this was absolutely fine for command
> > line parsing for 'standard utilities'.
> >
> > It is best to use strtoul() and check the 'end' character is '\0'.
> 
> Hmm, that sounds like we need to go back to the patch V1 [1] method.
> But I am not sure, @Andrew Morton, do you think so?
> 
> --- a/tools/testing/selftests/mm/write_to_hugetlbfs.c
> +++ b/tools/testing/selftests/mm/write_to_hugetlbfs.c
> @@ -86,10 +86,17 @@ int main(int argc, char **argv)
>         while ((c = getopt(argc, argv, "s:p:m:owlrn")) != -1) {
>                 switch (c) {
>                 case 's':
> -                       if (sscanf(optarg, "%zu", &size) != 1) {
> -                               perror("Invalid -s.");
> +                       char *end = NULL;
> +                       unsigned long tmp = strtoul(optarg, &end, 10);
> +                       if (errno || end == optarg || *end != '\0') {
> +                               perror("Invalid -s size");
>                                 exit_usage();
>                         }
> +                       if (tmp == 0) {
> +                               perror("size not found");
> +                               exit_usage();
> +                       }
> +                       size = (size_t)tmp;
>                         break;
>                 case 'p':

Geeze guys, it's just a selftest.

hp2:/usr/src/linux-6.19-rc1> grep -r scanf tools/testing/selftests | wc -l 
177

if your command line breaks the selftest, fix your command line?


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 1/3] selftests/mm/write_to_hugetlbfs: parse -s as size_t
  2025-12-23  2:05           ` Andrew Morton
@ 2025-12-23  2:41             ` Li Wang
  2025-12-23  8:40               ` David Laight
  0 siblings, 1 reply; 21+ messages in thread
From: Li Wang @ 2025-12-23  2:41 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Laight, linux-kselftest, linux-kernel, linux-mm,
	David Hildenbrand, Mark Brown, Shuah Khan, Waiman Long

Andrew Morton <akpm@linux-foundation.org> wrote:

> > > It is best to use strtoul() and check the 'end' character is '\0'.
> >
> > Hmm, that sounds like we need to go back to the patch V1 [1] method.
> > But I am not sure, @Andrew Morton, do you think so?
> >
> > --- a/tools/testing/selftests/mm/write_to_hugetlbfs.c
> > +++ b/tools/testing/selftests/mm/write_to_hugetlbfs.c
> > @@ -86,10 +86,17 @@ int main(int argc, char **argv)
> >         while ((c = getopt(argc, argv, "s:p:m:owlrn")) != -1) {
> >                 switch (c) {
> >                 case 's':
> > -                       if (sscanf(optarg, "%zu", &size) != 1) {
> > -                               perror("Invalid -s.");
> > +                       char *end = NULL;
> > +                       unsigned long tmp = strtoul(optarg, &end, 10);
> > +                       if (errno || end == optarg || *end != '\0') {
> > +                               perror("Invalid -s size");
> >                                 exit_usage();
> >                         }
> > +                       if (tmp == 0) {
> > +                               perror("size not found");
> > +                               exit_usage();
> > +                       }
> > +                       size = (size_t)tmp;
> >                         break;
> >                 case 'p':
>
> Geeze guys, it's just a selftest.
>
> hp2:/usr/src/linux-6.19-rc1> grep -r scanf tools/testing/selftests | wc -l
> 177
>
> if your command line breaks the selftest, fix your command line?

Yes, I am ok with sscanf() :-).

In fact, write_to_hugetlbfs currently only accepts arguments from
charge_reserved_hugetlb.sh, and the values passed via '-s' are not
very diverse.

--
Regards,
Li Wang



^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 1/3] selftests/mm/write_to_hugetlbfs: parse -s as size_t
  2025-12-23  2:41             ` Li Wang
@ 2025-12-23  8:40               ` David Laight
  2025-12-23  9:29                 ` Li Wang
  0 siblings, 1 reply; 21+ messages in thread
From: David Laight @ 2025-12-23  8:40 UTC (permalink / raw)
  To: Li Wang
  Cc: Andrew Morton, linux-kselftest, linux-kernel, linux-mm,
	David Hildenbrand, Mark Brown, Shuah Khan, Waiman Long

On Tue, 23 Dec 2025 10:41:22 +0800
Li Wang <liwang@redhat.com> wrote:

> Andrew Morton <akpm@linux-foundation.org> wrote:
> 
> > > > It is best to use strtoul() and check the 'end' character is '\0'.  
> > >
> > > Hmm, that sounds like we need to go back to the patch V1 [1] method.
> > > But I am not sure, @Andrew Morton, do you think so?
> > >
> > > --- a/tools/testing/selftests/mm/write_to_hugetlbfs.c
> > > +++ b/tools/testing/selftests/mm/write_to_hugetlbfs.c
> > > @@ -86,10 +86,17 @@ int main(int argc, char **argv)
> > >         while ((c = getopt(argc, argv, "s:p:m:owlrn")) != -1) {
> > >                 switch (c) {
> > >                 case 's':
> > > -                       if (sscanf(optarg, "%zu", &size) != 1) {
> > > -                               perror("Invalid -s.");
> > > +                       char *end = NULL;

Initialiser not needed.

> > > +                       unsigned long tmp = strtoul(optarg, &end, 10);
> > > +                       if (errno || end == optarg || *end != '\0') {

I doubt that use of errno is correct.
Library functions that set errno on error don't set it to zero.
The only test needed there is *end != '\0'.
(end == optarg will be picked up by size == 0 later - if that is actually
needed to stop things breaking.)

> > > +                               perror("Invalid -s size");
> > >                                 exit_usage();
> > >                         }
> > > +                       if (tmp == 0) {

No point checking for zero before assigning the 'unsigned long' to 'size_t'.
So the result of strtoul() can just be assigned to 'size'.
(Ignoring the fact that size_t will be unsigned long.)

> > > +                               perror("size not found");
> > > +                               exit_usage();
> > > +                       }
> > > +                       size = (size_t)tmp;
> > >                         break;
> > >                 case 'p':  
> >
> > Geeze guys, it's just a selftest.
> >
> > hp2:/usr/src/linux-6.19-rc1> grep -r scanf tools/testing/selftests | wc -l
> > 177
> >
> > if your command line breaks the selftest, fix your command line?  
> 
> Yes, I am ok with sscanf() :-).

What was wrong with atoi()?
Or, at most, strtoul() with a check that *end == 0.

	David

> 
> In fact, write_to_hugetlbfs currently only accepts arguments from
> charge_reserved_hugetlb.sh, and the way the '-s' is used is not
> very diverse.
> 
> --
> Regards,
> Li Wang
> 




* Re: [PATCH v3 1/3] selftests/mm/write_to_hugetlbfs: parse -s as size_t
  2025-12-23  8:40               ` David Laight
@ 2025-12-23  9:29                 ` Li Wang
  2025-12-23 12:11                   ` David Laight
  0 siblings, 1 reply; 21+ messages in thread
From: Li Wang @ 2025-12-23  9:29 UTC (permalink / raw)
  To: David Laight
  Cc: Andrew Morton, linux-kselftest, linux-kernel, linux-mm,
	David Hildenbrand, Mark Brown, Shuah Khan, Waiman Long

David Laight <david.laight.linux@gmail.com> wrote:

> > > > --- a/tools/testing/selftests/mm/write_to_hugetlbfs.c
> > > > +++ b/tools/testing/selftests/mm/write_to_hugetlbfs.c
> > > > @@ -86,10 +86,17 @@ int main(int argc, char **argv)
> > > >         while ((c = getopt(argc, argv, "s:p:m:owlrn")) != -1) {
> > > >                 switch (c) {
> > > >                 case 's':
> > > > -                       if (sscanf(optarg, "%zu", &size) != 1) {
> > > > -                               perror("Invalid -s.");
> > > > +                       char *end = NULL;
>
> Initialiser not needed.
>
> > > > +                       unsigned long tmp = strtoul(optarg, &end, 10);
> > > > +                       if (errno || end == optarg || *end != '\0') {
>
> I doubt that use of errno is correct.
> Library functions that set errno on error don't set it to zero.
> The only test needed there is *end != '\0'.
> (end == optarg will be picked up by size == 0 later - if that is actually
> needed to stop things breaking.)

Good point!

>
> > > > +                               perror("Invalid -s size");
> > > >                                 exit_usage();
> > > >                         }
> > > > +                       if (tmp == 0) {
>
> No point checking for zero before assigning the 'unsigned long' to 'size_t'.
> So the result of strtoul() can just be assigned to 'size'.
> (Ignoring the fact that size_t will be unsigned long.)
>
> > > > +                               perror("size not found");
> > > > +                               exit_usage();
> > > > +                       }
> > > > +                       size = (size_t)tmp;
> > > >                         break;
> > > >                 case 'p':
> > >
> > > Geeze guys, it's just a selftest.
> > >
> > > hp2:/usr/src/linux-6.19-rc1> grep -r scanf tools/testing/selftests | wc -l
> > > 177
> > >
> > > if your command line breaks the selftest, fix your command line?
> >
> > Yes, I am ok with sscanf() :-).
>
> What was wrong with atoi() ?

As the patch summary describes, write_to_hugetlbfs previously parsed -s
via atoi() into an int, which can overflow and print negative sizes.
This problem was found on our kernel-64k platform:
 #./charge_reserved_hugetlb.sh -cgroup-v2
 # -----------------------------------------
 ...
 # nr hugepages = 10
 # writing cgroup limit: 5368709120
 # writing reseravation limit: 5368709120
 ...
 # Writing to this path: /mnt/huge/test
 # Writing this size: -1610612736        <--------

> Or, at most, strtoul() with a check that *end == 0.

True, this would work as well, but as Andrew pointed out, it is a tiny
test issue and it is already resolved by sscanf(), so let's just go with that.
(He has added the patchset to the mm-new branch and queued it for linux-next.)

If the issue remains controversial, you could send a separate patch later.
Anyway, thank you for the review comments.

-- 
Regards,
Li Wang




* Re: [PATCH v3 1/3] selftests/mm/write_to_hugetlbfs: parse -s as size_t
  2025-12-23  9:29                 ` Li Wang
@ 2025-12-23 12:11                   ` David Laight
  0 siblings, 0 replies; 21+ messages in thread
From: David Laight @ 2025-12-23 12:11 UTC (permalink / raw)
  To: Li Wang
  Cc: Andrew Morton, linux-kselftest, linux-kernel, linux-mm,
	David Hildenbrand, Mark Brown, Shuah Khan, Waiman Long

On Tue, 23 Dec 2025 17:29:38 +0800
Li Wang <liwang@redhat.com> wrote:

> David Laight <david.laight.linux@gmail.com> wrote:
> 

> > What was wrong with atoi() ?  
> 
> As the patch summary describes, write_to_hugetlbfs previously parsed -s
> via atoi() into an int, which can overflow and print negative sizes.
> This problem was found on our kernel-64k platform:
> 
>  #./charge_reserved_hugetlb.sh -cgroup-v2
>  # -----------------------------------------
>  ...
>  # nr hugepages = 10
>  # writing cgroup limit: 5368709120
>  # writing reseravation limit: 5368709120
>  ...
>  # Writing to this path: /mnt/huge/test
>  # Writing this size: -1610612736        <--------

So the problem was that atoi() doesn't let you specify valid values
over 2GB.
That isn't how I read the patch summary.
It read as though you were worried about detecting invalid input.

	David



end of thread, other threads:[~2025-12-23 12:11 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-12-21 12:26 [PATCH v3 0/3] selftests/mm: hugetlb cgroup charging: robustness fixes Li Wang
2025-12-21 12:26 ` [PATCH v3 1/3] selftests/mm/write_to_hugetlbfs: parse -s as size_t Li Wang
2025-12-21 20:23   ` Waiman Long
2025-12-21 22:10   ` David Laight
2025-12-22  1:45     ` Li Wang
2025-12-22  9:48       ` David Laight
2025-12-22 10:56         ` Li Wang
2025-12-23  2:05           ` Andrew Morton
2025-12-23  2:41             ` Li Wang
2025-12-23  8:40               ` David Laight
2025-12-23  9:29                 ` Li Wang
2025-12-23 12:11                   ` David Laight
2025-12-21 12:26 ` [PATCH v3 2/3] selftests/mm/charge_reserved_hugetlb: drop mount size for hugetlbfs Li Wang
2025-12-21 20:24   ` Waiman Long
2025-12-22 10:01   ` David Hildenbrand (Red Hat)
2025-12-22 19:08     ` Andrew Morton
2025-12-21 12:26 ` [PATCH v3 3/3] selftests/mm/charge_reserved_hugetlb.sh: add waits with timeout helper Li Wang
2025-12-21 20:30   ` Waiman Long
2025-12-22  0:56     ` Li Wang
2025-12-22  3:54       ` Waiman Long
2025-12-22 10:06   ` David Hildenbrand (Red Hat)
