linux-mm.kvack.org archive mirror
From: Sayali Patil <sayalip@linux.ibm.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Shuah Khan <shuah@kernel.org>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-kselftest@vger.kernel.org,
	Ritesh Harjani <ritesh.list@gmail.com>
Cc: David Hildenbrand <david@kernel.org>, Zi Yan <ziy@nvidia.com>,
	Michal Hocko <mhocko@kernel.org>,
	Oscar Salvador <osalvador@suse.de>,
	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
	Dev Jain <dev.jain@arm.com>,
	Liam.Howlett@oracle.com, linuxppc-dev@lists.ozlabs.org,
	Miaohe Lin <linmiaohe@huawei.com>,
	Venkat Rao Bagalkote <venkat88@linux.ibm.com>
Subject: Re: [PATCH v4 15/15] selftests/cgroup: extend test_hugetlb_memcg.c to support all huge page sizes
Date: Tue, 7 Apr 2026 17:37:42 +0530	[thread overview]
Message-ID: <53758376-14e9-4b52-89de-6d68e8c0c6f8@linux.ibm.com> (raw)
In-Reply-To: <fcc9eb99566af8f37e63924a77dc633c5f200417.1775466330.git.sayalip@linux.ibm.com>



On 06/04/26 14:49, Sayali Patil wrote:
> The hugetlb memcg selftest was previously skipped when the configured
> huge page size was not 2MB, preventing the test from running on systems
> using other default huge page sizes.
> 
> Detect the system's configured huge page size at runtime and use it for
> the allocation instead of assuming a fixed 2MB size. This allows the
> test to run on configurations using non-2MB huge pages and avoids
> unnecessary skips.
> 
> Fixes: c0dddb7aa5f8 ("selftests: add a selftest to verify hugetlb usage in memcg")
> Signed-off-by: Sayali Patil <sayalip@linux.ibm.com>
> ---
>   .../selftests/cgroup/test_hugetlb_memcg.c     | 90 ++++++++++++++-----
>   1 file changed, 68 insertions(+), 22 deletions(-)
> 
> diff --git a/tools/testing/selftests/cgroup/test_hugetlb_memcg.c b/tools/testing/selftests/cgroup/test_hugetlb_memcg.c
> index f451aa449be6..e6157a784138 100644
> --- a/tools/testing/selftests/cgroup/test_hugetlb_memcg.c
> +++ b/tools/testing/selftests/cgroup/test_hugetlb_memcg.c
> @@ -7,15 +7,21 @@
>   #include <stdlib.h>
>   #include <string.h>
>   #include <fcntl.h>
> +#include <stdint.h>
>   #include "kselftest.h"
>   #include "cgroup_util.h"
>   
>   #define ADDR ((void *)(0x0UL))
>   #define FLAGS (MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB)
> -/* mapping 8 MBs == 4 hugepages */
> -#define LENGTH (8UL*1024*1024)
>   #define PROTECTION (PROT_READ | PROT_WRITE)
>   
> +/*
> + * This value matches the kernel's MEMCG_CHARGE_BATCH definition:
> + * see include/linux/memcontrol.h. If the kernel value changes, this
> + * test constant must be updated accordingly to stay consistent.
> + */
> +#define MEMCG_CHARGE_BATCH 64U
> +
>   /* borrowed from mm/hmm-tests.c */
>   static long get_hugepage_size(void)
>   {
> @@ -84,11 +90,11 @@ static unsigned int check_first(char *addr)
>   	return *(unsigned int *)addr;
>   }
>   
> -static void write_data(char *addr)
> +static void write_data(char *addr, size_t length)
>   {
>   	unsigned long i;
>   
> -	for (i = 0; i < LENGTH; i++)
> +	for (i = 0; i < length; i++)
>   		*(addr + i) = (char)i;
>   }
>   
> @@ -96,26 +102,33 @@ static int hugetlb_test_program(const char *cgroup, void *arg)
>   {
>   	char *test_group = (char *)arg;
>   	void *addr;
> +	long hpage_size, batch_bytes;
>   	long old_current, expected_current, current;
>   	int ret = EXIT_FAILURE;
> +	size_t length;
> +	int pagesize, nr_pages;
> +
> +	pagesize = getpagesize();
> +	hpage_size = get_hugepage_size() * 1024;
> +	length = 4 * hpage_size;
> +	batch_bytes = MEMCG_CHARGE_BATCH * pagesize;
>   
>   	old_current = cg_read_long(test_group, "memory.current");
>   	set_nr_hugepages(20);
>   	current = cg_read_long(test_group, "memory.current");
> -	if (current - old_current >= MB(2)) {
> +	if (current - old_current >= hpage_size) {
>   		ksft_print_msg(
>   			"setting nr_hugepages should not increase hugepage usage.\n");
>   		ksft_print_msg("before: %ld, after: %ld\n", old_current, current);
>   		return EXIT_FAILURE;
>   	}
>   
> -	addr = mmap(ADDR, LENGTH, PROTECTION, FLAGS, 0, 0);
> -	if (addr == MAP_FAILED) {
> -		ksft_print_msg("fail to mmap.\n");
> -		return EXIT_FAILURE;
> -	}
> +	addr = mmap(ADDR, length, PROTECTION, FLAGS, 0, 0);
> +	if (addr == MAP_FAILED)
> +		ksft_exit_skip("mmap failed, not enough memory.\n");
> +
>   	current = cg_read_long(test_group, "memory.current");
> -	if (current - old_current >= MB(2)) {
> +	if (current - old_current >= hpage_size) {
>   		ksft_print_msg("mmap should not increase hugepage usage.\n");
>   		ksft_print_msg("before: %ld, after: %ld\n", old_current, current);
>   		goto out_failed_munmap;
> @@ -124,10 +137,34 @@ static int hugetlb_test_program(const char *cgroup, void *arg)
>   
>   	/* read the first page */
>   	check_first(addr);
> -	expected_current = old_current + MB(2);
> +	nr_pages = hpage_size / pagesize;
> +	expected_current = old_current + hpage_size;
>   	current = cg_read_long(test_group, "memory.current");
> -	if (!values_close(expected_current, current, 5)) {
> -		ksft_print_msg("memory usage should increase by around 2MB.\n");
> +	if (nr_pages < MEMCG_CHARGE_BATCH &&
> +	    (current == old_current ||
> +	    values_close(old_current + batch_bytes, current, 5))) {
> +		/*
> +		 * Memory cgroup charging uses per-CPU stocks and batched updates to
> +		 * the memcg usage counters. For hugetlb allocations, the number of
> +		 * pages that memcg charges is expressed in base pages (nr_pages),
> +		 * not in hugepage units. When the charge for an allocation is
> +		 * smaller than the internal batching threshold (nr_pages < MEMCG_CHARGE_BATCH),
> +		 * it may be fully satisfied from the CPU's local stock. In such
> +		 * cases memory.current does not necessarily increase.
> +		 *
> +		 * If the local stock is insufficient, it may be refilled in batches
> +		 * of MEMCG_CHARGE_BATCH base pages, causing memory.current to
> +		 * increase by more than the allocation size.
> +		 *
> +		 * Therefore, treat both a zero delta and a batched increase as
> +		 * valid behaviour here.
> +		 */
> +		if (current == old_current)
> +			ksft_print_msg("allocation consumed from local stock.\n");
> +		else
> +			ksft_print_msg("memcg charge batched via stock refill.\n");
> +	} else if (!values_close(expected_current, current, 5)) {
> +		ksft_print_msg("memory usage should increase by ~1 huge page.\n");
>   		ksft_print_msg(
>   			"expected memory: %ld, actual memory: %ld\n",
>   			expected_current, current);
> @@ -135,11 +172,11 @@ static int hugetlb_test_program(const char *cgroup, void *arg)
>   	}
>   
>   	/* write to the whole range */
> -	write_data(addr);
> +	write_data(addr, length);
>   	current = cg_read_long(test_group, "memory.current");
> -	expected_current = old_current + MB(8);
> +	expected_current = old_current + length;
>   	if (!values_close(expected_current, current, 5)) {
> -		ksft_print_msg("memory usage should increase by around 8MB.\n");
> +		ksft_print_msg("memory usage should increase by around 4 huge pages.\n");
>   		ksft_print_msg(
>   			"expected memory: %ld, actual memory: %ld\n",
>   			expected_current, current);
> @@ -147,7 +184,7 @@ static int hugetlb_test_program(const char *cgroup, void *arg)
>   	}
>   
>   	/* unmap the whole range */
> -	munmap(addr, LENGTH);
> +	munmap(addr, length);
>   	current = cg_read_long(test_group, "memory.current");
>   	expected_current = old_current;
>   	if (!values_close(expected_current, current, 5)) {
> @@ -162,14 +199,17 @@ static int hugetlb_test_program(const char *cgroup, void *arg)
>   	return ret;
>   
>   out_failed_munmap:
> -	munmap(addr, LENGTH);
> +	munmap(addr, length);
>   	return ret;
>   }
>   
>   static int test_hugetlb_memcg(char *root)
>   {
>   	int ret = KSFT_FAIL;
> +	int num_pages = 20;
> +	long hpage_size = get_hugepage_size();
>   	char *test_group;
> +	uint64_t limit;
>   
>   	test_group = cg_name(root, "hugetlb_memcg_test");
>   	if (!test_group || cg_create(test_group)) {
> @@ -177,7 +217,9 @@ static int test_hugetlb_memcg(char *root)
>   		goto out;
>   	}
>   
> -	if (cg_write(test_group, "memory.max", "100M")) {
> +	limit = (uint64_t)num_pages * hpage_size * 1024ULL;
> +
> +	if (cg_write_numeric(test_group, "memory.max", limit)) {
>   		ksft_print_msg("fail to set cgroup memory limit.\n");
>   		goto out;
>   	}
> @@ -200,6 +242,7 @@ int main(int argc, char **argv)
>   {
>   	char root[PATH_MAX];
>   	int ret = EXIT_SUCCESS, has_memory_hugetlb_acc;
> +	long val;
>   
>   	has_memory_hugetlb_acc = proc_mount_contains("memory_hugetlb_accounting");
>   	if (has_memory_hugetlb_acc < 0)
> @@ -208,12 +251,15 @@ int main(int argc, char **argv)
>   		ksft_exit_skip("memory hugetlb accounting is disabled\n");
>   
>   	/* Unit is kB! */
> -	if (get_hugepage_size() != 2048) {
> -		ksft_print_msg("test_hugetlb_memcg requires 2MB hugepages\n");
> +	val = get_hugepage_size();
> +	if (val < 0) {
> +		ksft_print_msg("Failed to read hugepage size\n");
>   		ksft_test_result_skip("test_hugetlb_memcg\n");
>   		return ret;
>   	}
>   
> +	ksft_print_msg("Hugepage size: %ld kB\n", val);
> +
>   	if (cg_find_unified_root(root, sizeof(root), NULL))
>   		ksft_exit_skip("cgroup v2 isn't mounted\n");
>   

I observed intermittent failures on some systems, particularly with a 
1GB huge page size, where memory.current does not reflect the expected 
usage.

I will drop this patch from the series and will take a look at it 
separately.

Thanks,
Sayali


Thread overview: 26+ messages
2026-04-06  9:19 [PATCH v4 00/15] selftests/mm: fix failures and robustness improvements Sayali Patil
2026-04-06  9:19 ` [PATCH v4 01/15] selftests/mm: restore default nr_hugepages value via EXIT trap in charge_reserved_hugetlb.sh Sayali Patil
2026-04-07 14:38   ` Zi Yan
2026-04-06  9:19 ` [PATCH v4 02/15] selftests/mm: fix hugetlb pathname construction " Sayali Patil
2026-04-06  9:19 ` [PATCH v4 03/15] selftests/mm: restore default nr_hugepages value via EXIT trap in hugetlb_reparenting_test.sh Sayali Patil
2026-04-07 14:39   ` Zi Yan
2026-04-06  9:19 ` [PATCH v4 04/15] selftests/mm: fix hugetlb pathname construction " Sayali Patil
2026-04-06  9:19 ` [PATCH v4 05/15] selftests/mm: fix cgroup task placement and drop memory.current checks " Sayali Patil
2026-04-06  9:19 ` [PATCH v4 06/15] selftests/mm: size tmpfs according to PMD page size in split_huge_page_test Sayali Patil
2026-04-06  9:19 ` [PATCH v4 07/15] selftests/mm: free dynamically allocated PMD-sized buffers " Sayali Patil
2026-04-07 14:44   ` Zi Yan
2026-04-06  9:19 ` [PATCH v4 08/15] selftest/mm: align memory size to huge page size in hugepage-mremap test Sayali Patil
2026-04-07 14:57   ` Zi Yan
2026-04-08 14:30     ` Sayali Patil
2026-04-06  9:19 ` [PATCH v4 09/15] selftest/mm: register existing mapping with userfaultfd in hugepage-mremap Sayali Patil
2026-04-06  9:19 ` [PATCH v4 10/15] selftests/mm: ensure destination is hugetlb-backed " Sayali Patil
2026-04-06  9:19 ` [PATCH v4 11/15] selftests/mm: skip uffd-wp-mremap if UFFD write-protect is unsupported Sayali Patil
2026-04-06  9:19 ` [PATCH v4 12/15] selftests/mm: skip uffd-stress test when nr_pages_per_cpu is zero Sayali Patil
2026-04-06  9:19 ` [PATCH v4 13/15] selftests/mm: move hwpoison setup into run_test() and silence modprobe output for memory-failure category Sayali Patil
2026-04-07 14:51   ` Zi Yan
2026-04-08  8:36     ` Sayali Patil
2026-04-08 15:40       ` Zi Yan
2026-04-06  9:19 ` [PATCH v4 14/15] selftests/mm: clarify alternate unmapping in compaction_test Sayali Patil
2026-04-06  9:19 ` [PATCH v4 15/15] selftests/cgroup: extend test_hugetlb_memcg.c to support all huge page sizes Sayali Patil
2026-04-07 12:07   ` Sayali Patil [this message]
2026-04-07 13:08 ` [PATCH v4 00/15] selftests/mm: fix failures and robustness improvements Venkat
