* [PATCH v3] selftests: cgroup: make test_memcg_sock robust against delayed sock stats
@ 2025-11-20 6:04 Guopeng Zhang
2025-11-20 15:35 ` Michal Koutný
0 siblings, 1 reply; 5+ messages in thread
From: Guopeng Zhang @ 2025-11-20 6:04 UTC (permalink / raw)
To: tj, hannes, mhocko, roman.gushchin, shakeel.butt, mkoutny, muchun.song
Cc: lance.yang, shuah, linux-mm, linux-kselftest, linux-kernel,
Guopeng Zhang
test_memcg_sock() currently requires that memory.stat's "sock " counter
is exactly zero immediately after the TCP server exits. On a busy system
this assumption is too strict:

- Socket memory may be freed with a small delay (e.g. RCU callbacks).
- memcg statistics are updated asynchronously via the rstat flushing
  worker, so the "sock " value in memory.stat can stay non-zero for a
  short period of time even after all socket memory has been uncharged.

As a result, test_memcg_sock() can intermittently fail even though socket
memory accounting is working correctly.

Make the test more robust by polling memory.stat for the "sock "
counter and allowing it some time to drop to zero instead of checking
it only once. The timeout is set to 3 seconds to cover the periodic
rstat flush interval (FLUSH_TIME = 2*HZ by default) plus some
scheduling slack. If the counter does not become zero within the
timeout, the test still fails as before.

On my test system, running test_memcontrol 50 times produced:

- Before this patch: 6/50 runs passed.
- After this patch: 50/50 runs passed.

Suggested-by: Lance Yang <lance.yang@linux.dev>
Reviewed-by: Lance Yang <lance.yang@linux.dev>
Signed-off-by: Guopeng Zhang <zhangguopeng@kylinos.cn>
---
v3:
- Move MEMCG_SOCKSTAT_WAIT_* defines after the #include block as
  suggested.

v2:
- Mention the periodic rstat flush interval (FLUSH_TIME = 2*HZ) in
  the comment and clarify the rationale for the 3s timeout.
- Replace the hard-coded retry count and wait interval with macros
  to avoid magic numbers and make the 3s timeout calculation explicit.
---
.../selftests/cgroup/test_memcontrol.c | 30 ++++++++++++++++++-
1 file changed, 29 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
index 4e1647568c5b..8ff7286fc80b 100644
--- a/tools/testing/selftests/cgroup/test_memcontrol.c
+++ b/tools/testing/selftests/cgroup/test_memcontrol.c
@@ -21,6 +21,9 @@
#include "kselftest.h"
#include "cgroup_util.h"
+#define MEMCG_SOCKSTAT_WAIT_RETRIES 30 /* 3s total */
+#define MEMCG_SOCKSTAT_WAIT_INTERVAL_US (100 * 1000) /* 100 ms */
+
static bool has_localevents;
static bool has_recursiveprot;
@@ -1384,6 +1387,8 @@ static int test_memcg_sock(const char *root)
int bind_retries = 5, ret = KSFT_FAIL, pid, err;
unsigned short port;
char *memcg;
+ long sock_post = -1;
+ int i;
memcg = cg_name(root, "memcg_test");
if (!memcg)
@@ -1432,7 +1437,30 @@ static int test_memcg_sock(const char *root)
if (cg_read_long(memcg, "memory.current") < 0)
goto cleanup;
- if (cg_read_key_long(memcg, "memory.stat", "sock "))
+ /*
+ * memory.stat is updated asynchronously via the memcg rstat
+ * flushing worker, which runs periodically (every 2 seconds,
+ * see FLUSH_TIME). On a busy system, the "sock " counter may
+ * stay non-zero for a short period of time after the TCP
+ * connection is closed and all socket memory has been
+ * uncharged.
+ *
+ * Poll memory.stat for up to 3 seconds (~FLUSH_TIME plus some
+ * scheduling slack) and require that the "sock " counter
+ * eventually drops to zero.
+ */
+ for (i = 0; i < MEMCG_SOCKSTAT_WAIT_RETRIES; i++) {
+ sock_post = cg_read_key_long(memcg, "memory.stat", "sock ");
+ if (sock_post < 0)
+ goto cleanup;
+
+ if (!sock_post)
+ break;
+
+ usleep(MEMCG_SOCKSTAT_WAIT_INTERVAL_US);
+ }
+
+ if (sock_post)
goto cleanup;
ret = KSFT_PASS;
--
2.25.1
* Re: [PATCH v3] selftests: cgroup: make test_memcg_sock robust against delayed sock stats
2025-11-20 6:04 [PATCH v3] selftests: cgroup: make test_memcg_sock robust against delayed sock stats Guopeng Zhang
@ 2025-11-20 15:35 ` Michal Koutný
2025-11-21 7:47 ` Guopeng Zhang
0 siblings, 1 reply; 5+ messages in thread
From: Michal Koutný @ 2025-11-20 15:35 UTC (permalink / raw)
To: Guopeng Zhang
Cc: tj, hannes, mhocko, roman.gushchin, shakeel.butt, muchun.song,
lance.yang, shuah, linux-mm, linux-kselftest, linux-kernel

Hello Guopeng.

+Cc Leon Huang Fu <leon.huangfu@shopee.com>

On Thu, Nov 20, 2025 at 02:04:06PM +0800, Guopeng Zhang <zhangguopeng@kylinos.cn> wrote:
> test_memcg_sock() currently requires that memory.stat's "sock " counter
> is exactly zero immediately after the TCP server exits. On a busy system
> this assumption is too strict:
>
> - Socket memory may be freed with a small delay (e.g. RCU callbacks).

(FTR, I remember there is `echo 1 > /sys/module/rcutree/parameters/do_rcu_barrier`,
however, I'm not sure it always works as expected (a reader may actually
wait for a multi-stage RCU pipeline), so a plain timeout is more reliable.)

> - memcg statistics are updated asynchronously via the rstat flushing
>   worker, so the "sock " value in memory.stat can stay non-zero for a
>   short period of time even after all socket memory has been uncharged.
>
> As a result, test_memcg_sock() can intermittently fail even though socket
> memory accounting is working correctly.
>
> Make the test more robust by polling memory.stat for the "sock "
> counter and allowing it some time to drop to zero instead of checking
> it only once.

I like the approach of adaptive waiting to settle in such tests.

> The timeout is set to 3 seconds to cover the periodic rstat flush
> interval (FLUSH_TIME = 2*HZ by default) plus some scheduling slack. If
> the counter does not become zero within the timeout, the test still
> fails as before.
>
> On my test system, running test_memcontrol 50 times produced:
>
> - Before this patch: 6/50 runs passed.
> - After this patch: 50/50 runs passed.

BTW have you looked into the number of retries until success?
Was it in accordance with the flushing interval?

[...]

> +	for (i = 0; i < MEMCG_SOCKSTAT_WAIT_RETRIES; i++) {
> +		sock_post = cg_read_key_long(memcg, "memory.stat", "sock ");
> +		if (sock_post < 0)
> +			goto cleanup;
> +
> +		if (!sock_post)
> +			break;
> +
> +		usleep(MEMCG_SOCKSTAT_WAIT_INTERVAL_US);
> +	}

I think this may be useful also for other tests (at least other
memory.stat checks), so some encapsulated implementation like a macro
with parameters

  cg_read_assert_gt_with_retries(cg, file, field, exp, timeout, retries)

WDYT?

Michal
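
For illustration only (this sketch is not from the thread or from any posted
version of the patch): one way to encapsulate the retry pattern along the
lines suggested above, written as a function rather than a macro for
readability. The name cg_wait_key_long() and its parameters are made up here;
it assumes only cg_read_key_long() from cgroup_util.h and usleep(), both of
which test_memcontrol.c already uses.

	#include <unistd.h>

	#include "cgroup_util.h"

	/*
	 * Illustrative sketch: re-read @key from @cg's @file until it equals
	 * @expected, retrying up to @retries times with @interval_us
	 * microseconds between reads. Returns 0 once the expected value is
	 * seen, -1 on a read error or when the retries are exhausted.
	 */
	static int cg_wait_key_long(const char *cg, const char *file,
				    const char *key, long expected,
				    unsigned int interval_us, int retries)
	{
		int i;

		for (i = 0; i < retries; i++) {
			long val = cg_read_key_long(cg, file, key);

			if (val < 0)
				return -1;
			if (val == expected)
				return 0;

			usleep(interval_us);
		}

		return -1;
	}

With a helper of this shape, the polling loop in the patch would collapse to a
single call such as cg_wait_key_long(memcg, "memory.stat", "sock ", 0,
MEMCG_SOCKSTAT_WAIT_INTERVAL_US, MEMCG_SOCKSTAT_WAIT_RETRIES).
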
* Re: [PATCH v3] selftests: cgroup: make test_memcg_sock robust against delayed sock stats
2025-11-20 15:35 ` Michal Koutný
@ 2025-11-21 7:47 ` Guopeng Zhang
2025-12-03 11:59 ` Guopeng Zhang
0 siblings, 1 reply; 5+ messages in thread
From: Guopeng Zhang @ 2025-11-21 7:47 UTC (permalink / raw)
To: Michal Koutný
Cc: tj, hannes, mhocko, roman.gushchin, shakeel.butt, muchun.song,
lance.yang, shuah, linux-mm, linux-kselftest, linux-kernel

On 11/20/25 23:35, Michal Koutný wrote:
> Hello Guopeng.
>
> +Cc Leon Huang Fu <leon.huangfu@shopee.com>
>
> On Thu, Nov 20, 2025 at 02:04:06PM +0800, Guopeng Zhang <zhangguopeng@kylinos.cn> wrote:
>> test_memcg_sock() currently requires that memory.stat's "sock " counter
>> is exactly zero immediately after the TCP server exits. On a busy system
>> this assumption is too strict:
>>
>> - Socket memory may be freed with a small delay (e.g. RCU callbacks).
>
> (FTR, I remember there is `echo 1 > /sys/module/rcutree/parameters/do_rcu_barrier`,
> however, I'm not sure it always works as expected (a reader may actually
> wait for a multi-stage RCU pipeline), so a plain timeout is more reliable.)
>

Hi Michal,

Thank you for the suggestion.

I tested using `echo 1 > /sys/module/rcutree/parameters/do_rcu_barrier`, but
unfortunately the effect was not very good on my setup. As you mentioned, a
reader may actually wait for the multi-stage RCU pipeline, so a plain timeout
seems more reliable here.

[...]

>> On my test system, running test_memcontrol 50 times produced:
>>
>> - Before this patch: 6/50 runs passed.
>> - After this patch: 50/50 runs passed.
>
> BTW have you looked into the number of retries until success?
> Was it in accordance with the flushing interval?
>

Yes. From my observations, it usually succeeds after about 10–15 retries on
average (roughly 1–1.5 seconds), and occasionally it takes more than 20 retries
(>2 seconds). This looks broadly in line with the periodic rstat flushing
interval (~2 seconds) plus some scheduling slack.

[...]

> I think this may be useful also for other tests (at least other
> memory.stat checks), so some encapsulated implementation like a macro
> with parameters
>
>   cg_read_assert_gt_with_retries(cg, file, field, exp, timeout, retries)
>
> WDYT?
>
> Michal

That’s a great idea. I agree this pattern could be useful for other
`memory.stat` checks as well, and I will implement an encapsulated helper/macro
along those lines as per your suggestion.

Thanks,
Guopeng
* Re: [PATCH v3] selftests: cgroup: make test_memcg_sock robust against delayed sock stats
2025-11-21 7:47 ` Guopeng Zhang
@ 2025-12-03 11:59 ` Guopeng Zhang
2025-12-04 15:07 ` Michal Koutný
0 siblings, 1 reply; 5+ messages in thread
From: Guopeng Zhang @ 2025-12-03 11:59 UTC (permalink / raw)
To: Michal Koutný
Cc: tj, hannes, mhocko, roman.gushchin, shakeel.butt, muchun.song,
lance.yang, shuah, linux-mm, linux-kselftest, linux-kernel

On 11/21/25 15:47, Guopeng Zhang wrote:
> On 11/20/25 23:35, Michal Koutný wrote:
[...]
>> I think this may be useful also for other tests (at least other
>> memory.stat checks), so some encapsulated implementation like a macro
>> with parameters
>>
>>   cg_read_assert_gt_with_retries(cg, file, field, exp, timeout, retries)
>>
>> WDYT?
>>
>> Michal
>
> That’s a great idea. I agree this pattern could be useful for other
> `memory.stat` checks as well, and I will implement an encapsulated helper/macro
> along those lines as per your suggestion.
>
> Thanks,
> Guopeng

Hi Michal,

Thanks again for your earlier suggestion about encapsulating this pattern
into a helper/macro like:

  cg_read_assert_gt_with_retries(cg, file, field, exp, timeout, retries)

In v5 I tried to follow that direction by introducing a generic helper
function:

  cg_read_key_long_poll()

in cgroup_util.[ch]. This helper encapsulates the "poll with retries"
logic and returns the final value, while leaving the actual assertion
to the callers. Tests like test_memcg_sock() and test_kmem_dead_cgroups()
then decide what condition they want to check (e.g. == 0, > 0, etc.),
which seemed a bit more flexible and reusable for other cgroup stats.

Please let me know if you think this direction makes sense or if you have
any further suggestions.

Thanks again for the suggestion and for your feedback.

Best regards,
Guopeng
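
The v5 helper itself is not quoted in this thread, so the following is only a
rough sketch of the design described above (the real cg_read_key_long_poll()
in v5 may differ in signature and details): the helper polls until the key
reaches the expected value or the retries run out, then returns the last value
read so that each caller applies its own check.

	/*
	 * Sketch only (assumed details, not the actual v5 code): poll @key in
	 * @cg's @file, re-reading up to @retries times with @interval_us
	 * between reads, and return the last value seen.
	 */
	static long cg_read_key_long_poll(const char *cg, const char *file,
					  const char *key, long expected,
					  unsigned int interval_us, int retries)
	{
		long val = -1;
		int i;

		for (i = 0; i < retries; i++) {
			val = cg_read_key_long(cg, file, key);
			if (val < 0 || val == expected)
				break;

			usleep(interval_us);
		}

		return val;
	}

test_memcg_sock() would then keep only its final check, e.g.:

	sock_post = cg_read_key_long_poll(memcg, "memory.stat", "sock ", 0,
					  MEMCG_SOCKSTAT_WAIT_INTERVAL_US,
					  MEMCG_SOCKSTAT_WAIT_RETRIES);
	if (sock_post)
		goto cleanup;

The difference from the assert-style sketch shown earlier in the thread is
only where the comparison lives: here the helper stays policy-free and the
test decides what "settled" means.
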
* Re: [PATCH v3] selftests: cgroup: make test_memcg_sock robust against delayed sock stats
2025-12-03 11:59 ` Guopeng Zhang
@ 2025-12-04 15:07 ` Michal Koutný
0 siblings, 0 replies; 5+ messages in thread
From: Michal Koutný @ 2025-12-04 15:07 UTC (permalink / raw)
To: Guopeng Zhang
Cc: tj, hannes, mhocko, roman.gushchin, shakeel.butt, muchun.song,
lance.yang, shuah, linux-mm, linux-kselftest, linux-kernel

On Wed, Dec 03, 2025 at 07:59:46PM +0800, Guopeng Zhang <zhangguopeng@kylinos.cn> wrote:
> In v5 I tried to follow that direction by introducing a generic helper
> function:
>
>   cg_read_key_long_poll()
>
> in cgroup_util.[ch]. This helper encapsulates the "poll with retries"
> logic and returns the final value, while leaving the actual assertion
> to the callers. Tests like test_memcg_sock() and test_kmem_dead_cgroups()
> then decide what condition they want to check (e.g. == 0, > 0, etc.),
> which seemed a bit more flexible and reusable for other cgroup stats.
>
> Please let me know if you think this direction makes sense or if you have
> any further suggestions.

Thanks, I think this is ideal for the current tests.
(I skimmed through the v5 and I have no more remarks.)

Michal
end of thread, other threads:[~2025-12-04 15:07 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-11-20 6:04 [PATCH v3] selftests: cgroup: make test_memcg_sock robust against delayed sock stats Guopeng Zhang
2025-11-20 15:35 ` Michal Koutný
2025-11-21 7:47 ` Guopeng Zhang
2025-12-03 11:59 ` Guopeng Zhang
2025-12-04 15:07 ` Michal Koutný