From: Leon Huang Fu <leon.huangfu@shopee.com>
To: Guopeng Zhang <zhangguopeng@kylinos.cn>
Cc: Lance Yang <ioworker0@gmail.com>,
hannes@cmpxchg.org, linux-kernel@vger.kernel.org,
linux-kselftest@vger.kernel.org, linux-mm@kvack.org,
mhocko@kernel.org, mkoutny@suse.com, muchun.song@linux.dev,
roman.gushchin@linux.dev, shakeel.butt@linux.dev,
shuah@kernel.org, tj@kernel.org,
Lance Yang <lance.yang@linux.dev>
Subject: Re: [PATCH] selftests: cgroup: make test_memcg_sock robust against delayed sock stats
Date: Thu, 20 Nov 2025 13:39:53 +0800
Message-ID: <CAPV86rqXrf027nLZocq2Acqf5T=YJY2Uj3MD1OrGG7DAUqkxzA@mail.gmail.com>
In-Reply-To: <2c276ed9-626f-4bae-9d42-727dd176ec74@kylinos.cn>
On Thu, Nov 20, 2025 at 10:12 AM Guopeng Zhang <zhangguopeng@kylinos.cn> wrote:
>
>
> On 11/19/25 20:27, Lance Yang wrote:
>> From: Lance Yang <lance.yang@linux.dev>
>>
>>
>> On Wed, 19 Nov 2025 18:52:16 +0800, Guopeng Zhang wrote:
>>> test_memcg_sock() currently requires that memory.stat's "sock " counter
>>> is exactly zero immediately after the TCP server exits. On a busy system
>>> this assumption is too strict:
>>>
>>> - Socket memory may be freed with a small delay (e.g. RCU callbacks).
>>> - memcg statistics are updated asynchronously via the rstat flushing
>>> worker, so the "sock " value in memory.stat can stay non-zero for a
>>> short period of time even after all socket memory has been uncharged.
>>>
>>> As a result, test_memcg_sock() can intermittently fail even though socket
>>> memory accounting is working correctly.
>>>
>>> Make the test more robust by polling memory.stat for the "sock " counter
>>> and allowing it some time to drop to zero instead of checking it only
>>> once. If the counter does not become zero within the timeout, the test
>>> still fails as before.
>>>
>>> On my test system, running test_memcontrol 50 times produced:
>>>
>>> - Before this patch: 6/50 runs passed.
>>> - After this patch: 50/50 runs passed.
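
(For reference, the polling described above could look roughly like the
sketch below. This is illustrative only, not the actual patch: the path
handling, key parsing and timeout are made up, and the real test
presumably builds on the existing cgroup selftest helpers rather than
open-coding the memory.stat parsing.)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Read the value of "key" (e.g. "sock ") from a memory.stat file.
 * Returns the value, or -1 if the key is missing or the file is unreadable.
 */
static long read_stat_key(const char *stat_path, const char *key)
{
	char line[256];
	long val = -1;
	FILE *f = fopen(stat_path, "r");

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, key, strlen(key))) {
			val = atol(line + strlen(key));
			break;
		}
	}
	fclose(f);
	return val;
}

/* Poll until the key drops to zero or roughly 5 seconds elapse. */
static int wait_for_zero(const char *stat_path, const char *key)
{
	int retries = 50;		/* 50 * 100ms, illustrative timeout */

	while (retries--) {
		if (read_stat_key(stat_path, key) == 0)
			return 0;
		usleep(100 * 1000);
	}
	return -1;			/* still non-zero: fail as before */
}
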
> Hi Lance,
>
> Thanks a lot for your review and helpful comments!
>>
>> Good catch! Thanks!
>>
>> With more CPU cores, updates may be distributed across cores, making it
>> slower to reach the per-CPU flush threshold, IIUC :)
>>
> Yes, that matches what I’ve seen as well; on larger systems it indeed
> takes longer for the stats to converge due to per-CPU distribution and
> the flush threshold.
Me too.
I previously proposed a potential solution to explicitly flush stats via
a new interface, "memory.stat_refresh" [1]. However, improving the
existing flush mechanism would likely be a better long-term direction.
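
Just to illustrate what such an interface would buy a test like this one,
assuming the proposed knob kept write-to-refresh semantics similar to
/proc/sys/vm/stat_refresh (purely hypothetical until something is merged;
the name and behavior below are taken from the proposal in [1]):

#include <stdio.h>

/* Force an rstat flush for a cgroup so that the next memory.stat read
 * reflects up-to-date counters, instead of polling and waiting.
 */
static int refresh_memcg_stats(const char *cgroup_dir)
{
	char path[512];
	FILE *f;

	snprintf(path, sizeof(path), "%s/memory.stat_refresh", cgroup_dir);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fputs("1\n", f);	/* any write would trigger the flush */
	fclose(f);
	return 0;
}
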
Links:
[1] https://lore.kernel.org/linux-mm/20251110101948.19277-1-leon.huangfu@shopee.com/
Thanks,
Leon
[...]