From: Michal Hocko <mhocko@suse.com>
To: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Shakeel Butt <shakeel.butt@linux.dev>,
Vlastimil Babka <vbabka@suse.cz>,
Andrew Morton <akpm@linux-foundation.org>,
david@redhat.com, lorenzo.stoakes@oracle.com,
Liam.Howlett@oracle.com, rppt@kernel.org, surenb@google.com,
donettom@linux.ibm.com, aboorvad@linux.ibm.com, sj@kernel.org,
linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: fix the inaccurate memory statistics issue for users
Date: Thu, 5 Jun 2025 08:32:45 +0200 [thread overview]
Message-ID: <aEE6DW-Gjv95eBTj@tiehlicka> (raw)
In-Reply-To: <985a92d4-e0d4-4164-88eb-dc7931e2c40c@linux.alibaba.com>
On Thu 05-06-25 08:48:07, Baolin Wang wrote:
>
>
> On 2025/6/5 00:54, Shakeel Butt wrote:
> > On Wed, Jun 04, 2025 at 10:16:18PM +0800, Baolin Wang wrote:
> > >
> > >
> > > On 2025/6/4 21:46, Vlastimil Babka wrote:
> > > > On 6/4/25 14:46, Baolin Wang wrote:
> > > > > > Baolin, please run a stress-ng command that stresses minor anon page
> > > > > > faults in multiple threads, and then run multiple bash scripts which cat
> > > > > > /proc/pidof(stress-ng)/status. That should show how much the stress-ng
> > > > > > process is impacted by the parallel status readers versus without them.
> > > > >
> > > > > Sure. Thanks, Shakeel. I ran stress-ng with the 'stress-ng --fault 32
> > > > > --perf -t 1m' command, while simultaneously running the following
> > > > > script to read /proc/pidof(stress-ng)/status for each thread.
> > > >
> > > > How many of those scripts?
> > >
> > > One script, but it starts 32 readers, one for each stress-ng thread's
> > > status interface.
> > >
> > > > > From the following data, I did not observe any obvious impact of this
> > > > > patch on the stress-ng tests while repeatedly reading
> > > > > /proc/pidof(stress-ng)/status.
> > > > >
> > > > > w/o patch
> > > > > stress-ng: info: [6891] 3,993,235,331,584 CPU Cycles        59.767 B/sec
> > > > > stress-ng: info: [6891] 1,472,101,565,760 Instructions      22.033 B/sec (0.369 instr. per cycle)
> > > > > stress-ng: info: [6891] 36,287,456 Page Faults Total          0.543 M/sec
> > > > > stress-ng: info: [6891] 36,287,456 Page Faults Minor          0.543 M/sec
> > > > >
> > > > > w/ patch
> > > > > stress-ng: info: [6872] 4,018,592,975,968 CPU Cycles        60.177 B/sec
> > > > > stress-ng: info: [6872] 1,484,856,150,976 Instructions      22.235 B/sec (0.369 instr. per cycle)
> > > > > stress-ng: info: [6872] 36,547,456 Page Faults Total          0.547 M/sec
> > > > > stress-ng: info: [6872] 36,547,456 Page Faults Minor          0.547 M/sec
> > > > >
> > > > > =========================
> > > > > #!/bin/bash
> > > > >
> > > > > # Get the PIDs of stress-ng processes
> > > > > PIDS=$(pgrep stress-ng)
> > > > >
> > > > > # Loop through each PID and monitor /proc/[pid]/status
> > > > > for PID in $PIDS; do
> > > > >     while true; do
> > > > >         cat /proc/$PID/status
> > > > >         usleep 100000
> > > >
> > > > Hm but this limits the reading to 10 per second? If we want to simulate an
> > > > adversary process, it should be without the sleeps I think?
> > >
> > > OK. I dropped the usleep, and I still cannot see any obvious impact.
> > >
> > > w/o patch:
> > > stress-ng: info: [6848] 4,399,219,085,152 CPU Cycles        67.327 B/sec
> > > stress-ng: info: [6848] 1,616,524,844,832 Instructions      24.740 B/sec (0.367 instr. per cycle)
> > > stress-ng: info: [6848] 39,529,792 Page Faults Total          0.605 M/sec
> > > stress-ng: info: [6848] 39,529,792 Page Faults Minor          0.605 M/sec
> > >
> > > w/ patch:
> > > stress-ng: info: [2485] 4,462,440,381,856 CPU Cycles        68.382 B/sec
> > > stress-ng: info: [2485] 1,615,101,503,296 Instructions      24.750 B/sec (0.362 instr. per cycle)
> > > stress-ng: info: [2485] 39,439,232 Page Faults Total          0.604 M/sec
> > > stress-ng: info: [2485] 39,439,232 Page Faults Minor          0.604 M/sec
> >
> > Is the above with 32 non-sleeping parallel reader scripts?
>
> Yes.
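
For reference, the no-sleep reader described above presumably amounts to
something like the sketch below: one backgrounded loop per stress-ng PID,
with the usleep removed. Only the loop body appears in the quoted script;
the backgrounding with '&' and the final 'wait' are assumptions, not part
of the quoted snippet.

#!/bin/bash

# Start one non-sleeping status reader per stress-ng PID (assumed structure).
PIDS=$(pgrep stress-ng)

for PID in $PIDS; do
    while true; do
        cat /proc/$PID/status
    done &
done
wait
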
Thanks, this seems much more representative. Please update the changelog
with this. With that feel free to add
Acked-by: Michal Hocko <mhocko@suse.com>
Thanks!
--
Michal Hocko
SUSE Labs
Thread overview: 18+ messages
2025-05-24 1:59 Baolin Wang
2025-05-30 3:53 ` Andrew Morton
2025-05-30 13:39 ` Michal Hocko
2025-05-30 23:00 ` Andrew Morton
2025-06-03 8:08 ` Baolin Wang
2025-06-03 8:15 ` Michal Hocko
2025-06-03 8:32 ` Baolin Wang
2025-06-03 10:28 ` Michal Hocko
2025-06-03 14:22 ` Baolin Wang
2025-06-03 14:48 ` Michal Hocko
2025-06-03 17:29 ` Shakeel Butt
2025-06-04 12:46 ` Baolin Wang
2025-06-04 13:46 ` Vlastimil Babka
2025-06-04 14:16 ` Baolin Wang
2025-06-04 14:27 ` Vlastimil Babka
2025-06-04 16:54 ` Shakeel Butt
2025-06-05 0:48 ` Baolin Wang
2025-06-05 6:32 ` Michal Hocko [this message]