From: Kuniyuki Iwashima <kuniyu@google.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
"David S. Miller" <davem@davemloft.net>,
Vladimir Davydov <vdavydov.dev@gmail.com>,
Kuniyuki Iwashima <kuni1840@gmail.com>,
linux-mm@kvack.org, Neal Cardwell <ncardwell@google.com>
Subject: Re: [PATCH] memcg: Keep socket_pressure fresh on 32-bit kernel.
Date: Thu, 17 Jul 2025 12:37:00 -0700 [thread overview]
Message-ID: <CAAVpQUCG4TCHQtCYJiN9rUzA-S1PiY2iy5uHYeAKGzbB8mML9Q@mail.gmail.com> (raw)
In-Reply-To: <20250716180232.6bcebe13e9a0d86019e1eaa0@linux-foundation.org>
On Wed, Jul 16, 2025 at 6:02 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Wed, 16 Jul 2025 16:58:29 -0700 Kuniyuki Iwashima <kuniyu@google.com> wrote:
>
> > On Wed, Jul 16, 2025 at 4:52 PM Andrew Morton <akpm@linux-foundation.org> wrote:
> > >
> > > On Wed, 16 Jul 2025 16:13:54 -0700 Kuniyuki Iwashima <kuniyu@google.com> wrote:
> > >
> > > > > > Thus, we need to update socket_pressure to a recent timestamp
> > > > > > periodically on 32-bit kernel.
> > > > > >
> > > > > > Let's do that every 24 hours, with a variation of about 0 to 4 hours.
> > > > >
> > > > > Can't we simply convert ->socket_pressure to a 64-bit type?
> > > > > timespec64/time64_t/etc?
> > > >
> > > > I think it's doable with get_jiffies_64() & time_before64().
> > > >
> > > > My thought was that a delayed work would be better than adding
> > > > seqlock reads in the networking fast path.
> > >
> > > Is it on a very fast path? mem_cgroup_under_socket_pressure() itself
> > > doesn't look very fastpath-friendly and seqcounts are fast. Bearing in
> > > mind that this affects 32-bit machines only.
> > >
> > > If a get_jiffies_64() call is demonstrated to be a performance issue
> > > then perhaps there's something sneaky we can do along the lines of
> > > reading jiffies_64 directly then falling back to get_jiffies_64() in
> > > the rare something-went-wrong path. Haven't thought about it :)
> > >
> > > Dunno, the proposed patch just feels like overkill for a silly
> > > 32/64-bit issue?
> >
> > Fair enough, I'll make it u64 in v2.
> >
>
> Well, please do pay attention to any performance impact. Measurements
> with some simple microbenchmark if possible?
I don't have a real 32-bit machine, so this is a result on QEMU,
but with and without the u64 jiffies patch, the time spent in
mem_cgroup_under_socket_pressure() was 1~5us and I didn't
see any measurable delta.
no patch applied:

iperf3 273 [000] 137.296248: probe:mem_cgroup_under_socket_pressure: (c13660d0)
        c13660d1 mem_cgroup_under_socket_pressure+0x1 ([kernel.kallsyms])
iperf3 273 [000] 137.296249: probe:mem_cgroup_under_socket_pressure__return: (c13660d0 <- c1d8fd7f)
iperf3 273 [000] 137.296251: probe:mem_cgroup_under_socket_pressure: (c13660d0)
        c13660d1 mem_cgroup_under_socket_pressure+0x1 ([kernel.kallsyms])
iperf3 273 [000] 137.296253: probe:mem_cgroup_under_socket_pressure__return: (c13660d0 <- c1d8fd7f)

u64 jiffies patch applied:

iperf3 308 [001] 330.669370: probe:mem_cgroup_under_socket_pressure: (c12ddba0)
        c12ddba1 mem_cgroup_under_socket_pressure+0x1 ([kernel.kallsyms])
iperf3 308 [001] 330.669371: probe:mem_cgroup_under_socket_pressure__return: (c12ddba0 <- c1ce98bf)
iperf3 308 [001] 330.669382: probe:mem_cgroup_under_socket_pressure: (c12ddba0)
        c12ddba1 mem_cgroup_under_socket_pressure+0x1 ([kernel.kallsyms])
iperf3 308 [001] 330.669384: probe:mem_cgroup_under_socket_pressure__return: (c12ddba0 <- c1ce98bf)
So, the u64 approach is good enough :)
Thanks!
Thread overview: 9 messages
2025-07-16 4:29 Kuniyuki Iwashima
2025-07-16 19:59 ` Shakeel Butt
2025-07-16 21:16 ` Kuniyuki Iwashima
2025-07-16 22:43 ` Andrew Morton
2025-07-16 23:13 ` Kuniyuki Iwashima
2025-07-16 23:52 ` Andrew Morton
2025-07-16 23:58 ` Kuniyuki Iwashima
2025-07-17 1:02 ` Andrew Morton
2025-07-17 19:37 ` Kuniyuki Iwashima [this message]