From: Michal Hocko <mhocko@suse.com>
To: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Christoph Lameter <cl@linux.com>,
Aaron Tomlin <atomlin@atomlin.com>,
Frederic Weisbecker <frederic@kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
Russell King <linux@armlinux.org.uk>,
Huacai Chen <chenhuacai@kernel.org>,
Heiko Carstens <hca@linux.ibm.com>,
x86@kernel.org, Vlastimil Babka <vbabka@suse.cz>
Subject: Re: [PATCH v8 00/13] fold per-CPU vmstats remotely
Date: Thu, 25 May 2023 08:47:57 +0200
Message-ID: <ZG8EncpwVMPkRJ7A@dhcp22.suse.cz>
In-Reply-To: <ZG4W01AcwhD5AiQU@tpad>
On Wed 24-05-23 10:53:23, Marcelo Tosatti wrote:
> On Wed, May 24, 2023 at 02:51:55PM +0200, Michal Hocko wrote:
> > [Sorry for the late response, but I was at conferences for the last two
> > weeks and am now catching up]
> >
> > On Mon 15-05-23 15:00:15, Marcelo Tosatti wrote:
> > [...]
> > > v8
> > > - Add summary of discussion on -v7 to cover letter
> >
> > Thanks, this is very useful! It helps to frame the further discussion.
> >
> > I believe the most important question to answer is in fact this:
> > > I think what needs to be done is to prevent new queue_work_on()
> > > users from being introduced into the tree (the number of
> > > existing ones is finite and can therefore be fixed).
> > >
> > > I agree with the criticism here; however, I can't see any options
> > > other than the following:
> > >
> > > 1) Given an activity consisting of a sequence of instructions
> > > to execute on a CPU, change the algorithm so that the code is
> > > executed remotely (thereby avoiding the interruption of that CPU),
> > > or avoid the interruption in some other way (which must be dealt
> > > with on a case-by-case basis).
> > >
> > > 2) Block that activity from happening in the first place,
> > > at the sites where it can be blocked (those that can return
> > > errors to userspace, for example).
> > >
> > > 3) Completely isolate the CPU from the kernel (off-line it).
> >
> > I agree that a reliable CPU isolation implementation needs to address
> > the queue_work_on() problem. And it has to do that _reliably_. That
> > cannot be achieved by an endless game of whack-a-mole, chasing each new
> > instance. There must be a more systematic approach. One way would be to
> > change the semantics of schedule_work_on() and fail the call for an
> > isolated CPU. The caller would then have a way to fall back and handle
> > the operation by other means. E.g. vmstat could simply skip folding pcp
> > data because the imprecision shouldn't really matter. Other callers
> > might choose to do the operation remotely. This is a lot of work, no
> > doubt about that, but it is a long-term maintainable solution that
> > doesn't give you new surprises with every newly released kernel. There
> > are likely other remote interfaces that would need to follow the same scheme.
> >
> > If CPU isolation is not considered worth that time investment, then I
> > do not think it is worth pessimizing the highly optimized vmstat code
> > either. These stats are updated from many hot paths and the per-CPU
> > implementation has been optimized for that case.
>
> It is exactly the same code, but now with a LOCK prefix on the CMPXCHG
> instruction, which should not cost much due to cache locking (these are
> per-CPU variables anyway).
Sorry, but "it is just a LOCK prefix" is not a serious argument for a hot path.
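
To make the concern concrete, here is a simplified sketch of the two
flavours being discussed. This is not the actual mm/vmstat.c code;
vm_stat_diff_demo and the demo_* helpers are made up for illustration.
With this series, this_cpu_cmpxchg() becomes the locked flavour while
this_cpu_cmpxchg_local() keeps the previous unlocked behaviour:

	/*
	 * Illustration only, not the real counter update path. The fast
	 * path is a per-CPU read-modify-write; the unlocked (local)
	 * cmpxchg is sufficient as long as only the owning CPU touches
	 * the counter, while folding the counter from another CPU
	 * requires the locked variant.
	 */
	static DEFINE_PER_CPU(int, vm_stat_diff_demo);	/* made-up counter */

	static void demo_mod_state_local(int delta)
	{
		int old, new;

		do {
			old = this_cpu_read(vm_stat_diff_demo);
			new = old + delta;
			/* no LOCK prefix on x86: only this CPU modifies the slot */
		} while (this_cpu_cmpxchg_local(vm_stat_diff_demo, old, new) != old);
	}

	static void demo_mod_state_remote_safe(int delta)
	{
		int old, new;

		do {
			old = this_cpu_read(vm_stat_diff_demo);
			new = old + delta;
			/* locked cmpxchg: also safe against a remote folder */
		} while (this_cpu_cmpxchg(vm_stat_diff_demo, old, new) != old);
	}
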
> > If your workload finds even that too disturbing, then there is already
> > the quiet_vmstat() precedent, so find a way to use it for your workload
> > instead.
> >
> > --
> > Michal Hocko
> > SUSE Labs
>
> OK, so an alternative solution is to completely disable vmstat updates
> for isolated CPUs. Are you OK with that?
Yes. The number of events should be reasonably small, and those places in
the kernel which really need a precise value have to do a per-CPU walk
anyway. IIRC /proc/vmstat et al. also accumulate the pcp state.
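
That per-CPU walk is roughly what zone_page_state_snapshot() does today
(sketch only; struct and field names vary across kernel versions, and
demo_state_snapshot() is not a real function):

	/*
	 * Sketch of the per-CPU walk: take the global counter and add
	 * every CPU's pending diff, so a delta that never gets folded on
	 * an isolated CPU still shows up in the precise readout.
	 */
	static unsigned long demo_state_snapshot(struct zone *zone,
						 enum zone_stat_item item)
	{
		long x = atomic_long_read(&zone->vm_stat[item]);
		int cpu;

		for_each_online_cpu(cpu)
			x += per_cpu_ptr(zone->per_cpu_zonestats, cpu)->vm_stat_diff[item];

		return x < 0 ? 0 : x;
	}
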
But let me reiterate: even with vmstat updates out of the game, you have
so many other sources of disruption that your isolated workload will
remain fragile until you actually deal with the problem on a more
fundamental level.
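
To be concrete about what "a more fundamental level" means to me, the
schedule_work_on() semantics discussed above would look something like
this (hypothetical sketch, not an existing interface change;
cpu_is_isolated() stands in for whatever isolation predicate ends up
being used, and demo_fold_one_cpu() is made up):

	/*
	 * Hypothetical fallback pattern: refuse to disturb an isolated
	 * CPU and let the caller decide what to do instead of queueing
	 * work there.
	 */
	static void demo_fold_one_cpu(int cpu, struct work_struct *work)
	{
		if (cpu_is_isolated(cpu)) {
			/*
			 * Caller-chosen fallback: skip the fold (the
			 * resulting imprecision is tolerable for vmstat)
			 * or do the work remotely from a housekeeping CPU.
			 */
			return;
		}
		schedule_work_on(cpu, work);
	}
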
--
Michal Hocko
SUSE Labs
Thread overview: 21+ messages
2023-05-15 18:00 Marcelo Tosatti
2023-05-15 18:00 ` [PATCH v8 01/13] vmstat: allow_direct_reclaim should use zone_page_state_snapshot Marcelo Tosatti
2023-05-24 12:34 ` Michal Hocko
2023-05-25 7:05 ` Aaron Tomlin
2023-05-15 18:00 ` [PATCH v8 02/13] this_cpu_cmpxchg: ARM64: switch this_cpu_cmpxchg to locked, add _local function Marcelo Tosatti
2023-05-15 18:00 ` [PATCH v8 03/13] this_cpu_cmpxchg: loongarch: " Marcelo Tosatti
2023-05-15 18:00 ` [PATCH v8 04/13] this_cpu_cmpxchg: S390: " Marcelo Tosatti
2023-05-15 18:00 ` [PATCH v8 05/13] this_cpu_cmpxchg: x86: " Marcelo Tosatti
2023-05-15 18:00 ` [PATCH v8 06/13] add this_cpu_cmpxchg_local and asm-generic definitions Marcelo Tosatti
2023-05-15 18:00 ` [PATCH v8 07/13] convert this_cpu_cmpxchg users to this_cpu_cmpxchg_local Marcelo Tosatti
2023-05-15 18:00 ` [PATCH v8 08/13] mm/vmstat: switch counter modification to cmpxchg Marcelo Tosatti
2023-05-15 18:00 ` [PATCH v8 09/13] vmstat: switch per-cpu vmstat counters to 32-bits Marcelo Tosatti
2023-05-15 18:00 ` [PATCH v8 10/13] mm/vmstat: use xchg in cpu_vm_stats_fold Marcelo Tosatti
2023-05-15 18:00 ` [PATCH v8 11/13] mm/vmstat: switch vmstat shepherd to flush per-CPU counters remotely Marcelo Tosatti
2023-05-15 18:00 ` [PATCH v8 12/13] mm/vmstat: refresh stats remotely instead of via work item Marcelo Tosatti
2023-05-15 18:00 ` [PATCH v8 13/13] vmstat: add pcp remote node draining via cpu_vm_stats_fold Marcelo Tosatti
2023-05-16 8:09 ` [PATCH v8 00/13] fold per-CPU vmstats remotely Christoph Lameter
2023-05-16 18:02 ` Marcelo Tosatti
2023-05-24 12:51 ` Michal Hocko
2023-05-24 13:53 ` Marcelo Tosatti
2023-05-25 6:47 ` Michal Hocko [this message]