From: Yosry Ahmed <yosryahmed@google.com>
To: Waiman Long <longman@redhat.com>
Cc: Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
Jens Axboe <axboe@kernel.dk>, Zefan Li <lizefan.x@bytedance.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Michal Hocko <mhocko@kernel.org>,
Roman Gushchin <roman.gushchin@linux.dev>,
Shakeel Butt <shakeelb@google.com>,
Muchun Song <muchun.song@linux.dev>,
Andrew Morton <akpm@linux-foundation.org>,
Vasily Averin <vasily.averin@linux.dev>,
cgroups@vger.kernel.org, linux-block@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
bpf@vger.kernel.org
Subject: Re: [RFC PATCH 1/7] cgroup: rstat: only disable interrupts for the percpu lock
Date: Fri, 24 Mar 2023 15:50:10 -0700
Message-ID: <CAJD7tkZA-LxAVA5SWRzMeQ17T26qGBApPqErqT_SpCbrtCJQkA@mail.gmail.com>
In-Reply-To: <53582a07-81c6-35eb-10bf-7920b27483be@redhat.com>
On Fri, Mar 24, 2023 at 7:12 AM Waiman Long <longman@redhat.com> wrote:
>
> On 3/24/23 03:22, Yosry Ahmed wrote:
> > On Thu, Mar 23, 2023 at 6:39 PM Tejun Heo <tj@kernel.org> wrote:
> >> Hello,
> >>
> >> On Thu, Mar 23, 2023 at 04:00:31AM +0000, Yosry Ahmed wrote:
> >>> Currently, when sleeping is not allowed during rstat flushing, we hold
> >>> the global rstat lock with interrupts disabled throughout the entire
> >>> flush operation. Flushing is an O(# cgroups * # cpus) operation, and
> >>> having interrupts disabled throughout is dangerous.
> >>>
> >>> For some contexts, we may not want to sleep, but can be interrupted
> >>> (e.g. while holding a spinlock or RCU read lock). As such, do not
> >>> disable interrupts throughout rstat flushing, only when holding the
> >>> percpu lock. This breaks down the O(# cgroups * # cpus) duration with
> >>> interrupts disabled to a series of O(# cgroups) durations.
> >>>
> >>> Furthermore, if a cpu is spinning waiting for the global rstat lock, it
> >>> no longer needs to spin with interrupts disabled.
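
(To make the above concrete, the flushing loop after this patch looks
roughly like the sketch below -- heavily simplified, only the locking
calls matter here:)

static void cgroup_rstat_flush_locked(struct cgroup *cgrp)
{
        int cpu;

        /*
         * The caller now takes cgroup_rstat_lock with a plain spin_lock(),
         * so interrupts stay enabled while we walk the cpus.
         */
        lockdep_assert_held(&cgroup_rstat_lock);

        for_each_possible_cpu(cpu) {
                raw_spinlock_t *cpu_lock =
                        per_cpu_ptr(&cgroup_rstat_cpu_lock, cpu);
                unsigned long flags;

                /* irqs are off only for this O(# cgroups) section */
                raw_spin_lock_irqsave(cpu_lock, flags);
                /* ... pop the cpu's updated cgroups under @cgrp and flush ... */
                raw_spin_unlock_irqrestore(cpu_lock, flags);
        }
}
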
> >> I'm generally not a fan of big spin locks w/o irq protection. They too often
> >> become a source of unpredictable latency spikes. As you said, the global
> >> rstat lock can be held for quite a while. Removing _irq makes irq latency
> >> better on the CPU but on the other hand it makes it a lot more likely that the
> >> lock is gonna be held even longer, possibly significantly so depending on
> >> the configuration and workload which will in turn stall other CPUs waiting
> >> for the lock. Sure, irqs are being serviced quicker but if the cost is more
> >> and longer !irq context multi-cpu stalls, what's the point?
> >>
> >> I don't think there's anything which requires the global lock to be held
> >> throughout the entire flushing sequence and irq needs to be disabled when
> >> grabbing the percpu lock anyway, so why not just release the global lock on
> >> CPU boundaries instead? We don't really lose anything significant that way.
> >> The durations of irq disabled sections are still about the same as in the
> >> currently proposed solution at O(# cgroups), and we avoid the risk of the
> >> global lock being held unexpectedly long because its holder keeps getting
> >> hit by irqs.
> > Thanks for taking a look!
> >
> > I think a problem with this approach is that we risk having to contend
> > for the global lock at every CPU boundary in atomic contexts. Right
> Isn't it the plan to just do a trylock in atomic contexts so that it
> won't get stuck spinning for the lock for an indeterminate amount of time?
Not exactly. On the memory controller side, we currently only allow
one flusher at a time and force all flushers to flush the full
hierarchy, so that concurrent flushers can skip the flush entirely.
This is done for both atomic and non-atomic contexts.
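
For reference, the memcg scheme is roughly the following (simplified,
and the lock/variable names are approximate rather than exact):

static void __mem_cgroup_flush_stats(void)
{
        unsigned long flag;

        /* only one flusher at a time; whoever loses the race just skips */
        if (!spin_trylock_irqsave(&stats_flush_lock, flag))
                return;

        /* the winner flushes the full hierarchy on everyone's behalf */
        cgroup_rstat_flush_irqsafe(root_mem_cgroup->css.cgroup);
        atomic_set(&stats_flush_threshold, 0);

        spin_unlock_irqrestore(&stats_flush_lock, flag);
}
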
Flushers outside the memory controller can still contend for the lock
among themselves or with flushers in the memory controller. With the
suggested approach, instead of contending for the lock once, they would
contend for it at every CPU boundary.
> > now we contend for the global lock once, and once we have it we go
> > through all CPUs to flush, only having to contend with updates taking
> > the percpu locks at this point. If we unconditionally release &
> > reacquire the global lock at every CPU boundary then we may contend
> > for it much more frequently with concurrent flushers.
>
> Note that with the use of qspinlock in all the major arches, the impact
> of thundering herds of lockers is much less serious than before. There
> is certainly some overhead in doing multiple lock acquires and
> releases, but that shouldn't be too excessive.
I ran some tests to measure this. Since I am using a cgroup v1
hierarchy, I cannot reproduce contention between memory controller
flushers and non-memory controller flushers, so I removed the "one
memory flusher only" restriction to have concurrent memory flushers
compete for the global rstat lock to measure the impact:
Before (only one flusher allowed to compete for the global rstat lock):
---cgroup_rstat_flush
   |
   --1.27%--cgroup_rstat_flush_locked
             |
             --0.94%--mem_cgroup_css_rstat_flush
After (concurrent flushers allowed to compete for the global rstat lock):
---cgroup_rstat_flush
   |
   |--4.94%--_raw_spin_lock
   |          |
   |          --4.94%--queued_spin_lock_slowpath
   |
   --0.92%--cgroup_rstat_flush_locked
             |
             --0.56%--mem_cgroup_css_rstat_flush
This was run with 20 processes trying to flush concurrently, so it may
be excessive, but it seems like in this case lock contention makes a
significant difference.
Again, this is not a regression for non-atomic flushers, as they
already compete for the lock at every CPU boundary, but for atomic
flushers that never give up the lock today, starting to compete for it
at every CPU boundary would be a regression. This patch series aims to
minimize the number of atomic flushers (it brings them down to two, one
of which is uncommon), so this may be fine.
My main concern is that for some flushers that this series converts
from atomic to non-atomic (e.g. the refault path), we may notice a
regression later and have to revert the conversion, which is why I have
them in separate patches. If we regress the atomic flushing path
instead, restoring the performance of those paths would be a much
larger surgery -- which is why I would rather keep the atomic path free
of excessive lock contention.
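
To make the comparison concrete, my understanding of the suggested
alternative is roughly the loop below (sketch only, the function name
is made up):

static void cgroup_rstat_flush_at_cpu_boundaries(struct cgroup *cgrp)
{
        int cpu;

        for_each_possible_cpu(cpu) {
                raw_spinlock_t *cpu_lock =
                        per_cpu_ptr(&cgroup_rstat_cpu_lock, cpu);
                unsigned long flags;

                /* re-take the global lock for every cpu ... */
                spin_lock_irqsave(&cgroup_rstat_lock, flags);
                raw_spin_lock(cpu_lock);
                /* ... flush this cpu's updated cgroups under @cgrp ... */
                raw_spin_unlock(cpu_lock);
                /*
                 * ... and drop it at the boundary: the irqs-off window
                 * stays O(# cgroups), but every boundary is a fresh
                 * chance to contend with other flushers.
                 */
                spin_unlock_irqrestore(&cgroup_rstat_lock, flags);
        }
}
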
Thoughts?
>
> I am all in for reducing lock hold time as much as possible as it will
> improve the response time.
>
> Cheers,
> Longman
>