linux-mm.kvack.org archive mirror
From: Yosry Ahmed <yosryahmed@google.com>
To: Michal Hocko <mhocko@suse.com>
Cc: "Tejun Heo" <tj@kernel.org>, "Wei Xu" <weixugc@google.com>,
	"Andrew Morton" <akpm@linux-foundation.org>,
	"Johannes Weiner" <hannes@cmpxchg.org>,
	"Roman Gushchin" <roman.gushchin@linux.dev>,
	"Shakeel Butt" <shakeelb@google.com>,
	"Muchun Song" <muchun.song@linux.dev>,
	"Ivan Babrou" <ivan@cloudflare.com>,
	"Michal Koutný" <mkoutny@suse.com>,
	"Waiman Long" <longman@redhat.com>,
	linux-mm@kvack.org, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org, "Greg Thelen" <gthelen@google.com>
Subject: Re: [PATCH v4 4/4] mm: memcg: use non-unified stats flushing for userspace reads
Date: Tue, 12 Sep 2023 04:09:28 -0700	[thread overview]
Message-ID: <CAJD7tka4zEcu-jMycMo0=xB7PP1j7P0gu_weGJSLQvbhYMzv9Q@mail.gmail.com>
In-Reply-To: <ZQBFZMRL8WmqRgrM@dhcp22.suse.cz>

On Tue, Sep 12, 2023 at 4:03 AM Michal Hocko <mhocko@suse.com> wrote:
>
> On Mon 11-09-23 10:21:24, Tejun Heo wrote:
> > Hello,
> >
> > On Mon, Sep 11, 2023 at 01:01:25PM -0700, Wei Xu wrote:
> > > Yes, it is the same test (10K contending readers). The kernel change
> > > is to remove stats_user_flush_mutex from mem_cgroup_user_flush_stats()
> > > so that the concurrent mem_cgroup_user_flush_stats() requests directly
> > > contend on cgroup_rstat_lock in cgroup_rstat_flush().
> >
> > I don't think it'd be a good idea to twist rstat and other kernel-internal
> > code to accommodate 10k parallel readers.
>
> I didn't mean to suggest optimizing for this specific scenario. I was
> mostly curious whether the pathological case of unbounded high latency due
> to lock dropping is easy to trigger with a huge number of readers. It seems
> it is not, and the mutex might not really be needed as a safeguard.
>
> > If we want to support that, let's
> > explicitly support that by implementing better batching in the read path.
>
> Well, we need to be able to handle those situations because stat files
> are generally readable and we do not want unrelated workloads to
> influence each other heavily through this path.

I am working on a complete rework of this series based on the feedback
I got from Wei and the discussions here. I think I have something
simpler and more generic that does not add to the number of flushing
variants we already have. I am running some tests right now and will
share it as soon as I can.

It should address the high-concurrency use case without adding a lot
of complexity. It basically involves a fast path where we flush only
the needed subtree if there is no contention, and a slow path where we
coalesce all flushing requests and everyone just waits for a single
flush to complete (without spinning or contending on any locks). I am
trying to use this generic mechanism for both userspace reads and
in-kernel flushers, and I am making sure the in-kernel flushers do not
regress.
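
To give a rough idea of the shape this takes, here is an untested
sketch. The names (memcg_flush_stats_coalesced, stats_flush_mutex,
stats_flush_done) are made up for illustration and are not from the
actual patches:

#include <linux/completion.h>
#include <linux/mutex.h>
#include <linux/cgroup.h>
#include <linux/memcontrol.h>

static DEFINE_MUTEX(stats_flush_mutex);
static DECLARE_COMPLETION(stats_flush_done);

static void memcg_flush_stats_coalesced(struct mem_cgroup *memcg)
{
	if (mutex_trylock(&stats_flush_mutex)) {
		/* Fast path: no contention, flush only the needed subtree. */
		reinit_completion(&stats_flush_done);
		cgroup_rstat_flush(memcg->css.cgroup);
		complete_all(&stats_flush_done);
		mutex_unlock(&stats_flush_mutex);
	} else {
		/*
		 * Slow path: a flush is already in flight. Do not spin or
		 * contend on cgroup_rstat_lock; just wait for that flush
		 * to finish and accept slightly stale stats. (A waiter
		 * racing with reinit_completion() may return early, which
		 * only means marginally staler numbers for that reader.)
		 */
		wait_for_completion(&stats_flush_done);
	}
}

The point being that with 10K readers, only one of them actually holds
cgroup_rstat_lock at a time and everyone else piggybacks on that flush,
in line with Tejun's observation below that most readers won't need to
flush at all.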

>
> [...]
>
> > When you have that many concurrent readers, most of them won't need to
> > actually flush.
>
> Agreed!
> --
> Michal Hocko
> SUSE Labs



Thread overview: 29+ messages
2023-08-31 16:56 [PATCH v4 0/4] memcg: non-unified flushing for userspace stats Yosry Ahmed
2023-08-31 16:56 ` [PATCH v4 1/4] mm: memcg: properly name and document unified stats flushing Yosry Ahmed
2023-09-04 14:44   ` Michal Hocko
2023-09-05 15:55     ` Yosry Ahmed
2023-08-31 16:56 ` [PATCH v4 2/4] mm: memcg: add a helper for non-unified " Yosry Ahmed
2023-09-04 14:45   ` Michal Hocko
2023-08-31 16:56 ` [PATCH v4 3/4] mm: memcg: let non-unified root stats flushes help unified flushes Yosry Ahmed
2023-09-04 14:50   ` Michal Hocko
2023-09-04 15:29     ` Michal Koutný
2023-09-04 15:41       ` Michal Hocko
2023-09-05 14:10         ` Michal Koutný
2023-09-05 15:54           ` Yosry Ahmed
2023-09-05 16:07             ` Michal Koutný
2023-09-12 11:03             ` Michal Hocko
2023-08-31 16:56 ` [PATCH v4 4/4] mm: memcg: use non-unified stats flushing for userspace reads Yosry Ahmed
2023-09-04 15:15   ` Michal Hocko
2023-09-05 15:57     ` Yosry Ahmed
2023-09-08  0:52     ` Wei Xu
2023-09-08  1:02       ` Ivan Babrou
2023-09-08  1:11         ` Yosry Ahmed
2023-09-11 13:11       ` Michal Hocko
2023-09-11 19:15         ` Wei Xu
2023-09-11 19:34           ` Michal Hocko
2023-09-11 20:01             ` Wei Xu
2023-09-11 20:21               ` Tejun Heo
2023-09-11 20:28                 ` Yosry Ahmed
2023-09-12 11:03                 ` Michal Hocko
2023-09-12 11:09                   ` Yosry Ahmed [this message]
2023-08-31 17:18 ` [PATCH v4 0/4] memcg: non-unified flushing for userspace stats Waiman Long
