From: Andrew Morton <akpm@linux-foundation.org>
To: Shakeel Butt <shakeelb@google.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Marek Szyprowski <m.szyprowski@samsung.com>
Subject: Re: [PATCH] percpu_counter: add percpu_counter_sum_all interface
Date: Mon, 7 Nov 2022 13:05:49 -0800	[thread overview]
Message-ID: <20221107130549.db68c48afe5f711b2e99c5c0@linux-foundation.org> (raw)
In-Reply-To: <20221105014013.930636-1-shakeelb@google.com>

On Sat,  5 Nov 2022 01:40:13 +0000 Shakeel Butt <shakeelb@google.com> wrote:

> The percpu_counter is used in scenarios where performance is more
> important than accuracy. For percpu_counter users who want more
> accurate information in their slow path, percpu_counter_sum() is
> provided, which traverses all the online CPUs to accumulate the data.
> It only needs to traverse online CPUs because percpu_counter
> implements a CPU offline callback which syncs the local data of the
> offlined CPU.
> 
> However, there is a small race window between the online-CPU traversal
> of percpu_counter_sum() and the CPU offline callback. The offline
> callback has to traverse all the percpu_counters on the system to flush
> the CPU-local data, which can be a lot of work. During that time, the
> CPU which is going offline has already been published as offline to all
> readers. So, while the offline callback is running, percpu_counter_sum()
> can be called for a counter which still has some state on the CPU going
> offline. Since percpu_counter_sum() only traverses online CPUs, it will
> skip that specific CPU, and the offline callback might not yet have
> flushed the state for that specific percpu_counter on that CPU.

OK, got it, thanks.
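
So, restating my understanding of the offline side (from memory and
simplified -- not a verbatim copy of lib/percpu_counter.c): the dead-CPU
callback has to walk every percpu_counter on the system and fold the
dying CPU's delta into the shared count, roughly

	/* Visit every counter in the system (hence "a lot of work") and
	 * flush the dying CPU's per-cpu delta into the shared count.
	 */
	static int percpu_counter_cpu_dead(unsigned int cpu)
	{
		struct percpu_counter *fbc;

		spin_lock_irq(&percpu_counters_lock);
		list_for_each_entry(fbc, &percpu_counters, list) {
			s32 *pcount;

			raw_spin_lock(&fbc->lock);
			pcount = per_cpu_ptr(fbc->counters, cpu);
			fbc->count += *pcount;
			*pcount = 0;
			raw_spin_unlock(&fbc->lock);
		}
		spin_unlock_irq(&percpu_counters_lock);
		return 0;
	}

and percpu_counter_sum() can run at any point during that walk, after
the CPU has already been marked offline.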

> Normally this is not an issue, because percpu_counter users can deal
> with some inaccuracy for a small time window. However, a new user,
> i.e. mm_struct on the cleanup path, wants to check the exact state of
> the percpu_counter through check_mm(). For such users, this patch
> introduces percpu_counter_sum_all(), which traverses all possible CPUs.

And uses it in fork.c:check_mm()!
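
If I'm reading it right, the only interesting difference from
percpu_counter_sum() is the CPU iterator -- paraphrasing (not quoting)
the new helper:

	/* Paraphrase of percpu_counter_sum_all(): same shape as
	 * percpu_counter_sum(), but it walks every possible CPU, so state
	 * sitting on a CPU whose offline callback hasn't run yet is
	 * still included.
	 */
	s64 percpu_counter_sum_all(struct percpu_counter *fbc)
	{
		s64 ret;
		int cpu;
		unsigned long flags;

		raw_spin_lock_irqsave(&fbc->lock, flags);
		ret = fbc->count;
		for_each_possible_cpu(cpu)	/* vs for_each_online_cpu() */
			ret += *per_cpu_ptr(fbc->counters, cpu);
		raw_spin_unlock_irqrestore(&fbc->lock, flags);
		return ret;
	}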

> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -756,7 +756,7 @@ static void check_mm(struct mm_struct *mm)
>  			 "Please make sure 'struct resident_page_types[]' is updated as well");
>  
>  	for (i = 0; i < NR_MM_COUNTERS; i++) {
> -		long x = percpu_counter_sum(&mm->rss_stat[i]);
> +		long x = percpu_counter_sum_all(&mm->rss_stat[i]);

check_mm() just became more expensive in some cases: roughly
nr_possible_cpus * 4 per-cpu reads, one pass over the possible CPUs for
each of the four MM counters.  I wonder if that is enough for people to
start caring about it.

check_mm() is presently non-optional and I'd be reluctant to change
this, given how commonly we see the "BUG: Bad rss-counter state"
getting reported (22 million hits in a google search!).

We could save a ton of that cost by running percpu_counter_sum() first,
then trying percpu_counter_sum_all() if percpu_counter_sum() indicated
an error.  This is only worth bothering about if the new check_mm()
cost is a concern.
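
IOW, something like the below in check_mm() -- untested, just to show
the idea, and only worth it if that cost turns out to matter:

	for (i = 0; i < NR_MM_COUNTERS; i++) {
		/* Cheap online-CPU sum first; only fall back to the full
		 * possible-CPU walk when we would otherwise report a bug.
		 */
		long x = percpu_counter_sum(&mm->rss_stat[i]);

		if (unlikely(x))
			x = percpu_counter_sum_all(&mm->rss_stat[i]);

		if (unlikely(x))
			pr_alert("BUG: Bad rss-counter state mm:%p type:%s val:%ld\n",
				 mm, resident_page_types[i], x);
	}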



