From: "Chen, Tim C" <tim.c.chen@intel.com>
To: Hugh Dickins <hughd@google.com>,
Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Chinner <dchinner@redhat.com>,
"Darrick J. Wong" <djwong@kernel.org>,
Christian Brauner <brauner@kernel.org>,
Carlos Maiolino <cem@kernel.org>,
Chuck Lever <chuck.lever@oracle.com>, Jan Kara <jack@suse.cz>,
Matthew Wilcox <willy@infradead.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Axel Rasmussen <axelrasmussen@google.com>,
"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>
Subject: RE: [PATCH 8/8] shmem,percpu_counter: add _limited_add(fbc, limit, amount)
Date: Thu, 5 Oct 2023 16:50:26 +0000
Message-ID: <DM6PR11MB4107F132CC1203486A91A4DEDCCAA@DM6PR11MB4107.namprd11.prod.outlook.com>
In-Reply-To: <bb817848-2d19-bcc8-39ca-ea179af0f0b4@google.com>
>Signed-off-by: Hugh Dickins <hughd@google.com>
>Cc: Tim Chen <tim.c.chen@intel.com>
>Cc: Dave Chinner <dchinner@redhat.com>
>Cc: Darrick J. Wong <djwong@kernel.org>
>---
>Tim, Dave, Darrick: I didn't want to waste your time on patches 1-7, which are
>just internal to shmem, and do not affect this patch (which applies to v6.6-rc
>and linux-next as is): but want to run this by you.
>
> include/linux/percpu_counter.h | 23 +++++++++++++++
> lib/percpu_counter.c           | 53 ++++++++++++++++++++++++++++++++++
> mm/shmem.c                     | 10 +++----
> 3 files changed, 81 insertions(+), 5 deletions(-)
>
>diff --git a/include/linux/percpu_counter.h b/include/linux/percpu_counter.h
>index d01351b1526f..8cb7c071bd5c 100644
>--- a/include/linux/percpu_counter.h
>+++ b/include/linux/percpu_counter.h
>@@ -57,6 +57,8 @@ void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount,
>                               s32 batch);
> s64 __percpu_counter_sum(struct percpu_counter *fbc);
> int __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch);
>+bool __percpu_counter_limited_add(struct percpu_counter *fbc, s64 limit,
>+                                  s64 amount, s32 batch);
> void percpu_counter_sync(struct percpu_counter *fbc);
>
> static inline int percpu_counter_compare(struct percpu_counter *fbc, s64 rhs)
>@@ -69,6 +71,13 @@ static inline void percpu_counter_add(struct percpu_counter *fbc, s64 amount)
>         percpu_counter_add_batch(fbc, amount, percpu_counter_batch);
> }
>
>+static inline bool
>+percpu_counter_limited_add(struct percpu_counter *fbc, s64 limit, s64 amount)
>+{
>+        return __percpu_counter_limited_add(fbc, limit, amount,
>+                                            percpu_counter_batch);
>+}
>+
> /*
>  * With percpu_counter_add_local() and percpu_counter_sub_local(), counts
>  * are accumulated in local per cpu counter and not in fbc->count until
>@@ -185,6 +194,20 @@ percpu_counter_add(struct percpu_counter *fbc, s64 amount)
>         local_irq_restore(flags);
> }
>
>+static inline bool
>+percpu_counter_limited_add(struct percpu_counter *fbc, s64 limit, s64 amount)
>+{
>+        unsigned long flags;
>+        s64 count;
>+
>+        local_irq_save(flags);
>+        count = fbc->count + amount;
>+        if (count <= limit)
>+                fbc->count = count;
>+        local_irq_restore(flags);
>+        return count <= limit;
>+}
>+
> /* non-SMP percpu_counter_add_local is the same with percpu_counter_add */
> static inline void percpu_counter_add_local(struct percpu_counter *fbc, s64 amount)
>diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
>index 9073430dc865..58a3392f471b 100644
>--- a/lib/percpu_counter.c
>+++ b/lib/percpu_counter.c
>@@ -278,6 +278,59 @@ int __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch)
> }
>EXPORT_SYMBOL(__percpu_counter_compare);
>
>+/*
>+ * Compare counter, and add amount if the total is within limit.
>+ * Return true if amount was added, false if it would exceed limit.
>+ */
>+bool __percpu_counter_limited_add(struct percpu_counter *fbc,
>+                                  s64 limit, s64 amount, s32 batch)
>+{
>+        s64 count;
>+        s64 unknown;
>+        unsigned long flags;
>+        bool good;
>+
>+        if (amount > limit)
>+                return false;
>+
>+        local_irq_save(flags);
>+        unknown = batch * num_online_cpus();
>+        count = __this_cpu_read(*fbc->counters);
>+
>+        /* Skip taking the lock when safe */
>+        if (abs(count + amount) <= batch &&
>+            fbc->count + unknown <= limit) {
>+                this_cpu_add(*fbc->counters, amount);
>+                local_irq_restore(flags);
>+                return true;
>+        }
>+
>+        raw_spin_lock(&fbc->lock);
>+        count = fbc->count + amount;
>+
Perhaps we can fast-path the case where we are certain
to exceed the limit, before paying for the per-cpu sum?
Something like (the lock is held and irqs are off at this
point, so the early return has to undo both):

        if (fbc->count + amount - unknown > limit) {
                raw_spin_unlock(&fbc->lock);
                local_irq_restore(flags);
                return false;
        }

Tim
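The arithmetic behind that early exit can be checked with a userspace model. This is only a sketch: `NCPUS`, `BATCH`, `struct model_counter`, and `limited_add()` are invented stand-ins for the percpu machinery, and locking/irq handling is omitted since the model is single-threaded.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

#define NCPUS 4      /* stand-in for num_online_cpus() */
#define BATCH 8      /* stand-in for percpu_counter_batch */

/* Invented userspace model of a percpu_counter. */
struct model_counter {
        int64_t count;          /* fbc->count, lock-protected in the kernel */
        int32_t pcpu[NCPUS];    /* per-cpu deltas, each within +/- BATCH */
};

/* Models __percpu_counter_limited_add() plus the suggested early exit. */
bool limited_add(struct model_counter *c, int cpu, int64_t limit, int64_t amount)
{
        int64_t unknown = (int64_t)BATCH * NCPUS;
        int64_t count = c->pcpu[cpu];

        if (amount > limit)
                return false;

        /* Lockless fast path: this cpu's delta stays within the batch and
         * the global count plus worst-case per-cpu drift is under limit. */
        if (llabs(count + amount) <= BATCH && c->count + unknown <= limit) {
                c->pcpu[cpu] += (int32_t)amount;
                return true;
        }

        /* Suggested early exit: even if every per-cpu delta were at its
         * most negative, the total would still exceed the limit. */
        if (c->count + amount - unknown > limit)
                return false;

        /* Slow path: fold in all per-cpu deltas for an exact comparison. */
        count = c->count + amount;
        for (int i = 0; i < NCPUS; i++)
                count += c->pcpu[i];
        if (count > limit)
                return false;

        /* Commit: migrate this cpu's delta into the global count. */
        c->count += c->pcpu[cpu] + amount;
        c->pcpu[cpu] = 0;
        return true;
}
```

The early exit can only refuse additions the exact sum would also refuse, since the folded deltas can subtract at most `unknown` from `fbc->count + amount`.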
>+        /* Skip percpu_counter_sum() when safe */
>+        if (count + unknown > limit) {
>+                s32 *pcount;
>+                int cpu;
>+
>+                for_each_cpu_or(cpu, cpu_online_mask, cpu_dying_mask) {
>+                        pcount = per_cpu_ptr(fbc->counters, cpu);
>+                        count += *pcount;
>+                }
>+        }
>+
>+        good = count <= limit;
>+        if (good) {
>+                count = __this_cpu_read(*fbc->counters);
>+                fbc->count += count + amount;
>+                __this_cpu_sub(*fbc->counters, count);
>+        }
>+
>+        raw_spin_unlock(&fbc->lock);
>+        local_irq_restore(flags);
>+        return good;
>+}
>+
> static int __init percpu_counter_startup(void)
> {
>         int ret;
>diff --git a/mm/shmem.c b/mm/shmem.c
>index 4f4ab26bc58a..7cb72c747954 100644
>--- a/mm/shmem.c
>+++ b/mm/shmem.c
>@@ -217,15 +217,15 @@ static int shmem_inode_acct_blocks(struct inode *inode, long pages)
>
>         might_sleep();  /* when quotas */
>         if (sbinfo->max_blocks) {
>-                if (percpu_counter_compare(&sbinfo->used_blocks,
>-                                           sbinfo->max_blocks - pages) > 0)
>+                if (!percpu_counter_limited_add(&sbinfo->used_blocks,
>+                                                sbinfo->max_blocks, pages))
>                         goto unacct;
>
>                 err = dquot_alloc_block_nodirty(inode, pages);
>-                if (err)
>+                if (err) {
>+                        percpu_counter_sub(&sbinfo->used_blocks, pages);
>                         goto unacct;
>-
>-                percpu_counter_add(&sbinfo->used_blocks, pages);
>+                }
>         } else {
>                 err = dquot_alloc_block_nodirty(inode, pages);
>                 if (err)
>--
>2.35.3
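On the shmem side, the ordering the hunk establishes (reserve against the block counter first, charge quota second, roll the reservation back if quota fails) can also be sketched as a userspace model. Everything here is an invented stand-in: `struct sb_model`, `counter_limited_add()`, and `quota_alloc()` mimic `sbinfo`, `percpu_counter_limited_add()`, and `dquot_alloc_block_nodirty()` without any of the real concurrency.

```c
#include <stdbool.h>

/* Invented single-threaded model of the shmem superblock accounting. */
struct sb_model {
        long max_blocks;    /* 0 means "no block limit" */
        long used_blocks;   /* stands in for the percpu counter */
        long quota_left;    /* stands in for the dquot allocation */
};

/* Models percpu_counter_limited_add(): add only if within the limit. */
static bool counter_limited_add(struct sb_model *sb, long pages)
{
        if (sb->used_blocks + pages > sb->max_blocks)
                return false;
        sb->used_blocks += pages;
        return true;
}

/* Models dquot_alloc_block_nodirty(): nonzero on quota exhaustion. */
static int quota_alloc(struct sb_model *sb, long pages)
{
        if (sb->quota_left < pages)
                return -1;      /* -EDQUOT in the kernel */
        sb->quota_left -= pages;
        return 0;
}

/* Models the new shmem_inode_acct_blocks() ordering. */
int acct_blocks(struct sb_model *sb, long pages)
{
        if (sb->max_blocks) {
                if (!counter_limited_add(sb, pages))
                        return -1;      /* -ENOSPC */
                if (quota_alloc(sb, pages)) {
                        /* Undo the reservation the counter took above. */
                        sb->used_blocks -= pages;
                        return -1;
                }
        } else if (quota_alloc(sb, pages)) {
                return -1;
        }
        return 0;
}
```

The point of the reordering is that the limited add is both the check and the reservation in one step, so the only undo path left is the quota failure.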