From: Johannes Weiner <hannes@cmpxchg.org>
To: Waiman Long <longman@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>,
Vladimir Davydov <vdavydov.dev@gmail.com>,
Andrew Morton <akpm@linux-foundation.org>,
Tejun Heo <tj@kernel.org>, Christoph Lameter <cl@linux.com>,
Pekka Enberg <penberg@kernel.org>,
David Rientjes <rientjes@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Vlastimil Babka <vbabka@suse.cz>, Roman Gushchin <guro@fb.com>,
linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
linux-mm@kvack.org, Shakeel Butt <shakeelb@google.com>,
Muchun Song <songmuchun@bytedance.com>,
Alex Shi <alex.shi@linux.alibaba.com>,
Chris Down <chris@chrisdown.name>,
Yafang Shao <laoar.shao@gmail.com>,
Wei Yang <richard.weiyang@gmail.com>,
Masayoshi Mizuma <msys.mizuma@gmail.com>,
Xing Zhengjun <zhengjun.xing@linux.intel.com>,
Matthew Wilcox <willy@infradead.org>
Subject: Re: [PATCH v4 2/5] mm/memcg: Cache vmstat data in percpu memcg_stock_pcp
Date: Mon, 19 Apr 2021 12:38:27 -0400
Message-ID: <YH2yA1oZoyQoMhAH@cmpxchg.org>
In-Reply-To: <20210419000032.5432-3-longman@redhat.com>
On Sun, Apr 18, 2021 at 08:00:29PM -0400, Waiman Long wrote:
> Before the new slab memory controller with per-object byte charging,
> charging and vmstat data updates happened only when new slab pages
> were allocated or freed. Now they are done with every kmem_cache_alloc()
> and kmem_cache_free(). This causes additional overhead for workloads
> that generate a lot of alloc and free calls.
>
> The memcg_stock_pcp is used to cache byte charges for a specific
> obj_cgroup to reduce that overhead. To further reduce it, this patch
> caches the vmstat data in the memcg_stock_pcp structure as well,
> until a page size worth of updates accumulates or other cached data
> change. Caching the vmstat data in the per-cpu stock replaces two
> writes to non-hot cachelines (one for the memcg-specific and one for
> the memcg-lruvec-specific vmstat data) with a single write to a hot
> local stock cacheline.
>
> On a 2-socket Cascade Lake server with instrumentation enabled and
> this patch applied, about 20% (634400 out of 3243830) of the calls
> to mod_objcg_state() after initial boot led to an actual call to
> __mod_objcg_state(). During a parallel kernel build, the figure was
> about 17% (24329265 out of 142512465). So caching the vmstat data
> reduces the number of calls to __mod_objcg_state() by more than 80%.
>
> Signed-off-by: Waiman Long <longman@redhat.com>
> Reviewed-by: Shakeel Butt <shakeelb@google.com>
> ---
> mm/memcontrol.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++---
> 1 file changed, 61 insertions(+), 3 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index dc9032f28f2e..693453f95d99 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2213,7 +2213,10 @@ struct memcg_stock_pcp {
>
> #ifdef CONFIG_MEMCG_KMEM
> struct obj_cgroup *cached_objcg;
> + struct pglist_data *cached_pgdat;
> unsigned int nr_bytes;
> + int vmstat_idx;
> + int vmstat_bytes;
> #endif
>
> struct work_struct work;
> @@ -3150,8 +3153,9 @@ void __memcg_kmem_uncharge_page(struct page *page, int order)
> css_put(&memcg->css);
> }
>
> -void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
> - enum node_stat_item idx, int nr)
> +static inline void __mod_objcg_state(struct obj_cgroup *objcg,
> + struct pglist_data *pgdat,
> + enum node_stat_item idx, int nr)
This naming is dangerous, as the __mod_foo naming scheme we use
everywhere else suggests it's the same function as mod_foo() just with
preemption/irqs disabled.
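For comparison, this is how vmstat pairs them: the underscored
version is the same operation, just expecting irqs to be off already
(a sketch of the existing convention, close to what's in mm/vmstat.c):

	/* Caller must have irqs disabled */
	void __mod_node_page_state(struct pglist_data *pgdat,
				   enum node_stat_item item, long delta);

	void mod_node_page_state(struct pglist_data *pgdat,
				 enum node_stat_item item, long delta)
	{
		unsigned long flags;

		local_irq_save(flags);
		__mod_node_page_state(pgdat, item, delta);
		local_irq_restore(flags);
	}

Here, on the other hand, mod_objcg_state() caches and
__mod_objcg_state() writes the counters - two different operations.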
> @@ -3159,10 +3163,53 @@ void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
> rcu_read_lock();
> memcg = obj_cgroup_memcg(objcg);
> lruvec = mem_cgroup_lruvec(memcg, pgdat);
> - mod_memcg_lruvec_state(lruvec, idx, nr);
> + __mod_memcg_lruvec_state(lruvec, idx, nr);
> rcu_read_unlock();
> }
>
> +void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
> + enum node_stat_item idx, int nr)
> +{
> + struct memcg_stock_pcp *stock;
> + unsigned long flags;
> +
> + local_irq_save(flags);
> + stock = this_cpu_ptr(&memcg_stock);
> +
> + /*
> + * Save vmstat data in stock and skip vmstat array update unless
> + * accumulating over a page of vmstat data or when pgdat or idx
> + * changes.
> + */
> + if (stock->cached_objcg != objcg) {
> + /* Output the current data as is */
When you get here with the wrong objcg and hit the cold path, it's
usually immediately followed by an uncharge -> refill_obj_stock() that
will then flush and reset cached_objcg.
Instead of going through two cold paths, why not flush the old objcg
right away and set the new one, so that refill_obj_stock() can take
the fast path?
> + } else if (!stock->vmstat_bytes) {
> + /* Save the current data */
> + stock->vmstat_bytes = nr;
> + stock->vmstat_idx = idx;
> + stock->cached_pgdat = pgdat;
> + nr = 0;
> + } else if ((stock->cached_pgdat != pgdat) ||
> + (stock->vmstat_idx != idx)) {
> + /* Output the cached data & save the current data */
> + swap(nr, stock->vmstat_bytes);
> + swap(idx, stock->vmstat_idx);
> + swap(pgdat, stock->cached_pgdat);
Is this optimization worth doing?
You later split vmstat_bytes and idx doesn't change anymore.
How often does the pgdat change? This is a per-cpu cache after all,
and the numa node a given cpu allocates from tends to not change that
often. Even with interleaving mode, which I think is pretty rare, the
interleaving happens at the slab/page level, not the object level, and
the cache isn't bigger than a page anyway.
> + } else {
> + stock->vmstat_bytes += nr;
> + if (abs(stock->vmstat_bytes) > PAGE_SIZE) {
> + nr = stock->vmstat_bytes;
> + stock->vmstat_bytes = 0;
> + } else {
> + nr = 0;
> + }
...and this is the regular overflow handling done by the objcg and
memcg charge stocks as well.
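For reference, the charge stock side currently looks roughly like
this (paraphrased from mm/memcontrol.c; details may vary by tree):

	static void refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
	{
		struct memcg_stock_pcp *stock;
		unsigned long flags;

		local_irq_save(flags);

		stock = this_cpu_ptr(&memcg_stock);
		if (stock->cached_objcg != objcg) { /* reset if necessary */
			drain_obj_stock(stock);
			obj_cgroup_get(objcg);
			stock->cached_objcg = objcg;
			stock->nr_bytes = atomic_xchg(&objcg->nr_charged_bytes, 0);
		}
		stock->nr_bytes += nr_bytes;

		/* cache overflow: flush everything back to the atomics */
		if (stock->nr_bytes > PAGE_SIZE)
			drain_obj_stock(stock);

		local_irq_restore(flags);
	}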
How about this?
if (stock->cached_objcg != objcg ||
stock->cached_pgdat != pgdat ||
stock->vmstat_idx != idx) {
drain_obj_stock(stock);
obj_cgroup_get(objcg);
stock->cached_objcg = objcg;
stock->nr_bytes = atomic_xchg(&objcg->nr_charged_bytes, 0);
stock->vmstat_idx = idx;
}
	stock->vmstat_bytes += nr;
	if (abs(stock->vmstat_bytes) > PAGE_SIZE)
drain_obj_stock(stock);
(Maybe we could be clever, here since the charge and stat caches are
the same size: don't flush an oversized charge cache from
refill_obj_stock in the charge path, but leave it to the
mod_objcg_state() that follows; likewise don't flush an undersized
vmstat stock from mod_objcg_state() in the uncharge path, but leave it
to the refill_obj_stock() that follows. Could get a bit complicated...)
> @@ -3213,6 +3260,17 @@ static void drain_obj_stock(struct memcg_stock_pcp *stock)
> stock->nr_bytes = 0;
> }
>
> + /*
> + * Flush the vmstat data in current stock
> + */
> + if (stock->vmstat_bytes) {
> + __mod_objcg_state(old, stock->cached_pgdat, stock->vmstat_idx,
> + stock->vmstat_bytes);
... then inline __mod_objcg_state() here into the only caller, and
there won't be any need to come up with a better name.
> + stock->cached_pgdat = NULL;
> + stock->vmstat_bytes = 0;
> + stock->vmstat_idx = 0;
> + }
> +
> obj_cgroup_put(old);
> stock->cached_objcg = NULL;
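I.e. with __mod_objcg_state() inlined, the flush in drain_obj_stock()
could read something like this (just a sketch, reusing the helpers
quoted above; irqs are already off in this path):

	if (stock->vmstat_bytes) {
		struct mem_cgroup *memcg;
		struct lruvec *lruvec;

		rcu_read_lock();
		memcg = obj_cgroup_memcg(old);
		lruvec = mem_cgroup_lruvec(memcg, stock->cached_pgdat);
		__mod_memcg_lruvec_state(lruvec, stock->vmstat_idx,
					 stock->vmstat_bytes);
		rcu_read_unlock();

		stock->cached_pgdat = NULL;
		stock->vmstat_bytes = 0;
		stock->vmstat_idx = 0;
	}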