From: "Zhang, Cathy" <cathy.zhang@intel.com>
To: Shakeel Butt <shakeelb@google.com>
Cc: Eric Dumazet <edumazet@google.com>, Linux MM <linux-mm@kvack.org>,
Cgroups <cgroups@vger.kernel.org>,
Paolo Abeni <pabeni@redhat.com>,
"davem@davemloft.net" <davem@davemloft.net>,
"kuba@kernel.org" <kuba@kernel.org>,
"Brandeburg, Jesse" <jesse.brandeburg@intel.com>,
"Srinivas, Suresh" <suresh.srinivas@intel.com>,
"Chen, Tim C" <tim.c.chen@intel.com>,
"You, Lizhen" <lizhen.you@intel.com>,
"eric.dumazet@gmail.com" <eric.dumazet@gmail.com>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>
Subject: RE: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc as a proper size
Date: Thu, 11 May 2023 06:59:19 +0000
Message-ID: <IA0PR11MB7355E486112E922AA6095CCCFC749@IA0PR11MB7355.namprd11.prod.outlook.com>
In-Reply-To: <CH3PR11MB73454C44EC8BCD43685BCB58FC749@CH3PR11MB7345.namprd11.prod.outlook.com>
> -----Original Message-----
> From: Zhang, Cathy
> Sent: Thursday, May 11, 2023 8:53 AM
> To: Shakeel Butt <shakeelb@google.com>
> Cc: Eric Dumazet <edumazet@google.com>; Linux MM <linux-
> mm@kvack.org>; Cgroups <cgroups@vger.kernel.org>; Paolo Abeni
> <pabeni@redhat.com>; davem@davemloft.net; kuba@kernel.org;
> Brandeburg, Jesse <jesse.brandeburg@intel.com>; Srinivas, Suresh
> <suresh.srinivas@intel.com>; Chen, Tim C <tim.c.chen@intel.com>; You,
> Lizhen <Lizhen.You@intel.com>; eric.dumazet@gmail.com;
> netdev@vger.kernel.org
> Subject: RE: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc as a proper
> size
>
>
>
> > -----Original Message-----
> > From: Shakeel Butt <shakeelb@google.com>
> > Sent: Thursday, May 11, 2023 3:00 AM
> > To: Zhang, Cathy <cathy.zhang@intel.com>
> > Cc: Eric Dumazet <edumazet@google.com>; Linux MM <linux-
> > mm@kvack.org>; Cgroups <cgroups@vger.kernel.org>; Paolo Abeni
> > <pabeni@redhat.com>; davem@davemloft.net; kuba@kernel.org;
> > Brandeburg,
> > Jesse <jesse.brandeburg@intel.com>; Srinivas, Suresh
> > <suresh.srinivas@intel.com>; Chen, Tim C <tim.c.chen@intel.com>; You,
> > Lizhen <lizhen.you@intel.com>; eric.dumazet@gmail.com;
> > netdev@vger.kernel.org
> > Subject: Re: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc as a
> > proper size
> >
> > On Wed, May 10, 2023 at 9:09 AM Zhang, Cathy <cathy.zhang@intel.com>
> > wrote:
> > >
> > >
> > [...]
> > > > > >
> > > > > > Have you tried to increase batch sizes ?
> > > > >
> > > > > I just picked 256 and 1024 for a try, but it didn't help; the
> > > > > overhead still exists.
> > > >
> > > > This makes no sense at all.
> > >
> > > Eric,
> > >
> > > I added a pr_info() in try_charge_memcg() to print nr_pages whenever
> > > nr_pages >= MEMCG_CHARGE_BATCH (a sketch follows after the quoted
> > > text). Apart from printing 64 while the instances initialize, there
> > > is no other output during the run. That means nr_pages never exceeds
> > > 64, which I guess is why increasing MEMCG_CHARGE_BATCH doesn't affect
> > > this case.
> > >
> >
> > I am assuming you increased MEMCG_CHARGE_BATCH to 256 and 1024 but
> > that did not help. To me that just means there is a different
> > bottleneck in the memcg charging codepath. Can you please share the
> > perf profile? Please note that memcg charging does a lot of other
> > things as well like updating memcg stats and checking (and enforcing)
> > memory.high even if you have not set memory.high.
>
> Thanks Shakeel! I will dig into the details you mentioned. We use
> "sudo perf top -p $(docker inspect -f '{{.State.Pid}}' memcached_2)" to
> monitor one of the instances, and "sudo perf top" to check the overhead
> system-wide.
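
The pr_info() instrumentation mentioned above was along these lines (a
minimal sketch; the exact placement inside try_charge_memcg() in
mm/memcontrol.c is approximate):

static int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
			    unsigned int nr_pages)
{
	unsigned int batch = max(MEMCG_CHARGE_BATCH, nr_pages);

	/* Debug only: MEMCG_CHARGE_BATCH is 64 in this kernel, so this
	 * logs any charge request at or above the per-CPU batch size. */
	if (nr_pages >= MEMCG_CHARGE_BATCH)
		pr_info("%s: nr_pages=%u\n", __func__, nr_pages);

	/* ... existing charging logic unchanged ... */
}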
Here is the perf-top annotate output for the three hot memcg paths:
Showing cycles for page_counter_try_charge
Events Pcnt (>=5%)
Percent | Source code & Disassembly of elf for cycles (543288 samples, percent: local period)
---------------------------------------------------------------------------------------------------
0.00 : ffffffff8141388d: mov %r12,%rax
76.82 : ffffffff81413890: lock xadd %rax,(%rbx)
22.10 : ffffffff81413895: lea (%r12,%rax,1),%r15
Showing cycles for page_counter_cancel
Events Pcnt (>=5%)
Percent | Source code & Disassembly of elf for cycles (1004744 samples, percent: local period)
----------------------------------------------------------------------------------------------------
: 160 return i + xadd(&v->counter, i);
77.42 : ffffffff81413759: lock xadd %rax,(%rdi)
22.34 : ffffffff8141375e: sub %rsi,%rax
Showing cycles for try_charge_memcg
Events Pcnt (>=5%)
Percent | Source code & Disassembly of elf for cycles (256531 samples, percent: local period)
---------------------------------------------------------------------------------------------------
: 22 return __READ_ONCE((v)->counter);
77.53 : ffffffff8141df86: mov 0x100(%r13),%rdx
: 2826 READ_ONCE(memcg->memory.high);
19.45 : ffffffff8141df8d: mov 0x190(%r13),%rcx
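
As the disassembly shows, the cycles concentrate on the atomic
read-modify-write of the shared page_counter usage. For context, a
simplified sketch of page_counter_try_charge() from mm/page_counter.c
(failcnt, protection and watermark updates elided):

bool page_counter_try_charge(struct page_counter *counter,
			     unsigned long nr_pages,
			     struct page_counter **fail)
{
	struct page_counter *c;

	for (c = counter; c; c = c->parent) {
		long new;

		/* The hot "lock xadd": every charge from every CPU does
		 * an atomic RMW on the same usage cacheline. */
		new = atomic_long_add_return(nr_pages, &c->usage);
		if (new > c->max) {
			atomic_long_sub(nr_pages, &c->usage);
			*fail = c;
			goto failed;
		}
	}
	return true;

failed:
	/* Unwind the ancestors already charged; page_counter_cancel()
	 * is the atomic_long_sub_return() seen in the second profile. */
	for (c = counter; c != *fail; c = c->parent)
		page_counter_cancel(c, nr_pages);
	return false;
}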