From: Shakeel Butt <shakeelb@google.com>
To: Eric Dumazet <edumazet@google.com>
Cc: "Zhang, Cathy" <cathy.zhang@intel.com>,
	Linux MM <linux-mm@kvack.org>,  Cgroups <cgroups@vger.kernel.org>,
	Paolo Abeni <pabeni@redhat.com>,
	 "davem@davemloft.net" <davem@davemloft.net>,
	"kuba@kernel.org" <kuba@kernel.org>,
	 "Brandeburg, Jesse" <jesse.brandeburg@intel.com>,
	"Srinivas, Suresh" <suresh.srinivas@intel.com>,
	 "Chen, Tim C" <tim.c.chen@intel.com>,
	"You, Lizhen" <lizhen.you@intel.com>,
	 "eric.dumazet@gmail.com" <eric.dumazet@gmail.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>
Subject: Re: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc as a proper size
Date: Thu, 11 May 2023 10:10:28 -0700
Message-ID: <CALvZod5sbwXYqPZavojs1cvspxZv1iFHBG8=LQGNodinLXVL=w@mail.gmail.com>
In-Reply-To: <CANn89iKoB2hn8QKBw+8faL4MWZ1ByDW8T9UHyS9G-8c11mWdOw@mail.gmail.com>

On Thu, May 11, 2023 at 9:35 AM Eric Dumazet <edumazet@google.com> wrote:
>
[...]
>
> The suspect part is really:
>
> >      8.98%  mc-worker        [kernel.vmlinux]          [k] page_counter_cancel
> >             |
> >              --8.97%--page_counter_cancel
> >                        |
> >                         --8.97%--page_counter_uncharge
> >                                   drain_stock
> >                                   __refill_stock
> >                                   refill_stock
> >                                   |
> >                                    --8.91%--try_charge_memcg
> >                                              mem_cgroup_charge_skmem
> >                                              |
> >                                               --8.91%--__sk_mem_raise_allocated
> >                                                         __sk_mem_schedule
>
> Shakeel, networking has a per-cpu cache of +/- 1MB.
>
> Even with asymmetric alloc/free, this would mean that a 100Gbit NIC
> would require something like 25,000 operations on the shared cache
> line per second.
>
> Hardly an issue, I think.
>
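(For reference, the arithmetic behind that figure: 100Gbit/s is about
12.5GB/s, so with a +/- 1MB per-CPU window each direction crosses the
shared cache line roughly 12,500 times per second, i.e. ~25,000
operations combined.)
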
> memcg does not seem to have an equivalent strategy?

memcg has a +256KiB per-cpu cache (note the absence of '-': the
caching is one-directional). However, it seems like Cathy has already
tested with 4MiB (a 1024-page batch), which is comparable to the
networking per-cpu cache (i.e. a 2MiB window), and still sees the
issue. In addition, this is a single-machine test (no NIC), so I am
contemplating between (1) this is not a real-world workload and can
thus be ignored, or (2) implementing an asymmetric charge/uncharge
strategy for memcg.
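
To make option (2) concrete, here is a minimal user-space sketch of
the idea. It is not the kernel's memcg stock code, and the names
(PCPU_BATCH, struct pcpu_stock, etc.) are made up for illustration:
charges are served from a per-CPU stock refilled from the shared
counter in batches, while uncharges accumulate locally and are flushed
in batches, so a steady alloc/free pattern touches the shared cache
line only once per batch in each direction.

/* asym_stock.c: user-space sketch of an asymmetric per-cpu
 * charge/uncharge cache in front of a shared counter. Illustrative
 * only; not the actual mm/memcontrol.c implementation.
 */
#include <stdatomic.h>
#include <stdio.h>

#define PCPU_BATCH (64UL * 4096)	/* hypothetical 64-page batch */

static atomic_ulong shared_counter;	/* stands in for page_counter */

struct pcpu_stock {
	unsigned long charge_cache;	/* pre-charged, not yet consumed */
	unsigned long uncharge_cache;	/* released, not yet returned */
};

/* Charge path: serve from the local stock; touch the shared line
 * only when the stock runs dry, and then take a whole batch. */
static void charge(struct pcpu_stock *s, unsigned long bytes)
{
	while (s->charge_cache < bytes) {
		atomic_fetch_add(&shared_counter, PCPU_BATCH);
		s->charge_cache += PCPU_BATCH;
	}
	s->charge_cache -= bytes;
}

/* Uncharge path: accumulate locally and flush a full batch at a
 * time; this is the asymmetric half of the scheme. */
static void uncharge(struct pcpu_stock *s, unsigned long bytes)
{
	s->uncharge_cache += bytes;
	if (s->uncharge_cache >= PCPU_BATCH) {
		atomic_fetch_sub(&shared_counter, s->uncharge_cache);
		s->uncharge_cache = 0;
	}
}

int main(void)
{
	struct pcpu_stock s = { 0, 0 };
	int i;

	/* 10k page-sized alloc/free pairs touch shared_counter only
	 * about 2 * 10000/64 times instead of 20000 times. */
	for (i = 0; i < 10000; i++) {
		charge(&s, 4096);
		uncharge(&s, 4096);
	}
	printf("shared counter: %lu bytes\n",
	       atomic_load(&shared_counter));
	return 0;
}

The design point is the same one Eric describes for networking: both
hot paths stay CPU-local, and contention on the shared line scales
with bytes/batch rather than with the allocation rate.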


