From: "Zhang, Cathy" <cathy.zhang@intel.com>
To: Eric Dumazet <edumazet@google.com>
Cc: Shakeel Butt <shakeelb@google.com>, Linux MM <linux-mm@kvack.org>,
	Cgroups <cgroups@vger.kernel.org>,
	Paolo Abeni <pabeni@redhat.com>,
	"davem@davemloft.net" <davem@davemloft.net>,
	"kuba@kernel.org" <kuba@kernel.org>,
	"Brandeburg, Jesse" <jesse.brandeburg@intel.com>,
	"Srinivas, Suresh" <suresh.srinivas@intel.com>,
	"Chen, Tim C" <tim.c.chen@intel.com>,
	"You, Lizhen" <lizhen.you@intel.com>,
	"eric.dumazet@gmail.com" <eric.dumazet@gmail.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>
Subject: RE: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc as a proper size
Date: Wed, 10 May 2023 16:09:23 +0000
Message-ID: <CH3PR11MB734502756F495CB9C520494FFC779@CH3PR11MB7345.namprd11.prod.outlook.com>
In-Reply-To: <CANn89i+J+ciJGPkWAFKDwhzJERFJr9_2Or=ehpwSTYO14qzHmA@mail.gmail.com>



> -----Original Message-----
> From: Eric Dumazet <edumazet@google.com>
> Sent: Wednesday, May 10, 2023 11:07 PM
> To: Zhang, Cathy <cathy.zhang@intel.com>
> Cc: Shakeel Butt <shakeelb@google.com>; Linux MM <linux-mm@kvack.org>;
> Cgroups <cgroups@vger.kernel.org>; Paolo Abeni <pabeni@redhat.com>;
> davem@davemloft.net; kuba@kernel.org; Brandeburg, Jesse
> <jesse.brandeburg@intel.com>; Srinivas, Suresh
> <suresh.srinivas@intel.com>; Chen, Tim C <tim.c.chen@intel.com>; You,
> Lizhen <lizhen.you@intel.com>; eric.dumazet@gmail.com;
> netdev@vger.kernel.org
> Subject: Re: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc as a proper
> size
> 
> On Wed, May 10, 2023 at 3:54 PM Zhang, Cathy <cathy.zhang@intel.com>
> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Eric Dumazet <edumazet@google.com>
> > > Sent: Wednesday, May 10, 2023 7:25 PM
> > > To: Zhang, Cathy <cathy.zhang@intel.com>
> > > Cc: Shakeel Butt <shakeelb@google.com>; Linux MM
> > > <linux-mm@kvack.org>; Cgroups <cgroups@vger.kernel.org>; Paolo Abeni
> > > <pabeni@redhat.com>; davem@davemloft.net; kuba@kernel.org;
> > > Brandeburg, Jesse <jesse.brandeburg@intel.com>; Srinivas, Suresh
> > > <suresh.srinivas@intel.com>; Chen, Tim C <tim.c.chen@intel.com>;
> > > You, Lizhen <lizhen.you@intel.com>; eric.dumazet@gmail.com;
> > > netdev@vger.kernel.org
> > > Subject: Re: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc as
> > > a proper size
> > >
> > > On Wed, May 10, 2023 at 1:11 PM Zhang, Cathy <cathy.zhang@intel.com>
> > > wrote:
> > > >
> > > > Hi Shakeel, Eric and all,
> > > >
> > > > How about adding a memory pressure check in sk_mem_uncharge() to
> > > > decide whether to keep part of the memory? That can help avoid the
> > > > issue you fixed and the problem we see on systems with more CPUs.
> > > >
> > > > The code draft is like this:
> > > >
> > > > static inline void sk_mem_uncharge(struct sock *sk, int size)
> > > > {
> > > >         int reclaimable;
> > > >         int reclaim_threshold = SK_RECLAIM_THRESHOLD;
> > > >
> > > >         if (!sk_has_account(sk))
> > > >                 return;
> > > >         sk->sk_forward_alloc += size;
> > > >
> > > >         /* Under memcg socket pressure, give everything back
> > > >          * immediately instead of caching it on the socket.
> > > >          */
> > > >         if (mem_cgroup_sockets_enabled && sk->sk_memcg &&
> > > >             mem_cgroup_under_socket_pressure(sk->sk_memcg)) {
> > > >                 sk_mem_reclaim(sk);
> > > >                 return;
> > > >         }
> > > >
> > > >         reclaimable = sk->sk_forward_alloc - sk_unused_reserved_mem(sk);
> > > >
> > > >         /* Otherwise keep up to reclaim_threshold bytes cached per
> > > >          * socket and return only the excess.
> > > >          */
> > > >         if (reclaimable > reclaim_threshold) {
> > > >                 reclaimable -= reclaim_threshold;
> > > >                 __sk_mem_reclaim(sk, reclaimable);
> > > >         }
> > > > }
> > > >
> > > > I've run a test with the new code and the result looks good: it
> > > > does not introduce latency, and RPS is the same.
> > > >
> > >
> > > It will not work for sockets that are idle after a burst.
> > > If we restore per-socket caches, we will need a shrinker.
> > > Trust me, we do not want that kind of big hammer crushing latencies.
> > >
> > > Have you tried increasing the batch sizes?
> >
> > I just tried 256 and 1024, but it didn't help; the overhead still exists.
> 
> This makes no sense at all.

Eric,

I added a pr_info() in try_charge_memcg() to print nr_pages whenever
nr_pages >= MEMCG_CHARGE_BATCH. Apart from printing 64 during the
initialization of the instances, there is no other output while the
test runs. That means nr_pages never exceeds 64, which I guess is why
increasing MEMCG_CHARGE_BATCH doesn't affect this case.
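
For reference, the check I added was roughly the following (a sketch;
the exact spot inside try_charge_memcg() is illustrative, not the
actual location):

	/* Debug sketch: report any charge request that reaches the
	 * per-cpu batch size.
	 */
	if (nr_pages >= MEMCG_CHARGE_BATCH)
		pr_info("%s: nr_pages=%u\n", __func__, nr_pages);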

> 
> I suspect a plain bug in mm/memcontrol.c
> 
> I will let mm experts work on this.
> 
> >
> > >
> > > Any kind of cache (even per-cpu) might need some adjustment when
> > > the core count or expected traffic increases.
> > > This was somewhat hinted at in
> > > commit 1813e51eece0ad6f4aacaeb738e7cced46feb470
> > > Author: Shakeel Butt <shakeelb@google.com>
> > > Date:   Thu Aug 25 00:05:06 2022 +0000
> > >
> > >     memcg: increase MEMCG_CHARGE_BATCH to 64
> > >
> > > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > > index 222d7370134c73e59fdbdf598ed8d66897dbbf1d..0418229d30c25d114132a1ed46ac01358cf21424 100644
> > > --- a/include/linux/memcontrol.h
> > > +++ b/include/linux/memcontrol.h
> > > @@ -334,7 +334,7 @@ struct mem_cgroup {
> > >   * TODO: maybe necessary to use big numbers in big irons or dynamic based of the
> > >   * workload.
> > >   */
> > > -#define MEMCG_CHARGE_BATCH 64U
> > > +#define MEMCG_CHARGE_BATCH 128U
> > >
> > >  extern struct mem_cgroup *root_mem_cgroup;
> > >
> > > diff --git a/include/net/sock.h b/include/net/sock.h
> > > index 656ea89f60ff90d600d16f40302000db64057c64..82f6a288be650f886b207e6a5e62a1d5dda808b0 100644
> > > --- a/include/net/sock.h
> > > +++ b/include/net/sock.h
> > > @@ -1433,8 +1433,8 @@ sk_memory_allocated(const struct sock *sk)
> > >         return proto_memory_allocated(sk->sk_prot);
> > >  }
> > >
> > > -/* 1 MB per cpu, in page units */
> > > -#define SK_MEMORY_PCPU_RESERVE (1 << (20 - PAGE_SHIFT))
> > > +/* 2 MB per cpu, in page units */
> > > +#define SK_MEMORY_PCPU_RESERVE (1 << (21 - PAGE_SHIFT))
> > >
> > >  static inline void
> > >  sk_memory_allocated_add(struct sock *sk, int amt)
> > >
> > >
> > > > > -----Original Message-----
> > > > > From: Shakeel Butt <shakeelb@google.com>
> > > > > Sent: Wednesday, May 10, 2023 12:10 AM
> > > > > To: Eric Dumazet <edumazet@google.com>; Linux MM
> > > > > <linux-mm@kvack.org>; Cgroups <cgroups@vger.kernel.org>
> > > > > Cc: Zhang, Cathy <cathy.zhang@intel.com>; Paolo Abeni
> > > > > <pabeni@redhat.com>; davem@davemloft.net; kuba@kernel.org;
> > > > > Brandeburg, Jesse <jesse.brandeburg@intel.com>; Srinivas, Suresh
> > > > > <suresh.srinivas@intel.com>; Chen, Tim C <tim.c.chen@intel.com>;
> > > > > You, Lizhen <lizhen.you@intel.com>; eric.dumazet@gmail.com;
> > > > > netdev@vger.kernel.org
> > > > > Subject: Re: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc
> > > > > as a proper size
> > > > >
> > > > > +linux-mm & cgroup
> > > > >
> > > > > Thread: https://lore.kernel.org/all/20230508020801.10702-1-cathy.zhang@intel.com/
> > > > >
> > > > > On Tue, May 9, 2023 at 8:43 AM Eric Dumazet
> > > > > <edumazet@google.com>
> > > > > wrote:
> > > > > >
> > > > > [...]
> > > > > > Some mm experts should chime in, this is not a networking issue.
> > > > >
> > > > > Most of the MM folks are busy at LSFMM this week. I will take a
> > > > > look at this soon.

