linux-mm.kvack.org archive mirror
From: Pasha Tatashin <pasha.tatashin@soleen.com>
To: Yosry Ahmed <yosryahmed@google.com>
Cc: robin.murphy@arm.com, joro@8bytes.org, will@kernel.org,
	 iommu@lists.linux.dev, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org,  rientjes@google.com
Subject: Re: [PATCH] iommu/iova: use named kmem_cache for iova magazines
Date: Fri, 2 Feb 2024 12:52:03 -0500	[thread overview]
Message-ID: <CA+CK2bCCGXuB9QfAc+BZ_JWf872xy3uGE=-pUbhYJwZSkSdrew@mail.gmail.com> (raw)
In-Reply-To: <CAJD7tkbDwwzTfm5h6v5f8XSN8KduBy6h7EVuQt0CAfX--Nb0gQ@mail.gmail.com>

> > +static int iova_magazine_cache_init(void)
> > +{
> > +       int ret = 0;
> > +
> > +       mutex_lock(&iova_magazine_cache_mutex);
> > +
> > +       iova_magazine_cache_users++;
> > +       if (iova_magazine_cache_users > 1)
> > +               goto out_unlock;
> > +
> > +       iova_magazine_cache = kmem_cache_create("iommu_iova_magazine",
> > +                                               sizeof(struct iova_magazine),
> > +                                               0, SLAB_HWCACHE_ALIGN, NULL);
>
> Could this slab cache be merged with a compatible one in the slab
> code? If this happens, do we still get a separate entry in
> /proc/slabinfo?

Hi Yosry,

Good suggestion to check for merging. I have checked:
iommu_iova_magazine is not merged.
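
(If anyone wants to repeat the check, one quick way with the patch
applied is to look the cache up by name:

  grep iommu_iova_magazine /proc/slabinfo

If the line is present, the cache was not merged; a merged cache is
created as an alias of an existing compatible cache and never gets its
own /proc/slabinfo entry.)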

> It may be useful to use SLAB_NO_MERGE if the purpose is to
> specifically have observability into this slab cache, but the comments
> above the flag make me think I may be misunderstanding it.

SLAB_NO_MERGE may reduce performance and hurt fragmentation efficiency,
so it is better to keep it as-is.
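
For reference only, the hypothetical SLAB_NO_MERGE variant would be a
one-flag change to the creation call quoted above, not something this
patch does:

        /*
         * Hypothetical variant: SLAB_NO_MERGE forces a dedicated cache,
         * so "iommu_iova_magazine" is always accounted separately, at
         * the cost of giving up the packing/reuse benefits of merging.
         */
        iova_magazine_cache = kmem_cache_create("iommu_iova_magazine",
                                                sizeof(struct iova_magazine),
                                                0,
                                                SLAB_HWCACHE_ALIGN | SLAB_NO_MERGE,
                                                NULL);

Merging can also be disabled globally with the slab_nomerge boot
parameter, which is handy for one-off debugging without a code change.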

Pasha

On Thu, Feb 1, 2024 at 5:29 PM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> On Thu, Feb 1, 2024 at 11:30 AM Pasha Tatashin
> <pasha.tatashin@soleen.com> wrote:
> >
> > From: Pasha Tatashin <pasha.tatashin@soleen.com>
> >
> > The magazine buffers can take gigabytes of kmem memory, dominating all
> > other allocations. For observability purposes, create a named slab cache so
> > that the iova magazine memory overhead can be clearly observed.
> >
> > With this change:
> >
> > > slabtop -o | head
> >  Active / Total Objects (% used)    : 869731 / 952904 (91.3%)
> >  Active / Total Slabs (% used)      : 103411 / 103974 (99.5%)
> >  Active / Total Caches (% used)     : 135 / 211 (64.0%)
> >  Active / Total Size (% used)       : 395389.68K / 411430.20K (96.1%)
> >  Minimum / Average / Maximum Object : 0.02K / 0.43K / 8.00K
> >
> > OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
> > 244412 244239 99%    1.00K  61103       4    244412K iommu_iova_magazine
> >  91636  88343 96%    0.03K    739     124      2956K kmalloc-32
> >  75744  74844 98%    0.12K   2367      32      9468K kernfs_node_cache
> >
> > On this machine it is now clear that the magazines use 242M of kmem memory.
> >
> > Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
> > ---
> >  drivers/iommu/iova.c | 57 +++++++++++++++++++++++++++++++++++++++++---
> >  1 file changed, 54 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> > index d30e453d0fb4..617bbc2b79f5 100644
> > --- a/drivers/iommu/iova.c
> > +++ b/drivers/iommu/iova.c
> > @@ -630,6 +630,10 @@ EXPORT_SYMBOL_GPL(reserve_iova);
> >
> >  #define IOVA_DEPOT_DELAY msecs_to_jiffies(100)
> >
> > +static struct kmem_cache *iova_magazine_cache;
> > +static unsigned int iova_magazine_cache_users;
> > +static DEFINE_MUTEX(iova_magazine_cache_mutex);
> > +
> >  struct iova_magazine {
> >         union {
> >                 unsigned long size;
> > @@ -654,11 +658,51 @@ struct iova_rcache {
> >         struct delayed_work work;
> >  };
> >
> > +static int iova_magazine_cache_init(void)
> > +{
> > +       int ret = 0;
> > +
> > +       mutex_lock(&iova_magazine_cache_mutex);
> > +
> > +       iova_magazine_cache_users++;
> > +       if (iova_magazine_cache_users > 1)
> > +               goto out_unlock;
> > +
> > +       iova_magazine_cache = kmem_cache_create("iommu_iova_magazine",
> > +                                               sizeof(struct iova_magazine),
> > +                                               0, SLAB_HWCACHE_ALIGN, NULL);
>
> Could this slab cache be merged with a compatible one in the slab
> code? If this happens, do we still get a separate entry in
> /proc/slabinfo?
>
> It may be useful to use SLAB_NO_MERGE if the purpose is to
> specifically have observability into this slab cache, but the comments
> above the flag make me think I may be misunderstanding it.


Thread overview: 10+ messages
2024-02-01 19:30 Pasha Tatashin
2024-02-01 20:56 ` Robin Murphy
2024-02-01 21:06   ` Pasha Tatashin
2024-02-01 21:23     ` Robin Murphy
2024-02-01 22:10       ` Pasha Tatashin
2024-02-02 18:04       ` Pasha Tatashin
2024-02-02 18:27         ` Robin Murphy
2024-02-02 19:14           ` Pasha Tatashin
2024-02-01 22:28 ` Yosry Ahmed
2024-02-02 17:52   ` Pasha Tatashin [this message]
