linux-mm.kvack.org archive mirror
From: Roman Gushchin <guro@fb.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: Roman Gushchin <guroan@gmail.com>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	Alexey Dobriyan <adobriyan@gmail.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Michal Hocko <mhocko@suse.com>, Vlastimil Babka <vbabka@suse.cz>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Kernel Team <Kernel-team@fb.com>
Subject: Re: [RFC 4/4] mm: show number of vmalloc pages in /proc/meminfo
Date: Fri, 14 Dec 2018 18:42:50 +0000	[thread overview]
Message-ID: <20181214184244.GA5196@castle.DHCP.thefacebook.com> (raw)
In-Reply-To: <20181214182904.GE10600@bombadil.infradead.org>

On Fri, Dec 14, 2018 at 10:29:04AM -0800, Matthew Wilcox wrote:
> On Fri, Dec 14, 2018 at 10:07:20AM -0800, Roman Gushchin wrote:
> > Vmalloc() is used more and more these days (kernel stacks, bpf and
> > the percpu allocator are new top users), and the total share of
> > memory consumed by vmalloc() can be pretty significant and changes
> > dynamically.
> > 
> > /proc/meminfo is the best place to display this information:
> > its main goal is to show the top consumers of memory.
> > 
> > Since the VmallocUsed field in /proc/meminfo has not been in use
> > for quite a long time (it has been hardcoded to 0 since
> > commit a5ad88ce8c7f ("mm: get rid of 'vmalloc_info' from
> > /proc/meminfo")), let's reuse it for showing the actual
> > physical memory consumption of vmalloc().
> 
> Do you see significant contention on nr_vmalloc_pages?  Also, if it's
> just an atomic_long_t, is it worth having an accessor for it?  And if
> it is worth having an accessor for it, then it can be static.

Not really, so I decided that a per-cpu counter is overkill
right now; we can easily switch over once we notice any contention.
Will add static.
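
Roughly what this could look like (just a sketch, not the final patch;
the accessor name and the exact call sites are assumptions on my side):

	/* mm/vmalloc.c -- keep the counter private to vmalloc.c */
	static atomic_long_t nr_vmalloc_pages;

	/* small accessor so the variable itself can stay static */
	unsigned long vmalloc_nr_pages(void)
	{
		return atomic_long_read(&nr_vmalloc_pages);
	}

	/* updated where pages get attached to / detached from a vm_struct,
	 * e.g. in __vmalloc_area_node() and __vunmap() (assumed call sites):
	 */
	atomic_long_add(area->nr_pages, &nr_vmalloc_pages);	/* on allocation */
	atomic_long_sub(area->nr_pages, &nr_vmalloc_pages);	/* on free */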

> 
> Also, I seem to be missing 3/4.
> 

Hm, https://lkml.org/lkml/2018/12/14/1048 ?

Thanks!
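
For completeness, the /proc/meminfo side of this could then be a
one-liner (again only a sketch, assuming the vmalloc_nr_pages()
accessor sketched above and the existing show_val_kb() helper in
fs/proc/meminfo.c, which prints a page count in kB):

	/* fs/proc/meminfo.c, in meminfo_proc_show() */
	show_val_kb(m, "VmallocUsed:    ", vmalloc_nr_pages());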


Thread overview: 13+ messages
2018-12-14 18:07 [RFC 0/4] vmalloc enhancements Roman Gushchin
2018-12-14 18:07 ` [RFC 1/4] mm: refactor __vunmap() to avoid duplicated call to find_vm_area() Roman Gushchin
2018-12-14 18:24   ` Matthew Wilcox
2018-12-14 18:07 ` [RFC 2/4] mm: separate memory allocation and actual work in alloc_vmap_area() Roman Gushchin
2018-12-14 18:13   ` Matthew Wilcox
2018-12-14 19:40     ` Joe Perches
2018-12-14 19:45       ` Matthew Wilcox
2018-12-14 21:57         ` Roman Gushchin
2018-12-15  2:22         ` Joe Perches
2018-12-14 18:07 ` [RFC 3/4] mm: allocate vmalloc metadata in one allocation Roman Gushchin
2018-12-14 18:07 ` [RFC 4/4] mm: show number of vmalloc pages in /proc/meminfo Roman Gushchin
2018-12-14 18:29   ` Matthew Wilcox
2018-12-14 18:42     ` Roman Gushchin [this message]
