linux-mm.kvack.org archive mirror
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: Michal Hocko <mhocko@suse.cz>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH] page_cgroup: Reduce allocation overhead for page_cgroup array for CONFIG_SPARSEMEM v4
Date: Mon, 28 Feb 2011 18:48:21 +0900
Message-ID: <20110228184821.f10dba19.kamezawa.hiroyu@jp.fujitsu.com>
In-Reply-To: <20110228095316.GC4648@tiehlicka.suse.cz>

On Mon, 28 Feb 2011 10:53:16 +0100
Michal Hocko <mhocko@suse.cz> wrote:

> On Mon 28-02-11 18:23:22, KAMEZAWA Hiroyuki wrote:
> [...]
> > > From 84a9555741b59cb2a0a67b023e4bd0f92c670ca1 Mon Sep 17 00:00:00 2001
> > > From: Michal Hocko <mhocko@suse.cz>
> > > Date: Thu, 24 Feb 2011 11:25:44 +0100
> > > Subject: [PATCH] page_cgroup: Reduce allocation overhead for page_cgroup array for CONFIG_SPARSEMEM
> > > 
> > > Currently we are allocating a single page_cgroup array per memory
> > > section (stored in mem_section->base) when CONFIG_SPARSEMEM is selected.
> > > This is correct but memory inefficient solution because the allocated
> > > memory (unless we fall back to vmalloc) is not kmalloc friendly:
> > >         - 32b: 16384 entries * 20B = 327680B, so the 524288B slab
> > >           cache is used
> > >         - 32b with PAE: 131072 entries * 20B = 2621440B, so the
> > >           4194304B slab cache is used
> > >         - 64b: 32768 entries * 40B = 1310720B, so the 2097152B slab
> > >           cache is used
> > > 
> > > This is ~37% wasted space per memory section, and it adds up across
> > > the whole of memory. On an x86_64 machine it comes to something like
> > > 6MB per 1GB of RAM.
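
For anyone double-checking those figures (assuming the usual 128MB
sections on x86_64, i.e. 32768 pages per section): 32768 entries * 40B
= 1310720B of data in a 2097152B slab leaves 786432B unused, which is
37.5%; 1GB of RAM spans 8 such sections, and 8 * 786432B = 6291456B,
i.e. the ~6MB quoted above.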
> > > 
> > > We can reduce the internal fragmentation by using alloc_pages_exact,
> > > which allocates a power-of-two block and frees the unused tail pages,
> > > so we get down to <4kB of wasted memory per section, which is much
> > > better.
> > > 
> > > We still need a fallback to vmalloc because we have no guarantee that
> > > contiguous memory of that size (up to order-10 in the PAE case) will
> > > be available later on during memory hotplug events.
> > > 
> > > Signed-off-by: Michal Hocko <mhocko@suse.cz>
> > > CC: Dave Hansen <dave@linux.vnet.ibm.com>
> > > CC: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
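
The allocation path the changelog describes would look roughly like the
following (a sketch reconstructed from the description above, not the
literal hunk; the function name, the nid parameter, and the GFP flags
are assumptions):

	static void *alloc_page_cgroup(size_t size, int nid)
	{
		void *addr = NULL;

		/* Try a physically contiguous, page-aligned block first;
		 * alloc_pages_exact frees the unused tail pages, so the
		 * waste per section stays below PAGE_SIZE. */
		addr = alloc_pages_exact(size, GFP_KERNEL | __GFP_NOWARN);
		if (addr)
			return addr;

		/* A high-order allocation can fail after boot (e.g. on
		 * memory hotplug), so fall back to virtually contiguous
		 * memory. */
		if (node_state(nid, N_HIGH_MEMORY))
			addr = vmalloc_node(size, nid);
		else
			addr = vmalloc(size);

		return addr;
	}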
> > 
> > Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> 
> Thanks. I will repost it with Andrew in the CC.
> 
> > 
> > But... a nitpick, which may be my own fault:
> [...]
> > > +static void free_page_cgroup(void *addr)
> > > +{
> > > +	if (is_vmalloc_addr(addr)) {
> > > +		vfree(addr);
> > > +	} else {
> > > +		struct page *page = virt_to_page(addr);
> > > +		if (!PageReserved(page)) { /* Is bootmem ? */
> > 
> > I think we never see PageReserved if we just use alloc_pages_exact()/vmalloc().
> 
> I have checked that and we really do not (unless I am missing some
> subtle side effects): the PageReserved check was there for bootmem
> allocations, and neither alloc_pages_exact nor vmalloc hands out
> reserved pages. Anyway, I think we should still at least BUG_ON in
> that case.
> 
> > Maybe my old patch was not enough and this kind of junk is still
> > left over in the original code.
> 
> Should I incorporate it into the patch? I think a separate one would
> be better for readability.
> 
> ---
> From e7a897a42b526620eb4afada2d036e1c9ff9e62a Mon Sep 17 00:00:00 2001
> From: Michal Hocko <mhocko@suse.cz>
> Date: Mon, 28 Feb 2011 10:43:12 +0100
> Subject: [PATCH] page_cgroup array is never stored on reserved pages
> 
> KAMEZAWA Hiroyuki noted that free_page_cgroup doesn't have to check for
> PageReserved because we never store the array on reserved pages
> (neither alloc_pages_exact nor vmalloc returns such pages).
> 
> So we can replace the check with a BUG_ON.
> 
> Signed-off-by: Michal Hocko <mhocko@suse.cz>
> CC: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
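
With that follow-up applied, the free path quoted earlier would look
roughly like this (a sketch, not the literal hunk; the table_size
computation is inferred from the allocation side and is an assumption):

	static void free_page_cgroup(void *addr)
	{
		if (is_vmalloc_addr(addr)) {
			vfree(addr);
		} else {
			struct page *page = virt_to_page(addr);
			/* size of one section's page_cgroup array (assumed) */
			size_t table_size =
				sizeof(struct page_cgroup) * PAGES_PER_SECTION;

			/* Neither alloc_pages_exact nor vmalloc returns
			 * reserved (bootmem) pages, so this must never
			 * trigger. */
			BUG_ON(PageReserved(page));
			free_pages_exact(addr, table_size);
		}
	}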

Thank you.
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>


Thread overview: 15+ messages
2011-02-23 15:10 [RFC PATCH] page_cgroup: Reduce allocation overhead for page_cgroup array for CONFIG_SPARSEMEM Michal Hocko
2011-02-23 18:19 ` Dave Hansen
2011-02-23 23:52   ` KAMEZAWA Hiroyuki
2011-02-24  9:35     ` Michal Hocko
2011-02-24 10:02       ` KAMEZAWA Hiroyuki
2011-02-24  9:33   ` Michal Hocko
2011-02-24 13:40   ` [RFC PATCH] page_cgroup: Reduce allocation overhead for page_cgroup array for CONFIG_SPARSEMEM v2 Michal Hocko
2011-02-25  3:25     ` KAMEZAWA Hiroyuki
2011-02-25  9:53       ` [RFC PATCH] page_cgroup: Reduce allocation overhead for page_cgroup array for CONFIG_SPARSEMEM v3 Michal Hocko
2011-02-28  0:53         ` KAMEZAWA Hiroyuki
2011-02-28  9:12           ` [RFC PATCH] page_cgroup: Reduce allocation overhead for page_cgroup array for CONFIG_SPARSEMEM v4 Michal Hocko
2011-02-28  9:23             ` KAMEZAWA Hiroyuki
2011-02-28  9:53               ` Michal Hocko
2011-02-28  9:48                 ` KAMEZAWA Hiroyuki [this message]
2011-02-28 10:12                   ` Michal Hocko
