From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: Michal Hocko <mhocko@suse.cz>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH] page_cgroup: Reduce allocation overhead for page_cgroup array for CONFIG_SPARSEMEM v2
Date: Fri, 25 Feb 2011 12:25:22 +0900
Message-ID: <20110225122522.8c4f1057.kamezawa.hiroyu@jp.fujitsu.com>
In-Reply-To: <20110224134045.GA22122@tiehlicka.suse.cz>
On Thu, 24 Feb 2011 14:40:45 +0100
Michal Hocko <mhocko@suse.cz> wrote:
> Here is the second version of the patch. I have used alloc_pages_exact
> instead of the complex double array approach.
>
> I still fall back to kmalloc/vmalloc because hotplug can happen quite
> some time after boot and we can end up not having enough contiguous
> pages at that time.
>
> I am also wondering whether it would make sense to introduce an
> alloc_pages_exact_node function which would allocate pages from the
> given node.
>
> Any thoughts?
The patch itself is fine, but please update the description.
I also have some comments below.
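About alloc_pages_exact_node(): it sounds reasonable to me. I imagine it
would just mirror alloc_pages_exact() with a nid argument, something like
the following (an untested sketch, the name and details are only my guess):

/*
 * Hypothetical node-aware variant of alloc_pages_exact() (untested
 * sketch).  Allocate enough pages on @nid to hold @size bytes and
 * give the unused tail pages back to the buddy allocator, the same
 * way alloc_pages_exact() does.
 */
void *alloc_pages_exact_node(int nid, size_t size, gfp_t gfp_mask)
{
	unsigned int order = get_order(size);
	struct page *p = alloc_pages_node(nid, gfp_mask, order);
	unsigned long addr, alloc_end, used;

	if (!p)
		return NULL;

	addr = (unsigned long)page_address(p);
	alloc_end = addr + (PAGE_SIZE << order);
	used = addr + PAGE_ALIGN(size);

	/* split the high-order page so the unused tail can be freed */
	split_page(p, order);
	while (used < alloc_end) {
		free_page(used);
		used += PAGE_SIZE;
	}
	return (void *)addr;
}

If the tail trimming can be shared with alloc_pages_exact() via a common
helper, even better. But that can be a separate patch.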
> ---
> From e8909bbd1d759de274a6ed7812530e576ad8bc44 Mon Sep 17 00:00:00 2001
> From: Michal Hocko <mhocko@suse.cz>
> Date: Thu, 24 Feb 2011 11:25:44 +0100
> Subject: [PATCH] page_cgroup: Reduce allocation overhead for page_cgroup array for CONFIG_SPARSEMEM
>
> Currently we are allocating a single page_cgroup array per memory
> section (stored in mem_section->page_cgroup) when CONFIG_SPARSEMEM is
> selected.
> This is a correct but memory-inefficient solution because the allocated
> memory (unless we fall back to vmalloc) is not kmalloc friendly:
> - 32b: 16384 entries, 20B per entry, need 327680B, so the 524288B
>   slab cache is used
> - 32b with PAE: 131072 entries need 2621440B, so the 4194304B cache
>   is used
> - 64b: 32768 entries, 40B per entry, need 1310720B, so the 2097152B
>   cache is used
>
> That means ~37% wasted space per memory section, and it adds up over
> the whole memory. On an x86_64 machine it is something like 6MB per
> 1GB of RAM.
>
> We can reduce the internal fragmentation either by implementing a two
> dimensional array and allocating kmalloc-aligned sizes for each entry
> (as suggested in https://lkml.org/lkml/2011/2/23/232), or we can get
> rid of kmalloc altogether and allocate directly from the buddy
> allocator (using alloc_pages_exact) as suggested by Dave Hansen.
>
> The latter solution is much simpler and its internal fragmentation is
> comparable (~1 page per section).
>
> We still need a fallback to kmalloc/vmalloc because we have no
> guarantee that we will get contiguous memory of that size (order-10)
> later, on hotplug events.
>
> Signed-off-by: Michal Hocko <mhocko@suse.cz>
> ---
> mm/page_cgroup.c | 62 ++++++++++++++++++++++++++++++++++--------------------
> 1 files changed, 39 insertions(+), 23 deletions(-)
>
> diff --git a/mm/page_cgroup.c b/mm/page_cgroup.c
> index 5bffada..eaae7de 100644
> --- a/mm/page_cgroup.c
> +++ b/mm/page_cgroup.c
> @@ -105,7 +105,41 @@ struct page_cgroup *lookup_page_cgroup(struct page *page)
> return section->page_cgroup + pfn;
> }
>
> -/* __alloc_bootmem...() is protected by !slab_available() */
> +static void *__init_refok alloc_mcg_table(size_t size, int nid)
> +{
> + void *addr = NULL;
> + if ((addr = alloc_pages_exact(size, GFP_KERNEL | __GFP_NOWARN)))
> + return addr;
> +
> + if (node_state(nid, N_HIGH_MEMORY)) {
> + addr = kmalloc_node(size, GFP_KERNEL | __GFP_NOWARN, nid);
> + if (!addr)
> + addr = vmalloc_node(size, nid);
> + } else {
> + addr = kmalloc(size, GFP_KERNEL | __GFP_NOWARN);
> + if (!addr)
> + addr = vmalloc(size);
> + }
> +
> + return addr;
> +}
In what case do we need to call kmalloc_node() even when
alloc_pages_exact() fails? vmalloc() may need to be called when the
chunk is larger than MAX_ORDER pages or when memory is fragmented, but
kmalloc() allocates from the same buddy pages and should fail for the
same reasons.
And I don't like the function name alloc_mcg_table(), because this
allocates the page_cgroup array.
How about simply alloc_page_cgroup()?
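Something like this, maybe (an untested sketch, just to illustrate both
comments above, i.e. dropping the kmalloc fallback and renaming):

static void *__init_refok alloc_page_cgroup(size_t size, int nid)
{
	void *addr;

	addr = alloc_pages_exact(size, GFP_KERNEL | __GFP_NOWARN);
	if (addr)
		return addr;

	/*
	 * alloc_pages_exact() fails when the chunk is larger than
	 * MAX_ORDER pages or when memory is fragmented; kmalloc()
	 * would fail for the same reasons, so go straight to vmalloc.
	 */
	if (node_state(nid, N_HIGH_MEMORY))
		return vmalloc_node(size, nid);

	return vmalloc(size);
}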
Thanks,
-Kame