From: Eric Dumazet <dada1@cosmosbay.com>
To: David Miller <davem@davemloft.net>, akpm@linux-foundation.org
Cc: dhowells@redhat.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] MM : alloc_large_system_hash() can free some memory for non power-of-two bucketsize
Date: Sat, 19 May 2007 22:36:52 +0200 [thread overview]
Message-ID: <464F5FE4.2010607@cosmosbay.com> (raw)
In-Reply-To: <20070519.115442.30184476.davem@davemloft.net>
[-- Attachment #1: Type: text/plain, Size: 1257 bytes --]
David Miller wrote:
> From: Eric Dumazet <dada1@cosmosbay.com>
> Date: Sat, 19 May 2007 20:07:11 +0200
>
>> Maybe David has an idea how this can be done properly ?
>>
>> ref : http://marc.info/?l=linux-netdev&m=117706074825048&w=2
>
> You need to use __GFP_COMP or similar to make this splitting+freeing
> thing work.
>
> Otherwise the individual pages don't have page references, only
> the head page of the high-order page will.
>
Oh thanks David for the hint.
I added a split_page() call and it seems to work now.
[PATCH] MM : alloc_large_system_hash() can free some memory for non power-of-two bucketsize
alloc_large_system_hash() is called at boot time to allocate space for several
large hash tables.
Recently, the TCP hash table was changed and its bucketsize is no longer a
power-of-two.
On most setups, alloc_large_system_hash() allocates one big high-order page
(order > 0) with __get_free_pages(GFP_ATOMIC, order). This single high-order
allocation has a power-of-two size, bigger than the needed size.
We can free all pages that won't be used by the hash table.
On a 1GB i386 machine, this patch saves 128 KB of LOWMEM memory.
TCP established hash table entries: 32768 (order: 6, 393216 bytes)
Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
[-- Attachment #2: alloc_large.patch --]
[-- Type: text/plain, Size: 823 bytes --]
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ae96dd8..7c219eb 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3350,6 +3350,21 @@ void *__init alloc_large_system_hash(const char *tablename,
for (order = 0; ((1UL << order) << PAGE_SHIFT) < size; order++)
;
table = (void*) __get_free_pages(GFP_ATOMIC, order);
+ /*
+ * If bucketsize is not a power-of-two, we may free
+ * some pages at the end of hash table.
+ */
+ if (table) {
+ unsigned long alloc_end = (unsigned long)table +
+ (PAGE_SIZE << order);
+ unsigned long used = (unsigned long)table +
+ PAGE_ALIGN(size);
+ split_page(virt_to_page(table), order);
+ while (used < alloc_end) {
+ free_page(used);
+ used += PAGE_SIZE;
+ }
+ }
}
} while (!table && size > PAGE_SIZE && --log2qty);
Thread overview: 9+ messages
2007-05-18 9:54 Eric Dumazet
2007-05-18 18:21 ` Christoph Lameter
2007-05-19 8:37 ` Andrew Morton
2007-05-19 18:07 ` Eric Dumazet
2007-05-19 18:54 ` David Miller, Eric Dumazet
2007-05-19 20:36 ` Eric Dumazet [this message]
2007-05-19 18:21 ` William Lee Irwin III
2007-05-19 18:41 ` Eric Dumazet
2007-05-21 8:11 ` William Lee Irwin III