From: "Benjamin C.R. LaHaise" <blah@kvack.org>
To: Rik van Riel <H.H.vanRiel@fys.ruu.nl>
Cc: Finn Arne Gangstad <finnag@guardian.no>,
Linus Torvalds <torvalds@transmeta.com>,
"Stephen C. Tweedie" <sct@dcs.ed.ac.uk>,
linux-mm <linux-mm@kvack.org>
Subject: Re: 2.1.90 dies with many procs procs, partial fix
Date: Mon, 23 Mar 1998 16:20:47 -0500 (EST)
Message-ID: <Pine.LNX.3.95.980323155005.17867E-100000@as200.spellcast.com>
In-Reply-To: <Pine.LNX.3.91.980323203732.771G-100000@mirkwood.dummy.home>
On Mon, 23 Mar 1998, Rik van Riel wrote:
...
> Hmm, this is evidence that I was right when I said
> that the free_memory_available() system combined
> with our current allocation scheme gives trouble.
> Linus, what fix do you propose?
> (I don't really feel like coding a fix that will
> be rejected :-)

That sounds about right, but it isn't fixing the underlying problem -
which, as Linus has pointed out, we can't avoid any more. Here's a
suggestion that might help: change get_free_pages so that it does not
break up larger-order memory blocks for non-atomic allocations when doing
so would leave too few blocks of the upper order remaining. GFP_ATOMIC is
a nice hint that the memory allocated will be freed soon. If an atomic
allocation does require breaking up a huge chunk, what happens? Do the
resulting blocks get consumed by lower-priority, yet smaller, allocations
before the large atomically allocated portion is released?
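To make the suggestion concrete, here is a userspace sketch of such a split policy (all names - pick_split_order, free_blocks, MIN_RESERVED - are invented for illustration; this is not the real mm code):

```c
#include <assert.h>

/* Hypothetical sketch: refuse to split a higher-order block for a
 * non-atomic request when the split would leave too few blocks of
 * that order in reserve. */

#define MAX_ORDER    5
#define MIN_RESERVED 2   /* keep at least this many blocks per order */

static int free_blocks[MAX_ORDER + 1];  /* free-block count per order */

/* Returns the order to take a block from, or -1: caller must wait. */
static int pick_split_order(int order, int atomic)
{
    for (int o = order; o <= MAX_ORDER; o++) {
        if (free_blocks[o] == 0)
            continue;
        if (o == order)          /* exact fit: no split needed */
            return o;
        /* Atomic allocations (freed soon) may always split; others
         * must leave MIN_RESERVED blocks at the higher order. */
        if (atomic || free_blocks[o] > MIN_RESERVED)
            return o;
    }
    return -1;
}
```

With a policy like this, a burst of non-atomic single-page requests can no longer eat the last few large blocks that an atomic allocation might need.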

An approach that should help is to use a [large] fixed-size block for a
fixed purpose, a la slab: use 256KB or so blocks which, once allocated,
only go to similar uses (even a breakdown according to order would be a
big help). If we can also keep all user pages together, then later on
we'll be able to reap large chunks from user memory whenever a device
driver starts up and needs a chunk of memory.
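A toy illustration of the fixed-purpose region idea (again, every name here - region_for, use_class, the region table - is invented; the point is only that pages of different lifetimes never mix inside one region):

```c
#include <assert.h>

/* Hypothetical sketch: carve memory into fixed 256KB regions, each
 * dedicated to one class of use once first claimed. */

enum use_class { UNUSED, USER_PAGE, KERNEL_BUF };

#define REGION_SIZE (256 * 1024)
#define PAGE_SIZE   4096
#define NR_REGIONS  16

static enum use_class region_class[NR_REGIONS];
static int region_free_pages[NR_REGIONS];

/* Find a region already serving 'cls', else claim an unused one. */
static int region_for(enum use_class cls)
{
    for (int i = 0; i < NR_REGIONS; i++)
        if (region_class[i] == cls && region_free_pages[i] > 0)
            return i;
    for (int i = 0; i < NR_REGIONS; i++)
        if (region_class[i] == UNUSED) {
            region_class[i] = cls;
            region_free_pages[i] = REGION_SIZE / PAGE_SIZE;
            return i;
        }
    return -1;  /* would have to reclaim a whole user region */
}
```

Because user pages cluster in their own regions, freeing one whole region yields a physically contiguous 256KB chunk for a driver, instead of pages scattered across the map.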

Currently, my experimental page-queue stuff is moving towards this end of
things for user/page cache pages. Since these allocations are always for
page-sized objects, there's no need to fiddle with bitmaps, coalescing and
such under normal circumstances in get_free_page (the swap daemon takes
care of running the balancing act). User allocations will simply be: try
to remove a page from the reaped queue, otherwise try to refill the reaped
queue. If that fails and we're not pushing memory allocation limits, do a
normal get_free_page. Otherwise sleep until some swapout has completed.
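The four-step allocation order above can be sketched as follows (a userspace simulation with invented names - alloc_user_page, reaped_queue, near_limit - not the experimental patch itself):

```c
#include <assert.h>

static int reaped_queue;   /* pages sitting on the reaped queue */
static int buddy_free;     /* pages the normal allocator could give */
static int near_limit;     /* set when memory allocation is tight */

static int refill_reaped_queue(void)  /* placeholder refill attempt */
{
    return 0;                         /* assume nothing reapable now */
}

/* Returns 1 if a page was obtained, 0 if the caller must sleep
 * until a swapout completes. */
static int alloc_user_page(void)
{
    if (reaped_queue > 0) {           /* 1: pop from the reaped queue */
        reaped_queue--;
        return 1;
    }
    if (refill_reaped_queue() && reaped_queue > 0) {  /* 2: refill */
        reaped_queue--;
        return 1;
    }
    if (!near_limit && buddy_free > 0) {  /* 3: normal get_free_page */
        buddy_free--;
        return 1;
    }
    return 0;                         /* 4: sleep for swapout */
}
```

Note that steps 1 and 2 never touch the buddy bitmaps at all; only the fallback path does, which is the point of keeping page-sized user allocations on their own queue.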
-ben
Thread overview: 4+ messages
[not found] <Pine.LNX.3.95.980322022425.5774A-100000@lucifer.guardian.no>
1998-03-23 19:39 ` Rik van Riel
1998-03-23 21:20 ` Benjamin C.R. LaHaise [this message]
1998-03-23 23:11 ` Linus Torvalds
1998-03-24 9:48 ` Rik van Riel