From: "Stephen C. Tweedie" <sct@redhat.com>
To: "S. Parker" <linux@sparker.net>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] VM system in 2.4.16 doesn't try hard enough for user memory...
Date: Thu, 6 Dec 2001 12:48:30 +0000
Message-ID: <20011206124830.C2029@redhat.com>
In-Reply-To: <4.2.2.20011205174951.00ab0e20@slither>; from linux@sparker.net on Wed, Dec 05, 2001 at 05:54:44PM -0800

Hi,

On Wed, Dec 05, 2001 at 05:54:44PM -0800, S. Parker wrote:
 
> Attached below is "memstride.c", a simple program that exercises a
> process which wishes to grow to the largest amount of VM the system
> can provide, and scribble in all of it.  Actually, scribble in it
> all several times.
> 
> Under at least 2.4.14 -> 2.4.16, the VM system *always* over-commits to
> memstride, even on an otherwise idle system, and ends up killing it.
> This is wrong.  It should be possible for memstride to be told when
> it has overstepped the system's total VM resources, by having
> sbrk() return -1 (out of memory).

Yes, over-commit protection is far from perfect.  However, it's a
difficult problem to get right.
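
For readers without the attachment, the pattern memstride relies on
is presumably along these lines (an illustrative sketch, not the
actual memstride.c; the step size and pass count are invented):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define STEP (1UL << 20)	/* grow the heap 1MB at a time; invented */

int main(void)
{
	char *start = sbrk(0);
	size_t total = 0;
	int pass;

	/* Grow until sbrk() refuses; this clean failure is what
	 * Parker argues the kernel should guarantee. */
	while (sbrk(STEP) != (void *)-1)
		total += STEP;
	printf("grew to %lu bytes, sbrk errno %d\n",
	       (unsigned long)total, errno);

	/* "Scribble in it all", several times. */
	for (pass = 0; pass < 4; pass++)
		memset(start, pass, total);
	return 0;
}

On that account, the final sbrk() should fail cleanly with ENOMEM
rather than the scribbling passes triggering the OOM killer.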

> Also attached is my proposed fix for this problem.  It has the following
> changes:
> 
> 1.  Do a better job estimating how much VM is available
>          vm_enough_memory() was changed to take the sum of all free RAM
>          and all free swap, subtract up to 1/8th of physical RAM (but not
>          more than 16MB) as a reserve for system buffers to prevent deadlock,
>          and compare this to the request.  If the VM request is <= the
>          available free space, then we're set.

That's still just a guesstimate: do you have any hard data to back
up the magic numbers here?
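
For concreteness, the described heuristic amounts to something like
the following sketch (reconstructed from the description above, not
taken from the patch; the 2.4 helpers nr_free_pages(),
nr_swap_pages, num_physpages and PAGE_SIZE are assumed):

int vm_enough_memory(long pages)
{
	unsigned long free, reserve;

	/* Sum of all free RAM and all free swap, in pages. */
	free = nr_free_pages() + nr_swap_pages;

	/* Hold back 1/8 of physical RAM, capped at 16MB, as a
	 * reserve for system buffers to prevent deadlock. */
	reserve = num_physpages / 8;
	if (reserve > (16 << 20) / PAGE_SIZE)
		reserve = (16 << 20) / PAGE_SIZE;

	if (free <= reserve)
		return 0;
	return pages <= free - reserve;
}

Whether 1/8 and 16MB are the right numbers is exactly the open
question.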

> 2.  Be willing to sleep for memory chunks larger than 8 pages.
>          __alloc_pages had an uncommented piece of code that I could
>          see no reason to have.  It doesn't matter how big the piece of
>          memory is--if we're low, and it's a sleepable request, we should
>          sleep.  Now it does.  (Can anyone explain why this code was
>          added originally?)

That's a totally separate issue: *all* user VM allocations are
order-0 (single-page) requests, so this can't have any effect on VM
overcommit.
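
In sketch form (illustrative only, not the actual 2.4 fault path),
the anonymous-fault case asks the allocator for exactly one page at
a time:

static struct page *fault_in_anonymous_page(void)
{
	/* Every user-VM fault is an order-0 request: one page. */
	return alloc_page(GFP_HIGHUSER);
}

Higher-order requests come only from kernel-internal users, so
changing how __alloc_pages sleeps for them does not touch the
overcommit problem.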

Ultimately, your patch still doesn't protect against overcommit: if
you run two large applications which allocate memory lazily in
parallel, each will still be told at sbrk/mmap time that there is
enough VM left, and both will only discover at page fault time that
there isn't enough memory to go round.
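
A minimal user-space demonstration of that race (a hypothetical
sketch; the chunk size and target are invented and assume a box with
roughly 2.5GB of RAM plus swap):

#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

#define CHUNK  (64UL << 20)	/* grow in 64MB steps; invented */
#define TARGET (1536UL << 20)	/* ~60% of assumed RAM+swap; invented */

static void hog(void)
{
	char *start = sbrk(0);
	size_t total = 0;

	while (total < TARGET) {
		if (sbrk(CHUNK) == (void *)-1)
			_exit(1);	/* the clean failure Parker wants */
		total += CHUNK;
	}
	/* Only here are the pages actually touched; under lazy
	 * allocation this is where the shortfall surfaces, long
	 * after sbrk() said yes to both processes. */
	memset(start, 1, total);
	_exit(0);
}

int main(void)
{
	if (fork() == 0)
		hog();
	if (fork() == 0)
		hog();
	wait(NULL);
	wait(NULL);
	return 0;
}

Both children get through their sbrk() loops, but the writes cannot
both be honoured, and there is no error return left to deliver.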

Cheers,
 Stephen
