From: Mark_H_Johnson@Raytheon.com
To: linux-mm@kvack.org
Subject: Query on memory management
Date: Thu, 6 Apr 2000 09:16:47 -0500
Message-ID: <OF65849FAF.07536636-ON862568B9.004B90AB@hso.link.com>

We are looking to port some real-time applications from a larger Unix
system to a cluster of PCs running Linux. I've read through the kernel
code a few times, on both 2.2.10 and 2.2.14, but I find it hard to
understand without some more information.

Some of the memory-related capabilities we need include:
 - memory locking [on the order of 60-80% of real memory] on the target
system (a sketch of what I mean follows this list).
 - building and testing large subsets on a development machine in non
real time, perhaps running at "single cycle" or 1/10th real-time
performance. Swapping is OK there if it means I don't have to wait until
3am to get time on a target system. I'm worried about the "out of
memory" problems that have been reported.
 - control of page fault handling; we currently emulate flight hardware
by trapping memory accesses to I/O devices. I see a similar issue with
paging large working sets of data from static files (e.g., for a visual
system), where smart use of page replacement algorithms can simplify
implementation of lookahead.
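
For concreteness, the kind of locking I mean is just the standard
mlockall() interface (a minimal sketch; the 60-80% figure is our
application's footprint, not anything the kernel needs to know about,
and the call needs the usual privilege):

#include <stdio.h>
#include <sys/mman.h>

/* Pin the whole process image into RAM: everything mapped now
 * (MCL_CURRENT) and everything mapped from here on (MCL_FUTURE). */
int lock_all_memory(void)
{
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");
        return -1;
    }
    return 0;
}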

Questions -
(1) What hard limits are there on how much memory can be mlock'd? I see
checks [in mm/mlock.c] related to num_physpages/2, but can't tell
whether that is a system-wide limit or a per-process limit.
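
One way I could test this empirically (a userspace sketch only) is to
grow an anonymous mapping in fixed steps, mlock() each step, and note
where the calls start failing; running several copies at once would show
whether the ceiling is per process or system wide:

#include <stdio.h>
#include <sys/mman.h>

#define STEP (16UL * 1024 * 1024)   /* grow in 16 MB increments */

int main(void)
{
    unsigned long locked = 0;

    for (;;) {
        void *p = mmap(0, STEP, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED || mlock(p, STEP) != 0)
            break;
        locked += STEP;
    }
    printf("locked %lu MB before failure\n", locked >> 20);
    return 0;
}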

(2) I've seen traffic related to "out of memory" problems. How close are
we to a permanent solution, and do you need suggestions? For example, I
can't seem to find any per-process limit on working set or virtual size
(that could mean either the number of physical pages or the number of
virtual pages a process can use). If such a limit were implemented, some
of the problems you have seen with rogue processes could be prevented.
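
To illustrate the kind of per-process cap I mean (a sketch only; I'm
modelling it on the existing setrlimit() interface, and whether anything
like this is enforced on every allocation path is exactly what I can't
find):

#include <stdio.h>
#include <sys/resource.h>

/* Cap the process's data segment so a runaway allocation fails with
 * ENOMEM instead of dragging the whole machine down.  An equivalent
 * cap on resident (physical) pages is what I'd like for working sets. */
int cap_process_memory(unsigned long bytes)
{
    struct rlimit rl;

    rl.rlim_cur = bytes;   /* soft limit */
    rl.rlim_max = bytes;   /* hard limit */
    if (setrlimit(RLIMIT_DATA, &rl) != 0) {
        perror("setrlimit");
        return -1;
    }
    return 0;
}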

(3) Re: out of memory. I also saw that code in 2.2.14
[arch/i386/mm/fault.c] prevents the init task (pid == 1) from being
killed. Why can't that treatment be applied to all tasks: let kswapd (or
something else) keep moving pages to the swap file (or to memory-mapped
files), and kill a task if and only if the backing store on disk is
exhausted?

(4) Is there a "hook" for user-defined page replacement or page fault
handling? I could not find one.

(5) If the answer to (4) is no, could I write a loadable module that
intercepts the page fault trap and then checks a flag in the task
structure to decide whether to call the default page fault handler or to
handle the fault myself?
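
For reference, the kind of trapping I mean can also be done entirely in
user space: write-protect the emulated I/O region with mprotect() and
service accesses from a SIGSEGV handler (a bare-bones sketch below; a
real emulator would decode the faulting access and talk to the device
model instead of just opening the page). A kernel-level hook would
mainly save the signal delivery overhead on every trapped access:

#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static char *io_region;     /* emulated device "registers" */
static long fault_count;

/* Count the fault and open the page so the faulting access can retry. */
static void on_fault(int sig)
{
    fault_count++;
    mprotect(io_region, getpagesize(), PROT_READ | PROT_WRITE);
}

int main(void)
{
    size_t len = getpagesize();

    io_region = mmap(0, len, PROT_NONE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    signal(SIGSEGV, on_fault);

    io_region[0] = 0x5a;    /* faults once; the handler opens the page */
    printf("faults taken: %ld\n", fault_count);
    return 0;
}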

Any feedback on these questions is appreciated. Thanks.
  --Mark H Johnson
  <mailto:Mark_H_Johnson@raytheon.com>


