From: "Martin J. Bligh" <Martin.Bligh@us.ibm.com>
To: Andrew Morton <akpm@zip.com.au>,
William Lee Irwin III <wli@holomorphy.com>
Cc: Linus Torvalds <torvalds@transmeta.com>,
Rik van Riel <riel@conectiva.com.br>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
Ed Tomlinson <tomlins@cam.org>
Subject: Re: [PATCH][1/2] return values shrink_dcache_memory etc
Date: Sun, 21 Jul 2002 23:06:58 -0700
Message-ID: <2725228.1027292816@[10.10.2.3]>
In-Reply-To: <3D3B9A6F.12B096E1@zip.com.au>
>> > If we can get something in place which works acceptably on Martin
>> > Bligh's machines, and we can see that the gains of rmap (whatever
>> > they are ;)) are worth the as-yet uncoded pains then let's move on.
>> > But until then, adding new stuff to the VM just makes a `patch -R'
>> > harder to do.
>>
>> I have the same kinds of machines and have already been testing with
>> precisely the many tasks workloads he's concerned about for the sake of
>> correctness, and efficiency is also a concern here. highpte_chain is
>> already so high up on my priority queue that all other work is halted.
>
> OK. But we're adding non-trivial amounts of new code simply
> to get the reverse mapping working as robustly as the virtual
> scan. And we'll always have rmap's additional storage requirements.
>
> At some point we need to make a decision as to whether it's all
> worth it. Right now we do not even have the information on the
> pluses side to do this. That's worrisome.

These large NUMA machines should actually be rmap's day in the sun.
Per-node kswapd, being able to relieve memory pressure on one node
easily (without cross-node bouncing), breaking the LRU list up into
smaller per-node chunks, etc. Those fix some of the biggest problems
we have right now, problems that are hard to solve in other ways.
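
To make that concrete, here's a rough sketch of the shape I mean. The
names (node_lru, kswapd_node, try_to_free_node_pages) are made up for
illustration, not taken from any actual tree; the point is just that
each node carries its own LRU lists and its own kswapd, so reclaim on
a loaded node never has to touch the others:

#include <linux/list.h>
#include <linux/spinlock.h>

/*
 * Sketch only -- the identifiers here are illustrative, not real code.
 * Each node keeps its own LRU lists and its own kswapd thread, so
 * pressure on one node can be relieved without scanning or bouncing
 * pages that belong to the other nodes.
 */
struct node_lru {
	spinlock_t		lru_lock;	/* protects both lists */
	struct list_head	active_list;	/* recently used pages */
	struct list_head	inactive_list;	/* reclaim candidates  */
	unsigned long		nr_active;
	unsigned long		nr_inactive;
};

/* hypothetical helper: scan this node's inactive list and free pages */
static void try_to_free_node_pages(struct node_lru *lru);

/* one of these threads per node, each touching only its own lists */
static int kswapd_node(void *p)
{
	struct node_lru *lru = p;

	for (;;) {
		/* sleep until this node runs short of free pages, then ... */
		try_to_free_node_pages(lru);
	}
	return 0;
}

None of that structure is rmap-specific, but rmap is what makes the
per-node scan cheap: the scanner can unmap a page it finds on its
local list directly, instead of walking every mm in the system to
hunt down the ptes.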

The large rmap overheads we still have to kill seem to me to be the
memory usage and the fork overhead. There's also a certain amount of
overhead in managing any additional data structure, of course. I
think we know how to kill most of it. I don't think adding
highpte_chain is the correct thing to do ... that seems like adding
insult to injury. I'd rather see us drive a silver stake through the
heart of the problem and kill it properly ...
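
For anyone not following the detail, both costs come from the reverse
mapping chains themselves. Very roughly -- this is a simplified sketch
of the general scheme, with names borrowed loosely from the rmap
patches, not the actual code; the real allocation and locking are more
involved -- it looks like:

/*
 * Simplified sketch, not the real rmap code.  Every pte that maps a
 * page needs a chain node hanging off struct page -- that's the
 * memory cost -- and fork() has to allocate and link one for every
 * pte it copies -- that's the fork cost.
 */
struct pte_chain {
	struct pte_chain *next;		/* next pte mapping this page */
	pte_t *ptep;			/* one pte that maps the page */
};

/* struct page grows a pointer to the head of that chain */
struct page {
	/* ... the usual fields ... */
	struct pte_chain *pte_chain;
};

/* something like this runs for every pte set up, fork() included */
void page_add_rmap(struct page *page, pte_t *ptep)
{
	struct pte_chain *pc = pte_chain_alloc();	/* the per-pte cost */

	pc->ptep = ptep;
	pc->next = page->pte_chain;
	page->pte_chain = pc;
}

That per-pte allocation is the thing I'd like to see attacked
directly, rather than shuffled around.
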
M.

Thread overview: 14+ messages
2002-07-20 19:40 Rik van Riel
2002-07-20 20:11 ` Linus Torvalds
2002-07-20 20:41 ` Rik van Riel
2002-07-20 20:53 ` Linus Torvalds
2002-07-20 21:42 ` Rik van Riel
2002-07-22 5:04 ` Andrew Morton
2002-07-22 5:16 ` William Lee Irwin III
2002-07-22 5:38 ` Andrew Morton
2002-07-22 6:06 ` Martin J. Bligh [this message]
2002-07-22 6:46 ` Andrew Morton
2002-07-22 7:20 ` Martin J. Bligh
2002-07-22 14:00 ` Rik van Riel
2002-07-22 13:34 ` Rik van Riel
2002-07-22 13:44 ` Rik van Riel