linux-mm.kvack.org archive mirror
From: Daniel Colascione <dancol@google.com>
To: dave.hansen@intel.com
Cc: linux-mm@kvack.org, Tim Murray <timmurray@google.com>,
	Minchan Kim <minchan@kernel.org>
Subject: Re: Why do we let munmap fail?
Date: Mon, 21 May 2018 15:54:16 -0700	[thread overview]
Message-ID: <CAKOZuesoh7svdmdNY9md3N+vWGurigDLZ5_xDjwgU=uYdKkwqg@mail.gmail.com> (raw)
In-Reply-To: <20eeca79-0813-a921-8b86-4c2a0c98a1a1@intel.com>

On Mon, May 21, 2018 at 3:48 PM Dave Hansen <dave.hansen@intel.com> wrote:

> On 05/21/2018 03:35 PM, Daniel Colascione wrote:
> >> I know folks use memfd to figure out
> >> how much memory pressure we are under.  I guess that would trigger when
> >> you consume lots of memory with VMAs.
> >
> > I think you're thinking of the VM pressure level special files, not
> > memfd, which creates an anonymous tmpfs file.

> Yep, you're right.

> >> VMAs are probably the most similar to things like page tables that are
> >> kernel memory that can't be directly reclaimed, but do get freed at
> >> OOM-kill-time.  But, VMAs are a bit harder than page tables because
> >> freeing a page worth of VMAs does not necessarily free an entire page.
> >
> > I don't understand. We can reclaim memory used by VMAs by killing the
> > process or processes attached to the address space that owns those VMAs.
> > The OOM killer should Just Work. Why do we have to have some special
> > limit of VMA count?

> The OOM killer doesn't take the VMA count into consideration as far as I
> remember.  I can't think of any reason why not except for the internal
> fragmentation problem.

> The current VMA limit is ~12MB of VMAs per process, which is quite a
> bit.  I think it would be reasonable to start considering that in OOM
> decisions, although it's surely inconsequential except on very small
> systems.

> There are also certainly denial-of-service concerns if you allow
> arbitrary numbers of VMAs.  The rbtree, for instance, is O(log(n)), but
> I'd be willing to bet there are plenty of things that fall over if you
> let the ~65k limit get 10x or 100x larger.

Sure. I'm receptive to the idea of having *some* VMA limit. I just think
it's unacceptable to let deallocation routines fail.

What about the proposal at the end of my original message? If we account
for mapped address space by counting pages instead of counting VMAs, no
amount of VMA splitting can trip us over the threshold. We could just
impose a system-wide vsize limit in addition to RLIMIT_AS, with the
effective limit being the smaller of the two. (On further thought, we'd
probably want to leave the meaning of max_map_count unchanged and introduce
a new knob.)


Thread overview: 19+ messages
2018-05-21 22:07 Daniel Colascione
2018-05-21 22:12 ` Dave Hansen
2018-05-21 22:20   ` Daniel Colascione
2018-05-21 22:29     ` Dave Hansen
2018-05-21 22:35       ` Daniel Colascione
2018-05-21 22:48         ` Dave Hansen
2018-05-21 22:54           ` Daniel Colascione [this message]
2018-05-21 23:02             ` Dave Hansen
2018-05-21 23:16               ` Daniel Colascione
2018-05-21 23:32                 ` Dave Hansen
2018-05-22  0:00                   ` Daniel Colascione
2018-05-22  0:22                     ` Matthew Wilcox
2018-05-22  0:38                       ` Daniel Colascione
2018-05-22  1:19                         ` Theodore Y. Ts'o
2018-05-22  1:41                           ` Daniel Colascione
2018-05-22  2:09                             ` Daniel Colascione
2018-05-22  2:11                             ` Matthew Wilcox
2018-05-22  1:22                         ` Matthew Wilcox
2018-05-22  5:34                     ` Nicholas Piggin
