From: Kalesh Singh <kaleshsingh@google.com>
To: David Hildenbrand <david@redhat.com>
Cc: akpm@linux-foundation.org, minchan@kernel.org,
	lorenzo.stoakes@oracle.com, kernel-team@android.com,
	android-mm@google.com, "Liam R. Howlett" <Liam.Howlett@oracle.com>,
	Vlastimil Babka <vbabka@suse.cz>, Mike Rapoport <rppt@kernel.org>,
	Suren Baghdasaryan <surenb@google.com>, Michal Hocko <mhocko@suse.com>,
	Jann Horn <jannh@google.com>, Pedro Falcato <pfalcato@suse.de>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: centralize and fix max map count limit checking
Date: Thu, 4 Sep 2025 09:24:43 -0700
Message-ID: <CAC_TJvc-V9=59p0HG8tmUqQHmp5Sn3E8BV3gHsPj+RHCANd0HA@mail.gmail.com>
In-Reply-To: <3997965c-4cc6-405c-b6fe-a70b4e75afa6@redhat.com>

On Thu, Sep 4, 2025 at 3:14 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 04.09.25 01:24, Kalesh Singh wrote:
> > The check against the max map count (sysctl_max_map_count) was
> > open-coded in several places. This led to inconsistent enforcement
> > and subtle bugs where the limit could be exceeded.
> >
> > For example, some paths would check map_count > sysctl_max_map_count
> > before allocating a new VMA and incrementing the count, allowing the
> > process to reach sysctl_max_map_count + 1:
> >
> >      int do_brk_flags(...)
> >      {
> >          if (mm->map_count > sysctl_max_map_count)
> >              return -ENOMEM;
> >
> >          /* We can get here with mm->map_count == sysctl_max_map_count */
> >
> >          vma = vm_area_alloc(mm);
> >          ...
> >          mm->map_count++;  /* We've now exceeded the threshold. */
> >      }
> >
> > To fix this and unify the logic, introduce a new function,
> > exceeds_max_map_count(), to consolidate the check. All open-coded
> > checks are replaced with calls to this new function, ensuring the
> > limit is applied uniformly and correctly.
> >
> > To improve encapsulation, sysctl_max_map_count is now static to
> > mm/mmap.c. The new helper also adds a rate-limited warning to make
> > debugging applications that exhaust their VMA limit easier.
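> >
> > A minimal sketch of what the helper could look like (illustrative:
> > the return type, the exact warning text, and its placement in
> > mm/mmap.c are assumptions, not the literal patch code):
> >
> >     static int sysctl_max_map_count __read_mostly = DEFAULT_MAX_MAP_COUNT;
> >
> >     bool exceeds_max_map_count(struct mm_struct *mm, int new_vmas)
> >     {
> >         /* Allow the operation only if the worst case still fits. */
> >         if (likely(mm->map_count + new_vmas <= sysctl_max_map_count))
> >             return false;
> >
> >         /* Rate-limited so a runaway process cannot flood the log. */
> >         pr_warn_ratelimited("%s (%d): hit the max map count limit\n",
> >                             current->comm, current->pid);
> >         return true;
> >     }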
> >
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: Minchan Kim <minchan@kernel.org>
> > Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> > Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
> > ---
> >   include/linux/mm.h | 11 ++++++++++-
> >   mm/mmap.c          | 15 ++++++++++++++-
> >   mm/mremap.c        |  7 ++++---
> >   mm/nommu.c         |  2 +-
> >   mm/util.c          |  1 -
> >   mm/vma.c           |  6 +++---
> >   6 files changed, 32 insertions(+), 10 deletions(-)
> >
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index 1ae97a0b8ec7..d4e64e6a9814 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -192,7 +192,16 @@ static inline void __mm_zero_struct_page(struct page *page)
> >   #define MAPCOUNT_ELF_CORE_MARGIN    (5)
> >   #define DEFAULT_MAX_MAP_COUNT       (USHRT_MAX - MAPCOUNT_ELF_CORE_MARGIN)
> >
> > -extern int sysctl_max_map_count;
> > +/**
> > + * exceeds_max_map_count - check if a VMA operation would exceed max_map_count
> > + * @mm: The memory descriptor for the process.
> > + * @new_vmas: The number of new VMAs the operation will create.
>
> It's not always a "will", right? At least I remember that this was the
> worst-case scenario in some cases ("may split").
>
> "The number of new VMAs the operation may create in the worst case."
>

Hi David,

You are correct. Cases like mremap account for the worst case (a 3-way
split at both the source and destination). I'll update the description.
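
For example (an illustrative sketch, not the literal patch code), the
mremap move path can budget for its worst case up front:

    /*
     * A move can 3-way split the VMA at the source and again at the
     * destination; each split adds up to two VMAs. The count of 4
     * here is illustrative of that worst case.
     */
    if (exceeds_max_map_count(mm, 4))
        return -ENOMEM;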

Thanks,
Kalesh
>
> --
> Cheers
>
> David / dhildenb
>


Thread overview: 23+ messages
2025-09-03 23:24 Kalesh Singh
2025-09-03 23:46 ` Pedro Falcato
2025-09-04  3:01   ` Kalesh Singh
2025-09-04 15:24     ` Pedro Falcato
2025-09-04 16:32       ` Kalesh Singh
2025-09-05 19:43   ` Minchan Kim
2025-09-07  4:24     ` Kalesh Singh
2025-09-04  7:29 ` Mike Rapoport
2025-09-04 16:20   ` Kalesh Singh
2025-09-04 10:14 ` David Hildenbrand
2025-09-04 16:24   ` Kalesh Singh [this message]
2025-09-04 16:02 ` Lorenzo Stoakes
2025-09-04 16:34   ` Kalesh Singh
2025-09-04 17:22   ` Liam R. Howlett
2025-09-04 17:33     ` Lorenzo Stoakes
2025-09-04 17:41       ` David Hildenbrand
2025-09-04 17:51         ` Kalesh Singh
2025-09-04 18:49           ` Liam R. Howlett
2025-09-04 19:02             ` David Hildenbrand
2025-09-04 19:11               ` Liam R. Howlett
2025-09-05  7:40                 ` Mike Rapoport
2025-09-04 17:43       ` Kalesh Singh
2025-09-04 18:41         ` Liam R. Howlett
