From: Pasha Tatashin <pasha.tatashin@soleen.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	mike.kravetz@oracle.com,  mhocko@suse.com, muchun.song@linux.dev,
	rientjes@google.com,  souravpanda@google.com
Subject: Re: [PATCH v2] mm: hugetlb_vmemmap: provide stronger vmemmap allocation guarantees
Date: Thu, 13 Apr 2023 10:59:29 -0400	[thread overview]
Message-ID: <CA+CK2bBGNTxvWi=99Z0DLOmNd=JWG3E-mOnx1MqxoziGdTEqYg@mail.gmail.com> (raw)
In-Reply-To: <20230412131302.cf42a7f4b710db8c18b7b676@linux-foundation.org>

On Wed, Apr 12, 2023 at 4:13 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> Lots of questions (ie, missing information!)
>
> On Wed, 12 Apr 2023 19:59:39 +0000 Pasha Tatashin <pasha.tatashin@soleen.com> wrote:
>
> > HugeTLB pages have a struct page optimization where struct pages for
> > tail pages are freed. However, when HugeTLB pages are destroyed, the
> > memory for struct pages (vmemmap) needs to be allocated again.
> >
> > Currently, the __GFP_NORETRY flag is used to allocate the memory for
> > vmemmap, but given that this flag makes very little effort to actually
> > reclaim memory, returning huge pages back to the system can be a
> > problem.
>
> Are there any reports of this happening in the real world?
>
> > Let's
> > use __GFP_RETRY_MAYFAIL instead. This flag also performs graceful
> > reclaim without causing OOMs, but at least it may perform a few
> > retries, and will fail only when there is genuinely little unused
> > memory in the system.
>
> If so, does this change help?

It helps to avoid transient allocation failures. In general, it is not
a good idea to fail here, because we are in the middle of freeing
gigantic pages back to the system.
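
For reference, the change itself boils down to a one-flag swap at the
vmemmap remap call site. A rough sketch, paraphrased from the commit
message above rather than quoted verbatim from mm/hugetlb_vmemmap.c
(names approximate):

    /*
     * When a HugeTLB page is freed back to the buddy allocator, the
     * struct pages (vmemmap) discarded by the optimization have to be
     * re-allocated first.
     */
    ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end, vmemmap_reuse,
                              GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_THISNODE);
    /* previously: GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE */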

>
> If the allocation attempt fails, what are the consequences?

The gigantic page is not returned to the system. The user will have to
free up some memory and then retry freeing the gigantic pages back to
the system.
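
To make the consequence concrete: the free path already copes with this
failure by keeping the page as a HugeTLB page instead of dissolving it.
Roughly paraphrased from the hugetlb free path (function names
approximate, not a verbatim quote):

    if (hugetlb_vmemmap_restore(h, &folio->page)) {
            /*
             * Could not re-allocate the vmemmap: refuse to free the
             * huge page and put it back on the hugetlb free list
             * rather than returning it to the buddy allocator.
             */
            spin_lock_irq(&hugetlb_lock);
            add_hugetlb_folio(h, folio, true);
            spin_unlock_irq(&hugetlb_lock);
            return;
    }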

>
> What are the potential downsides to this change?  Why did we choose
> __GFP_NORETRY in the first place?
>
> What happens if we try harder (eg, GFP_KERNEL)?

Michal answered this question: it would not make much difference
because of __GFP_THISNODE.
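
For anyone reading along, here is the mask in question with the flag
semantics paraphrased from the include/linux/gfp.h documentation (this
annotation is mine, not part of the patch):

    gfp_t gfp = GFP_KERNEL
              | __GFP_RETRY_MAYFAIL  /* retry reclaim harder than NORETRY,
                                        may still fail, does not directly
                                        trigger the OOM killer */
              | __GFP_THISNODE;      /* must be satisfied from the requested
                                        node, no fallback to other nodes */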


Thread overview: 11+ messages
2023-04-12 19:59 Pasha Tatashin
2023-04-12 20:13 ` Andrew Morton
2023-04-12 20:18   ` Michal Hocko
2023-04-13 15:05     ` Pasha Tatashin
2023-04-13 15:25       ` Michal Hocko
2023-04-13 17:11         ` Pasha Tatashin
2023-04-13 18:12           ` Michal Hocko
2023-04-15  0:47             ` David Rientjes
2023-04-17  8:33               ` Michal Hocko
2023-04-17 18:51                 ` Mike Kravetz
2023-04-13 14:59   ` Pasha Tatashin [this message]
