From: Oscar Salvador <osalvador@suse.de>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Vlastimil Babka <vbabka@suse.cz>, Michal Hocko <mhocko@suse.com>,
Oscar Salvador <OSalvador@suse.com>,
Yuanxi Liu <y.liu@naruida.com>,
David Hildenbrand <david@redhat.com>,
Matthew Wilcox <willy@infradead.org>,
Linux-MM <linux-mm@kvack.org>,
LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] mm: page_alloc: Skip regions with hugetlbfs pages when allocating 1G pages
Date: Sun, 16 Apr 2023 17:31:34 +0200
Message-ID: <253c6362834bf3c9deea94cfc7741920@suse.de>
In-Reply-To: <20230414141429.pwgieuwluxwez3rj@techsingularity.net>
On 2023-04-14 16:14, Mel Gorman wrote:
> A bug was reported by Yuanxi Liu where allocating 1G pages at runtime
> takes an excessive amount of time for large amounts of memory. Further
> testing showed that the cost of allocating huge pages is linear: if
> allocating 1G pages in batches of 10, the time to allocate nr_hugepages
> from 10->20->30->etc increases linearly even though only 10 pages are
> allocated at each step. Profiles indicated that much of the time is
> spent checking the validity of PFNs within already existing huge pages
> and then attempting a migration that fails after isolating the range,
> draining pages and a whole lot of other useless work.
>
> Commit eb14d4eefdc4 ("mm,page_alloc: drop unnecessary checks from
> pfn_range_valid_contig") removed two checks, one of which ignored huge
> pages for contiguous allocations on the grounds that huge pages can
> sometimes migrate. While there may be value in migrating a 2M page to
> satisfy a 1G allocation, it's potentially expensive if the 1G
> allocation fails, and it's pointless to try moving a 1G page for a new
> 1G allocation or to scan the tail pages for valid PFNs.
>
> Reintroduce the PageHuge check and assume any contiguous region with
> hugetlbfs pages is unsuitable for a new 1G allocation.
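
(For readers following along: the check lands in the PFN scan of
pfn_range_valid_contig() in mm/page_alloc.c. A minimal sketch of the
idea, not the literal patch hunk; the surrounding checks and the use of
pfn_to_online_page() are reconstructed for illustration, only the
PageHuge() test is the change under discussion:)

/*
 * Sketch of pfn_range_valid_contig() with the PageHuge() check
 * reintroduced; surrounding checks are illustrative, not a verbatim
 * copy of mm/page_alloc.c.
 */
static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
				   unsigned long nr_pages)
{
	unsigned long i, end_pfn = start_pfn + nr_pages;
	struct page *page;

	for (i = start_pfn; i < end_pfn; i++) {
		page = pfn_to_online_page(i);
		if (!page)
			return false;

		if (page_zone(page) != z)
			return false;

		if (PageReserved(page))
			return false;

		/*
		 * Any hugetlbfs page in the range makes it unsuitable
		 * for a new 1G allocation: bail out here rather than
		 * isolating the range, draining pages and failing the
		 * migration later.
		 */
		if (PageHuge(page))
			return false;
	}
	return true;
}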
>
> The hpagealloc test allocates huge pages in batches and reports the
> average latency per page over time. This test happens just after boot
> when fragmentation is not an issue. Units are in milliseconds.
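
(The internals of the hpagealloc harness aren't shown in the thread,
but a rough standalone equivalent of the measurement, assuming the
standard sysfs knob for 1G pages and the batch-of-10 pattern described
above, would be:)

/*
 * Rough standalone sketch of the measurement (not the real hpagealloc
 * harness). Grows the 1G hugepage pool in batches of 10 via the
 * standard sysfs knob and reports the average latency per page.
 * Needs root and 1G hugepage support.
 */
#include <stdio.h>
#include <time.h>

#define NR_1G "/sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages"

int main(void)
{
	struct timespec t0, t1;
	int target;

	for (target = 10; target <= 100; target += 10) {
		FILE *f = fopen(NR_1G, "w");
		if (!f) {
			perror(NR_1G);
			return 1;
		}
		clock_gettime(CLOCK_MONOTONIC, &t0);
		fprintf(f, "%d\n", target);	/* nr_hugepages is a total, */
		fclose(f);			/* so each write adds 10 pages */
		clock_gettime(CLOCK_MONOTONIC, &t1);
		printf("nr_hugepages=%3d: %.2f ms/page\n", target,
		       ((t1.tv_sec - t0.tv_sec) * 1e3 +
			(t1.tv_nsec - t0.tv_nsec) / 1e6) / 10.0);
	}
	return 0;
}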
>
...
>
> BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=217022
> Fixes: eb14d4eefdc4 ("mm,page_alloc: drop unnecessary checks from pfn_range_valid_contig")
> Reported-by: Yuanxi Liu <y.liu@naruida.com>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Thanks Mel!
--
Oscar Salvador
SUSE Labs
Thread overview: 8+ messages
2023-04-14 14:14 Mel Gorman
2023-04-14 14:38 ` Vlastimil Babka
2023-04-14 14:39 ` David Hildenbrand
2023-04-14 14:53 ` Michal Hocko
2023-04-14 19:46 ` Andrew Morton
2023-04-14 23:20 ` Doug Berger
2023-04-17 8:09 ` Mel Gorman
2023-04-16 15:31 ` Oscar Salvador [this message]