From: Michal Hocko <mhocko@kernel.org>
To: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: akpm@linux-foundation.org, ard.biesheuvel@linaro.org,
broonie@kernel.org, christophe.leroy@c-s.fr,
dan.j.williams@intel.com, dave.hansen@intel.com,
davem@davemloft.net, gerald.schaefer@de.ibm.com,
gregkh@linuxfoundation.org, heiko.carstens@de.ibm.com,
jgg@ziepe.ca, jhogan@kernel.org, keescook@chromium.org,
kirill@shutemov.name, linux@armlinux.org.uk,
mark.rutland@arm.com, mike.kravetz@oracle.com,
mm-commits@vger.kernel.org, mpe@ellerman.id.au,
paul.burton@mips.com, paulus@samba.org,
penguin-kernel@i-love.sakura.ne.jp, peterz@infradead.org,
ralf@linux-mips.org, rppt@linux.vnet.ibm.com,
schowdary@nvidia.com, schwidefsky@de.ibm.com,
Steven.Price@arm.com, tglx@linutronix.de, vbabka@suse.cz,
vgupta@synopsys.com, willy@infradead.org,
yamada.masahiro@socionext.com, linux-mm@kvack.org
Subject: Re: + mm-hugetlb-make-alloc_gigantic_page-available-for-general-use.patch added to -mm tree
Date: Mon, 14 Oct 2019 15:00:23 +0200
Message-ID: <20191014130023.GF317@dhcp22.suse.cz>
In-Reply-To: <89f328f0-ca00-a9b0-8df6-808a53bfcdd4@arm.com>
On Mon 14-10-19 18:23:00, Anshuman Khandual wrote:
>
>
> On 10/14/2019 05:47 PM, Michal Hocko wrote:
> > On Fri 11-10-19 13:29:32, Andrew Morton wrote:
> >> alloc_gigantic_page() implements an allocation method that scans over
> >> various zones looking for a large contiguous memory block which could not
> >> be allocated through the buddy allocator. A subsequent patch which
> >> tests arch page table helpers needs such a method to allocate a PUD_SIZE
> >> sized memory block. In the future such methods might have other use cases
> >> as well. So alloc_gigantic_page() has been split, carving out the actual
> >> memory allocation into a new helper made available as
> >> alloc_gigantic_page_order().
> >
> > You are exporting a helper used for hugetlb internally. Is this really
>
> Right, because the helper, i.e. alloc_gigantic_page(), is generic enough: it
> scans over various zones to allocate a block of the requested order which
> could not be allocated through the buddy. The only thing which is HugeTLB
> specific in there is struct hstate, from which the order is derived with
> huge_page_order(). Otherwise it is very generic.
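>
> With the split, alloc_gigantic_page() then becomes a thin wrapper; roughly
> like the below (just a sketch of the shape of the split, not the exact
> hunk from the patch):
>
> 	/* Carved out of alloc_gigantic_page(); keeps the zone-scan loop. */
> 	struct page *alloc_gigantic_page_order(unsigned int order,
> 				gfp_t gfp_mask, int nid, nodemask_t *nodemask);
>
> 	static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
> 						int nid, nodemask_t *nodemask)
> 	{
> 		/* huge_page_order() is the only hstate-specific piece left. */
> 		return alloc_gigantic_page_order(huge_page_order(h), gfp_mask,
> 						 nid, nodemask);
> 	}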
>
> > what is needed? I haven't followed this patchset but don't you simply
>
> Originally I had just implemented a similar allocator inside the test itself,
> but then figured that alloc_gigantic_page() could be factored out to create
> a generic enough allocator helper.
>
> > need a generic 1GB allocator? If yes then you should be looking at
>
> The test needs a PUD_SIZE allocator.
>
> > alloc_contig_range.
>
> IIUC alloc_contig_range() requires a (start pfn, end pfn) range for the
> region to be allocated. But before that, all applicable zones need to be
> scanned to find an available and suitable pfn range for alloc_contig_range()
> to try. In this case the pfn_range_valid_gigantic() check seemed reasonable
> while scanning the zones.
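>
> A rough sketch of that scan, assuming pfn_range_valid_gigantic() stays as
> the filter (the function name here is made up, and the zone->lock handling
> of the current code is elided for brevity):
>
> 	static struct page *scan_zones_alloc_contig(unsigned long nr_pages,
> 				gfp_t gfp_mask, int nid, nodemask_t *nodemask)
> 	{
> 		struct zonelist *zonelist = node_zonelist(nid, gfp_mask);
> 		struct zoneref *z;
> 		struct zone *zone;
> 		unsigned long pfn;
>
> 		for_each_zone_zonelist_nodemask(zone, z, zonelist,
> 					gfp_zone(gfp_mask), nodemask) {
> 			/* Walk the zone in nr_pages-aligned steps. */
> 			for (pfn = ALIGN(zone->zone_start_pfn, nr_pages);
> 			     pfn + nr_pages <= zone_end_pfn(zone);
> 			     pfn += nr_pages) {
> 				if (!pfn_range_valid_gigantic(zone, pfn, nr_pages))
> 					continue;
> 				/* alloc_contig_range() takes a [start, end) pfn range. */
> 				if (!alloc_contig_range(pfn, pfn + nr_pages,
> 							MIGRATE_MOVABLE, gfp_mask))
> 					return pfn_to_page(pfn);
> 			}
> 		}
> 		return NULL;
> 	}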
>
> If pfn_range_valid_gigantic() is good enough or could be made more generic,
> then the newly factored alloc_gigantic_page_order() could be made a helper
> in mm/page_alloc.c.
OK, thanks for the clarification. This all means that this patch is not
the right approach. If you need a more generic alloc_contig_range then
add it to page_alloc.c and make it completely independent of the hugetlb
config and code. The hugetlb allocator can then reuse that helper.
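
Something along these lines in mm/page_alloc.c would do; the name and exact
signature below are only illustrative:

	/*
	 * Hugetlb-independent interface: scan zones on the given
	 * node/nodemask for a free nr_pages sized range and claim it
	 * via alloc_contig_range().
	 */
	struct page *alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask,
					int nid, nodemask_t *nodemask);

and the hugetlb side then reduces to:

	return alloc_contig_pages(1UL << huge_page_order(h), gfp_mask,
				  nid, nodemask);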
--
Michal Hocko
SUSE Labs
Thread overview: 15+ messages
[not found] <20191011202932.GZoUOoURm%akpm@linux-foundation.org>
2019-10-14 12:17 ` Michal Hocko
2019-10-14 12:53 ` Anshuman Khandual
2019-10-14 13:00 ` Michal Hocko [this message]
2019-10-14 13:08 ` Anshuman Khandual
2019-10-14 13:21 ` Michal Hocko
2019-10-14 16:52 ` Mike Kravetz
2019-10-15 9:57 ` Anshuman Khandual
2019-10-15 10:31 ` Michal Hocko
2019-10-15 10:34 ` Anshuman Khandual
2019-10-14 20:29 ` Matthew Wilcox
2019-10-15 9:30 ` Anshuman Khandual
2019-10-15 11:24 ` Matthew Wilcox
2019-10-15 11:36 ` Michal Hocko
2019-10-15 12:14 ` Anshuman Khandual
2019-10-16 8:55 ` Anshuman Khandual