From: Muchun Song <muchun.song@linux.dev>
To: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Matthew Wilcox <willy@infradead.org>,
Andrew Morton <akpm@linux-foundation.org>,
Mike Kravetz <mike.kravetz@oracle.com>,
Linux-MM <linux-mm@kvack.org>, Yuan Can <yuancan@huawei.com>
Subject: Re: [PATCH resend] mm: hugetlb_vmemmap: use bulk allocator in alloc_vmemmap_page_list()
Date: Wed, 6 Sep 2023 22:32:35 +0800 [thread overview]
Message-ID: <11F83276-0C5C-4526-85F7-C807D741EAFD@linux.dev> (raw)
In-Reply-To: <a3458610-e812-4b0d-8843-5ca058592134@huawei.com>
> On Sep 6, 2023, at 17:33, Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
>
>
>
> On 2023/9/6 11:25, Muchun Song wrote:
>>> On Sep 6, 2023, at 11:13, Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
>>>
>>>
>>>
>>> On 2023/9/6 10:47, Matthew Wilcox wrote:
>>>> On Tue, Sep 05, 2023 at 06:35:08PM +0800, Kefeng Wang wrote:
>>>>> alloc_vmemmap_page_list() needs to allocate 4095 pages (for a 1G
>>>>> HugeTLB page) or 7 pages (for a 2M one) at a time, so let's add a
>>>>> bulk allocator variant, alloc_pages_bulk_list_node(), and switch
>>>>> alloc_vmemmap_page_list() over to it to speed up page allocation.
>>>> Argh, no, please don't do this.
>>>> Iterating a linked list is _expensive_. It is about 10x quicker to
>>>> iterate an array than a linked list. Adding the list_head option
>>>> to __alloc_pages_bulk() was a colossal mistake. Don't perpetuate it.
>>>> These pages are going into an array anyway. Don't put them on a list
>>>> first.
>>>
>>> struct vmemmap_remap_walk - walk vmemmap page table
>>>
>>> * @vmemmap_pages: the list head of the vmemmap pages that can be freed
>>> * or is mapped from.
>>>
>>> At present, struct vmemmap_remap_walk uses a list for the vmemmap page
>>> table walk, so do you mean we need to change vmemmap_pages from a list to
>>> an array first and then use the array bulk API, and maybe even kill the
>>> list bulk API?
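For context, the structure being referred to looks roughly like this in mm/hugetlb_vmemmap.c around the time of this thread (a simplified sketch; the exact field set may differ between releases):

/* Simplified; see mm/hugetlb_vmemmap.c for the authoritative definition. */
struct vmemmap_remap_walk {
	void			(*remap_pte)(pte_t *pte, unsigned long addr,
					     struct vmemmap_remap_walk *walk);
	unsigned long		nr_walked;
	struct page		*reuse_page;
	unsigned long		reuse_addr;
	/*
	 * The list head in question: freshly allocated (or to-be-freed)
	 * vmemmap pages are chained here via page->lru.
	 */
	struct list_head	*vmemmap_pages;
};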
>> It'll be a little complex for hugetlb_vmemmap. Would it be reasonable to
>> use __alloc_pages_bulk() directly in hugetlb_vmemmap itself?
>
>
> We could use alloc_pages_bulk_array_node() here without introducing a new
> alloc_pages_bulk_list_node(), and focus only on accelerating page
> allocation for now.
>
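A rough sketch of what the array-based variant Kefeng suggests could look like (hypothetical code, not from the posted patch; the function name alloc_vmemmap_page_list_array() is made up). The extra array allocation and the copy back onto the list are the added complexity referred to in the reply below:

#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/slab.h>

/* Hypothetical array-based variant; not from the posted patch. */
static int alloc_vmemmap_page_list_array(unsigned long start, unsigned long end,
					 struct list_head *list)
{
	gfp_t gfp_mask = GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_THISNODE;
	unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
	int nid = page_to_nid((struct page *)start);
	unsigned long i, nr_allocated = 0;
	struct page **pages;

	/* Extra step 1: the page array itself has to be allocated and freed. */
	pages = kvcalloc(nr_pages, sizeof(struct page *), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	/* The bulk allocator may return fewer pages, so retry until full. */
	while (nr_allocated < nr_pages) {
		unsigned long nr;

		nr = alloc_pages_bulk_array_node(gfp_mask, nid, nr_pages, pages);
		if (nr == nr_allocated)		/* no forward progress */
			goto err;
		nr_allocated = nr;
	}

	/* Extra step 2: move the pages onto the list the remap walk expects. */
	for (i = 0; i < nr_pages; i++)
		list_add_tail(&pages[i]->lru, list);
	kvfree(pages);
	return 0;
err:
	for (i = 0; i < nr_allocated; i++)
		__free_pages(pages[i], 0);
	kvfree(pages);
	return -ENOMEM;
}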
No. Using alloc_pages_bulk_array_node() would add more complexity to
hugetlb_vmemmap (you need to allocate an array first), and the path you
optimized is only a control path where the gain is at the millisecond level.
So I don't think it is worth doing.

Thanks.
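For context, the allocation loop under discussion looks roughly like this before the patch (a simplified, non-verbatim sketch of alloc_vmemmap_page_list() in mm/hugetlb_vmemmap.c):

#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/mm.h>

/* Simplified sketch of the pre-patch helper; not a verbatim copy. */
static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
				   struct list_head *list)
{
	gfp_t gfp_mask = GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_THISNODE;
	unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
	int nid = page_to_nid((struct page *)start);
	struct page *page, *next;

	/* One alloc_pages_node() call per vmemmap page: 7 for 2M, 4095 for 1G. */
	while (nr_pages--) {
		page = alloc_pages_node(nid, gfp_mask, 0);
		if (!page)
			goto out;
		list_add_tail(&page->lru, list);
	}

	return 0;
out:
	list_for_each_entry_safe(page, next, list, lru)
		__free_pages(page, 0);
	return -ENOMEM;
}

Muchun's earlier suggestion of calling __alloc_pages_bulk() directly from here would replace the while loop with (possibly retried) bulk calls that fill the same list, leaving the list-based remap walk untouched.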