From: Jia He <hejianet@gmail.com>
To: Wei Yang <richard.weiyang@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Michal Hocko <mhocko@suse.com>,
Catalin Marinas <catalin.marinas@arm.com>,
Mel Gorman <mgorman@suse.de>, Will Deacon <will.deacon@arm.com>,
Mark Rutland <mark.rutland@arm.com>,
Ard Biesheuvel <ard.biesheuvel@linaro.org>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
Pavel Tatashin <pasha.tatashin@oracle.com>,
Daniel Jordan <daniel.m.jordan@oracle.com>,
AKASHI Takahiro <takahiro.akashi@linaro.org>,
Gioh Kim <gi-oh.kim@profitbricks.com>,
Steven Sistare <steven.sistare@oracle.com>,
Daniel Vacek <neelx@redhat.com>,
Eugeniu Rosca <erosca@de.adit-jv.com>,
Vlastimil Babka <vbabka@suse.cz>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
James Morse <james.morse@arm.com>,
Steve Capper <steve.capper@arm.com>,
x86@kernel.org, Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
Kate Stewart <kstewart@linuxfoundation.org>,
Philippe Ombredanne <pombredanne@nexb.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Kemi Wang <kemi.wang@intel.com>, Petr Tesarik <ptesarik@suse.com>,
YASUAKI ISHIMATSU <yasu.isimatu@gmail.com>,
Andrey Ryabinin <aryabinin@virtuozzo.com>,
Nikolay Borisov <nborisov@suse.com>
Subject: Re: [PATCH v3 0/5] optimize memblock_next_valid_pfn and early_pfn_valid
Date: Wed, 28 Mar 2018 09:45:33 +0800 [thread overview]
Message-ID: <49fefc1c-81dd-98f8-7da5-5b5e85d919e4@gmail.com> (raw)
In-Reply-To: <20180328003012.GA91956@WeideMacBook-Pro.local>
On 3/28/2018 8:30 AM, Wei Yang Wrote:
> On Tue, Mar 27, 2018 at 03:15:08PM +0800, Jia He wrote:
>>
>> On 3/27/2018 9:02 AM, Wei Yang Wrote:
>>> On Sun, Mar 25, 2018 at 08:02:14PM -0700, Jia He wrote:
>>>> Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
>>>> where possible") tried to optimize the loop in memmap_init_zone(). But
>>>> there is still some room for improvement.
>>>>
>>>> Patch 1 retains memblock_next_valid_pfn() when CONFIG_HAVE_ARCH_PFN_VALID
>>>> is enabled.
>>>> Patch 2 optimizes memblock_next_valid_pfn().
>>>> Patches 3~5 optimize early_pfn_valid(); I had to split this into several
>>>> parts because the changes are spread across subsystems.
>>>>
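For context, the loop being optimized looks roughly like the sketch below. This is a simplified rendition of the memmap_init_zone() path after commit b92df1de5d28; the exact config guards and surrounding code may differ from the tree this series is based on.

	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		if (context != MEMMAP_EARLY)
			goto not_early;

		if (!early_pfn_valid(pfn)) {
#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
			/*
			 * Skip to the pfn preceding the next valid one (or
			 * end_pfn), so that the next loop iteration lands on
			 * a valid pfn (or exits at end_pfn).
			 */
			pfn = memblock_next_valid_pfn(pfn, end_pfn) - 1;
#endif
			continue;
		}
		if (!early_pfn_in_nid(pfn, nid))
			continue;
		/* ... initialize the struct page for this pfn ... */
	}

Per the cover letter, patch 2 targets memblock_next_valid_pfn() and patches 3~5 target early_pfn_valid() inside this loop.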
>>>> I tested the pfn loop in memmap_init(), and it behaves the same as before.
>>>> As for the performance improvement: with this series applied, the time
>>>> overhead of memmap_init() is reduced from 41313 us to 24345 us on my
>>>> armv8a server (QDF2400 with 96G memory).
>>>>
>>>> The memblock region information from my server is attached below.
>>>> [ 86.956758] Zone ranges:
>>>> [ 86.959452] DMA [mem 0x0000000000200000-0x00000000ffffffff]
>>>> [ 86.966041] Normal [mem 0x0000000100000000-0x00000017ffffffff]
>>>> [ 86.972631] Movable zone start for each node
>>>> [ 86.977179] Early memory node ranges
>>>> [ 86.980985] node 0: [mem 0x0000000000200000-0x000000000021ffff]
>>>> [ 86.987666] node 0: [mem 0x0000000000820000-0x000000000307ffff]
>>>> [ 86.994348] node 0: [mem 0x0000000003080000-0x000000000308ffff]
>>>> [ 87.001029] node 0: [mem 0x0000000003090000-0x00000000031fffff]
>>>> [ 87.007710] node 0: [mem 0x0000000003200000-0x00000000033fffff]
>>>> [ 87.014392] node 0: [mem 0x0000000003410000-0x000000000563ffff]
>>>> [ 87.021073] node 0: [mem 0x0000000005640000-0x000000000567ffff]
>>>> [ 87.027754] node 0: [mem 0x0000000005680000-0x00000000056dffff]
>>>> [ 87.034435] node 0: [mem 0x00000000056e0000-0x00000000086fffff]
>>>> [ 87.041117] node 0: [mem 0x0000000008700000-0x000000000871ffff]
>>>> [ 87.047798] node 0: [mem 0x0000000008720000-0x000000000894ffff]
>>>> [ 87.054479] node 0: [mem 0x0000000008950000-0x0000000008baffff]
>>>> [ 87.061161] node 0: [mem 0x0000000008bb0000-0x0000000008bcffff]
>>>> [ 87.067842] node 0: [mem 0x0000000008bd0000-0x0000000008c4ffff]
>>>> [ 87.074524] node 0: [mem 0x0000000008c50000-0x0000000008e2ffff]
>>>> [ 87.081205] node 0: [mem 0x0000000008e30000-0x0000000008e4ffff]
>>>> [ 87.087886] node 0: [mem 0x0000000008e50000-0x0000000008fcffff]
>>>> [ 87.094568] node 0: [mem 0x0000000008fd0000-0x000000000910ffff]
>>>> [ 87.101249] node 0: [mem 0x0000000009110000-0x00000000092effff]
>>>> [ 87.107930] node 0: [mem 0x00000000092f0000-0x000000000930ffff]
>>>> [ 87.114612] node 0: [mem 0x0000000009310000-0x000000000963ffff]
>>>> [ 87.121293] node 0: [mem 0x0000000009640000-0x000000000e61ffff]
>>>> [ 87.127975] node 0: [mem 0x000000000e620000-0x000000000e64ffff]
>>>> [ 87.134657] node 0: [mem 0x000000000e650000-0x000000000fffffff]
>>>> [ 87.141338] node 0: [mem 0x0000000010800000-0x0000000017feffff]
>>>> [ 87.148019] node 0: [mem 0x000000001c000000-0x000000001c00ffff]
>>>> [ 87.154701] node 0: [mem 0x000000001c010000-0x000000001c7fffff]
>>>> [ 87.161383] node 0: [mem 0x000000001c810000-0x000000007efbffff]
>>>> [ 87.168064] node 0: [mem 0x000000007efc0000-0x000000007efdffff]
>>>> [ 87.174746] node 0: [mem 0x000000007efe0000-0x000000007efeffff]
>>>> [ 87.181427] node 0: [mem 0x000000007eff0000-0x000000007effffff]
>>>> [ 87.188108] node 0: [mem 0x000000007f000000-0x00000017ffffffff]
>>> Hi, Jia
>>>
>>> I haven't taken a deep look into your code yet; just one curious question
>>> about your memory layout.
>>>
>>> The log above is printed in free_area_init_nodes(), which iterates over
>>> memblock.memory and prints the regions. If I am not wrong, memory regions
>>> added to memblock.memory are kept ordered and merged whenever possible.
>>>
>>> Yet from your log, I see many regions that could be merged but remain
>>> separate. For example, the last two regions:
>>>
>>> node 0: [mem 0x000000007eff0000-0x000000007effffff]
>>> node 0: [mem 0x000000007f000000-0x00000017ffffffff]
>>>
>>> So I am curious why they are kept separate instead of being combined into one.
>>>
>>> From the code, the possible reason is that the regions' flags differ from
>>> each other. If you have time, would you mind taking a look into this?
>>>
>> Hi Wei
>> I think these two regions have different flags:
>> [    0.000000] idx=30,region [7eff0000:10000]flag=4    <--- aka MEMBLOCK_NOMAP
>> [    0.000000]   node   0: [mem 0x000000007eff0000-0x000000007effffff]
>> [    0.000000] idx=31,region [7f000000:81000000]flag=0  <--- aka MEMBLOCK_NONE
>> [    0.000000]   node   0: [mem 0x000000007f000000-0x00000017ffffffff]
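This matches the merge rule in mm/memblock.c: adjacent regions are only coalesced when they are physically contiguous and share the same node id and the same flags, so a NOMAP region is never merged with a plain one. A simplified sketch of memblock_merge_regions(), modelled on the ~4.16 code (treat the details as approximate):

static void memblock_merge_regions(struct memblock_type *type)
{
	int i = 0;

	while (i < type->cnt - 1) {
		struct memblock_region *this = &type->regions[i];
		struct memblock_region *next = &type->regions[i + 1];

		/* only merge contiguous regions with matching nid and flags */
		if (this->base + this->size != next->base ||
		    memblock_get_region_node(this) !=
		    memblock_get_region_node(next) ||
		    this->flags != next->flags) {
			i++;
			continue;
		}

		/* absorb 'next' into 'this' and close the gap in the array */
		this->size += next->size;
		memmove(next, next + 1, (type->cnt - (i + 2)) * sizeof(*next));
		type->cnt--;
	}
}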
> Thanks.
>
> Hmm, I am not that familiar with those flags, but they look like they
> indicate the physical capabilities of the ranges.
>
> MEMBLOCK_NONE      no special request
> MEMBLOCK_HOTPLUG   hotplug-able
> MEMBLOCK_MIRROR    highly reliable (mirrored)
> MEMBLOCK_NOMAP     no direct map
>
> However, these flags are not set when the ranges are first added: if you look
> at memblock_add_range(), the last parameter passed in is always 0. This means
> the several separate ranges we see reflect the layout of the physical memory
> itself.
>
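For reference, the flag values and the add path look roughly as follows, sketched from the ~4.16 include/linux/memblock.h and mm/memblock.c (exact values and signatures may differ in other trees). flag=4 in the log above corresponds to MEMBLOCK_NOMAP; such flags are applied only after the region has been added, e.g. via memblock_mark_nomap().

/* region attribute flags */
enum {
	MEMBLOCK_NONE		= 0x0,	/* no special request */
	MEMBLOCK_HOTPLUG	= 0x1,	/* hotpluggable region */
	MEMBLOCK_MIRROR		= 0x2,	/* mirrored region */
	MEMBLOCK_NOMAP		= 0x4,	/* don't add to kernel direct mapping */
};

int __init_memblock memblock_add(phys_addr_t base, phys_addr_t size)
{
	/* new memory is always added with flags == 0 (MEMBLOCK_NONE) */
	return memblock_add_range(&memblock.memory, base, size,
				  MAX_NUMNODES, 0);
}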
> Then why is this layout so scattered? As you can see, several ranges are
> smaller than 1MB.
>
> If, and this is just my assumption, we could merge some of them, we could get
> better performance: fewer ranges, less search time.
Thanks for your suggestions, Wei.
This needs further digging; I will consider improving it in another patchset.
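For context, the search cost Wei mentions is the binary search over the memblock region array; a simplified sketch along the lines of the ~4.16 memblock_search() is below. Its cost grows with log2 of the region count, so fewer (merged) regions directly translate into cheaper lookups.

/* binary search for the region covering @addr; returns its index or -1 */
static int memblock_search(struct memblock_type *type, phys_addr_t addr)
{
	unsigned int left = 0, right = type->cnt;

	do {
		unsigned int mid = (right + left) / 2;

		if (addr < type->regions[mid].base)
			right = mid;
		else if (addr >= (type->regions[mid].base +
				  type->regions[mid].size))
			left = mid + 1;
		else
			return mid;
	} while (left < right);

	return -1;
}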
--
Cheers,
Jia
Thread overview: 25+ messages
2018-03-26 3:02 Jia He
2018-03-26 3:02 ` [PATCH v3 1/5] mm: page_alloc: remain memblock_next_valid_pfn() when CONFIG_HAVE_ARCH_PFN_VALID is enable Jia He
2018-03-28 9:18 ` Wei Yang
2018-03-28 9:49 ` Jia He
2018-04-02 8:12 ` Wei Yang
2018-04-02 9:17 ` Jia He
2018-04-03 0:14 ` Wei Yang
2018-03-26 3:02 ` [PATCH v3 2/5] mm: page_alloc: reduce unnecessary binary search in memblock_next_valid_pfn() Jia He
2018-03-27 17:17 ` Daniel Vacek
2018-03-28 2:09 ` Jia He
2018-03-28 9:26 ` Wei Yang
2018-03-29 8:06 ` Jia He
2018-03-30 1:43 ` Wei Yang
2018-03-30 2:12 ` Jia He
2018-03-26 3:02 ` [PATCH v3 3/5] mm/memblock: introduce memblock_search_pfn_regions() Jia He
2018-03-26 3:02 ` [PATCH v3 4/5] arm64: introduce pfn_valid_region() Jia He
2018-03-28 9:38 ` Wei Yang
2018-03-26 3:02 ` [PATCH v3 5/5] mm: page_alloc: reduce unnecessary binary search in early_pfn_valid() Jia He
2018-03-27 17:51 ` Daniel Vacek
2018-03-28 2:10 ` Jia He
2018-03-27 1:02 ` [PATCH v3 0/5] optimize memblock_next_valid_pfn and early_pfn_valid Wei Yang
2018-03-27 7:15 ` Jia He
2018-03-28 0:30 ` Wei Yang
2018-03-28 1:45 ` Jia He [this message]
2018-03-28 2:36 ` Wei Yang