From: David Hildenbrand <david@redhat.com>
To: Aneesh Kumar K V <aneesh.kumar@linux.ibm.com>,
linux-mm@kvack.org, akpm@linux-foundation.org,
mpe@ellerman.id.au, linuxppc-dev@lists.ozlabs.org,
npiggin@gmail.com, christophe.leroy@csgroup.eu
Cc: Oscar Salvador <osalvador@suse.de>,
Michal Hocko <mhocko@suse.com>,
Vishal Verma <vishal.l.verma@intel.com>
Subject: Re: [PATCH v2 1/5] mm/hotplug: Embed vmem_altmap details in memory block
Date: Fri, 7 Jul 2023 17:42:55 +0200
Message-ID: <26e9bd4b-965a-4aaa-6ae9-b1600c7ef52d@redhat.com>
In-Reply-To: <eaeb0b15-0efb-039c-27d4-2ca84b5a2b5d@linux.ibm.com>
On 07.07.23 15:30, Aneesh Kumar K V wrote:
> On 7/7/23 5:47 PM, David Hildenbrand wrote:
>> On 06.07.23 18:06, Aneesh Kumar K V wrote:
>>> On 7/6/23 6:29 PM, David Hildenbrand wrote:
>>>> On 06.07.23 14:32, Aneesh Kumar K V wrote:
>>>>> On 7/6/23 4:44 PM, David Hildenbrand wrote:
>>>>>> On 06.07.23 11:36, Aneesh Kumar K V wrote:
>>>>>>> On 7/6/23 2:48 PM, David Hildenbrand wrote:
>>>>>>>> On 06.07.23 10:50, Aneesh Kumar K.V wrote:
>>>>>>>>> With memmap on memory, some architectures need more details w.r.t. the altmap,
>>>>>>>>> such as base_pfn, end_pfn, etc., to unmap vmemmap memory.
>>>>>>>>
>>>>>>>> Can you elaborate why ppc64 needs that and x86-64 + aarch64 don't?
>>>>>>>>
>>>>>>>> IOW, why can't ppc64 simply allocate the vmemmap from the start of the memblock (-> base_pfn) and use the stored number of vmemmap pages to calculate the end_pfn?
>>>>>>>>
>>>>>>>> To rephrase: if the vmemmap is not at the beginning and doesn't cover full pageblocks, memory onlining/offlining would be broken.
>>>>>>>>
>>>>>>>> [...]
>>>>>>>
>>>>>>>
>>>>>>> With ppc64, a 64K page size, and different memory block sizes, we can end up allocating vmemmap backing memory from outside the altmap, because
>>>>>>> a single vmemmap page can cover 1024 pages (64 * 1024 / sizeof(struct page)), and those can point to pages outside the dev_pagemap range.
>>>>>>> So on free we check
>>>>>>
>>>>>> So you end up with a mixture of altmap and ordinarily-allocated vmemmap pages? That sounds wrong (and is counter-intuitive to the feature in general, where we *don't* want to allocate the vmemmap from outside the altmap).
>>>>>>
>>>>>> (64 * 1024) / sizeof(struct page) -> 1024 pages
>>>>>>
>>>>>> 1024 pages * 64k = 64 MiB.
>>>>>>
>>>>>> What's the memory block size on these systems? If it's >= 64 MiB, the vmemmap of a single memory block fills whole pages, and we should be fine.
>>>>>>
>>>>>> Smells like you want to disable the feature on a 64k system.
>>>>>>
>>>>>
>>>>> But that part of vmemmap_free is common to dax, dax kmem, and the new memmap on memory feature. I.e., the ppc64 vmemmap_free has checks which require
>>>>> a full altmap structure with all the details in it. So for memmap on memory to work on ppc64 we do require a similar altmap struct. Hence the idea
>>>>> of adding vmemmap_altmap to struct memory_block.
>>>>
>>>> I'd suggest making sure that for the memmap_on_memory case you really *always* allocate from the altmap (that's what the feature is about, after all), and otherwise block the feature (i.e., arch_mhp_supports_... should reject it).
>>>>
>>>
>>> Sure. How about?
>>>
>>> bool mhp_supports_memmap_on_memory(unsigned long size)
>>> {
>>> 	unsigned long nr_pages = size >> PAGE_SHIFT;
>>> 	unsigned long vmemmap_size = nr_pages * sizeof(struct page);
>>>
>>> 	if (!radix_enabled())
>>> 		return false;
>>>
>>> 	/*
>>> 	 * memmap on memory is only supported for memory-block-sized
>>> 	 * add/remove.
>>> 	 */
>>> 	if (size != memory_block_size_bytes())
>>> 		return false;
>>>
>>> 	/*
>>> 	 * Also make sure the vmemmap allocation is fully contained,
>>> 	 * so that we always allocate vmemmap memory from the altmap area.
>>> 	 */
>>> 	if (!IS_ALIGNED(vmemmap_size, PAGE_SIZE))
>>> 		return false;
>>>
>>> 	/*
>>> 	 * The pageblock alignment requirement is met by using
>>> 	 * reserve blocks in the altmap.
>>> 	 */
>>> 	return true;
>>> }
>>
>> Better, but the PAGE_SIZE check could be added to common code as well.
>>
>> ... but the pageblock check in common code implies a PAGE_SIZE check, so why do we need any other check besides the radix_enabled() check for ppc64? Just keep all the other checks in common code as they are.
>>
>> If your vmemmap does not cover full pageblocks (which implies full pages), the feature cannot be used *unless* we're willing to waste altmap space in the vmemmap to cover one full pageblock.
>>
>> Wasting hotplugged memory certainly sounds wrong?
>>
>>
>> So I'd appreciate it if you could explain why the pageblock check should not apply to ppc64.
>>
>
> If we want things to be aligned to a pageblock (2M), we will have to use 2M of vmemmap space, and that implies a 2G memory block with a 64K page size. That requirement makes the feature not useful at all
> on power. The compromise I came to is what I mentioned in the commit message for enabling the feature on ppc64.
As we'll always handle a 2M pageblock, you'll end up wasting memory.
Assume a 64 MiB memory block:

With 64k pages: 1024 struct pages -> 64 KiB of vmemmap; aligning up to a 2 MiB pageblock wastes almost 2 MiB, ~3.1 %.
With 4k pages: 16384 struct pages -> 1 MiB of vmemmap; 1 MiB wasted, ~1.5 %.
It gets worse with smaller memory block sizes.
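
For completeness, a quick userspace sketch of that arithmetic (a sketch
only: sizeof(struct page) == 64 and the 2 MiB pageblock are assumptions
baked in, and waste_per_block() is an illustrative helper, not kernel
code):

#include <stdio.h>

#define PAGEBLOCK_BYTES		(2UL << 20)	/* assumed 2 MiB pageblock */
#define STRUCT_PAGE_BYTES	64UL		/* assumed sizeof(struct page) */

/* Bytes lost when the vmemmap is aligned up to a full pageblock. */
static unsigned long waste_per_block(unsigned long block_bytes,
				     unsigned long page_bytes)
{
	unsigned long vmemmap = (block_bytes / page_bytes) * STRUCT_PAGE_BYTES;
	unsigned long aligned = (vmemmap + PAGEBLOCK_BYTES - 1) &
				~(PAGEBLOCK_BYTES - 1);

	return aligned - vmemmap;
}

int main(void)
{
	unsigned long block = 64UL << 20;	/* 64 MiB memory block */

	/* prints 1984 KiB (~3.1 %) for 64k pages, 1024 KiB (~1.5 %) for 4k */
	printf("64k: %lu KiB wasted\n", waste_per_block(block, 64UL << 10) >> 10);
	printf("4k:  %lu KiB wasted\n", waste_per_block(block, 4UL << 10) >> 10);
	return 0;
}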
>
> We use the altmap.reserve feature to align things correctly at pageblock granularity. We can end up losing some pages of memory with this. For example: with a 256MB memory block
> size, we require 4 pages to map the vmemmap pages; in order to align things correctly we end up adding a reserve of 28 pages, i.e., for every 4096 pages,
> 28 pages get reserved.
You can simply align nr_vmemmap_pages up to pageblocks in the memory
hotplug code (e.g., depending on a config/arch knob for whether wasting
memory is acceptable), because the pageblock granularity is a memory
onlining/offlining limitation and should be checked+handled exactly
there.
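
Roughly something like this in the hotplug code (a sketch only:
mhp_tolerate_vmemmap_waste() is a made-up knob, while ALIGN() and
pageblock_nr_pages are the existing kernel helpers):

	/*
	 * Round the vmemmap up to a full pageblock before carving out
	 * the altmap, so that onlining/offlining only ever deals with
	 * whole pageblocks; the rounded-up tail is deliberately wasted.
	 */
	if (mhp_tolerate_vmemmap_waste())
		nr_vmemmap_pages = ALIGN(nr_vmemmap_pages, pageblock_nr_pages);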
--
Cheers,
David / dhildenb