From: David Hildenbrand <david@redhat.com>
To: Aneesh Kumar K V <aneesh.kumar@linux.ibm.com>,
linux-mm@kvack.org, akpm@linux-foundation.org,
mpe@ellerman.id.au, linuxppc-dev@lists.ozlabs.org,
npiggin@gmail.com, christophe.leroy@csgroup.eu
Cc: Oscar Salvador <osalvador@suse.de>,
Michal Hocko <mhocko@suse.com>,
Vishal Verma <vishal.l.verma@intel.com>
Subject: Re: [PATCH v2 1/5] mm/hotplug: Embed vmem_altmap details in memory block
Date: Thu, 6 Jul 2023 13:14:13 +0200 [thread overview]
Message-ID: <e975f02b-1d35-8f22-9f3a-dfe0209306a1@redhat.com> (raw)
In-Reply-To: <996e226a-2835-5b53-2255-2005c6335f98@linux.ibm.com>
On 06.07.23 11:36, Aneesh Kumar K V wrote:
> On 7/6/23 2:48 PM, David Hildenbrand wrote:
>> On 06.07.23 10:50, Aneesh Kumar K.V wrote:
>>> With memmap on memory, some architectures need more details w.r.t. the
>>> altmap, such as base_pfn, end_pfn, etc., to unmap vmemmap memory.
>>
>> Can you elaborate why ppc64 needs that and x86-64 + aarch64 don't?
>>
>> IOW, why can't ppc64 simply allocate the vmemmap from the start of the memblock (-> base_pfn) and use the stored number of vmemmap pages to calculate the end_pfn?
>>
>> To rephrase: if the vmemmap is not at the beginning and doesn't cover full pageblocks, memory onlining/offlining would be broken.
>>
>> [...]
>
>
> With ppc64 and 64K pagesize and different memory block sizes, we can end up allocating vmemmap backing memory from outside the altmap, because
> a single vmemmap page can cover 1024 pages (64 * 1024 / sizeof(struct page)), and that coverage can extend to pages outside the dev_pagemap range.
> So on free we check:
So you end up with a mixture of altmap and ordinarily-allocated vmemmap
pages? That sounds wrong (and counter-intuitive to the feature in
general, where we *don't* want to allocate the vmemmap from outside the
altmap).
(64 * 1024) / sizeof(struct page) -> 1024 pages
1024 pages * 64k = 64 MiB.
What's the memory block size on these systems? If it's >= 64 MiB, the
vmemmap of a single memory block fills complete pages and we should
be fine.
Smells like you want to disable the feature on a 64k system.
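(For reference, a standalone sketch of that arithmetic; the 64-byte
sizeof(struct page) and the 64K base page size are assumptions for a
typical ppc64 configuration, not values taken from this thread:)

    #include <stdio.h>

    /* Assumed values for a 64K-page ppc64 configuration. */
    #define BASE_PAGE_SIZE      (64UL * 1024)
    #define STRUCT_PAGE_SIZE    64UL    /* assumed sizeof(struct page) */

    int main(void)
    {
            /* struct pages that fit into one 64K vmemmap page */
            unsigned long pages_per_vmemmap_page =
                    BASE_PAGE_SIZE / STRUCT_PAGE_SIZE;
            /* memory described by the struct pages in one vmemmap page */
            unsigned long covered = pages_per_vmemmap_page * BASE_PAGE_SIZE;

            printf("one vmemmap page describes %lu pages (%lu MiB)\n",
                   pages_per_vmemmap_page, covered >> 20);

            /*
             * If the memory block size is smaller than 'covered' (64 MiB
             * here), a single vmemmap page spans more than one memory
             * block and cannot come entirely from that block's altmap.
             */
            return 0;
    }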
>
> vmemmap_free() {
> ...
>         if (altmap) {
>                 alt_start = altmap->base_pfn;
>                 alt_end = altmap->base_pfn + altmap->reserve +
>                           altmap->free + altmap->alloc + altmap->align;
>         }
>
> ...
>         if (base_pfn >= alt_start && base_pfn < alt_end) {
>                 vmem_altmap_free(altmap, nr_pages);
>
> to see whether we did use altmap for the vmemmap allocation.
>
>>
>>> +/**
>>> + * struct vmem_altmap - pre-allocated storage for vmemmap_populate
>>> + * @base_pfn: base of the entire dev_pagemap mapping
>>> + * @reserve: pages mapped, but reserved for driver use (relative to @base)
>>> + * @free: free pages set aside in the mapping for memmap storage
>>> + * @align: pages reserved to meet allocation alignments
>>> + * @alloc: track pages consumed, private to vmemmap_populate()
>>> + */
>>> +struct vmem_altmap {
>>> +        unsigned long base_pfn;
>>> +        const unsigned long end_pfn;
>>> +        const unsigned long reserve;
>>> +        unsigned long free;
>>> +        unsigned long align;
>>> +        unsigned long alloc;
>>> +};
>>
>> Instead of embedding that, what about conditionally allocating it and storing a pointer to it in the "struct memory_block"?
>>
>> In the general case as of today, we don't have an altmap.
>>
>
> Sure, but with the memmap on memory option it is essentially adding that, right?
At least on x86_64 and aarch64 it's only used for 128 MiB DIMMs (and
notably not for memory added by hv-balloon, virtio-mem, or xen-balloon).
So in the general case it's not that frequently used. Maybe it will be
on ppc64 once wired up.
> Is the concern related to the increase in the size of
> struct memory_block?
Partially. It looks cleaner to have !mem->altmap if there is no altmap.
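(For illustration, a minimal sketch of that pointer-based layout; the
surrounding fields are simplified placeholders, not the actual
struct memory_block definition:)

    #include <linux/memremap.h>     /* struct vmem_altmap */

    struct memory_block {
            unsigned long start_section_nr;
            unsigned long state;
            /* ... other members elided ... */

            /*
             * NULL in the common case; allocated only when the block was
             * added with memmap_on_memory, so callers can simply test
             * mem->altmap instead of carrying an embedded struct around.
             */
            struct vmem_altmap *altmap;
    };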
--
Cheers,
David / dhildenb
Thread overview: 24+ messages
2023-07-06 8:50 [PATCH v2 0/5] Add support for memmap on memory feature on ppc64 Aneesh Kumar K.V
2023-07-06 8:50 ` [PATCH v2 1/5] mm/hotplug: Embed vmem_altmap details in memory block Aneesh Kumar K.V
2023-07-06 9:18 ` David Hildenbrand
2023-07-06 9:36 ` Aneesh Kumar K V
2023-07-06 11:14 ` David Hildenbrand [this message]
2023-07-06 12:32 ` Aneesh Kumar K V
2023-07-06 12:59 ` David Hildenbrand
2023-07-06 16:06 ` Aneesh Kumar K V
2023-07-07 12:17 ` David Hildenbrand
2023-07-07 13:30 ` Aneesh Kumar K V
2023-07-07 15:42 ` David Hildenbrand
2023-07-07 16:25 ` Aneesh Kumar K V
2023-07-07 20:26 ` David Hildenbrand
2023-07-06 8:50 ` [PATCH v2 2/5] mm/hotplug: Allow architecture override for memmap on memory feature Aneesh Kumar K.V
2023-07-06 9:19 ` David Hildenbrand
2023-07-06 8:50 ` [PATCH v2 3/5] mm/hotplug: Simplify the handling of MHP_MEMMAP_ON_MEMORY flag Aneesh Kumar K.V
2023-07-06 9:24 ` David Hildenbrand
2023-07-06 10:04 ` Aneesh Kumar K V
2023-07-06 11:20 ` David Hildenbrand
2023-07-06 8:50 ` [PATCH v2 4/5] mm/hotplug: Simplify ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE kconfig Aneesh Kumar K.V
2023-07-06 8:53 ` David Hildenbrand
2023-07-06 8:50 ` [PATCH v2 5/5] powerpc/book3s64/memhotplug: Enable memmap on memory for radix Aneesh Kumar K.V
2023-07-06 9:07 ` David Hildenbrand
2023-07-06 9:27 ` Aneesh Kumar K V