From: David Hildenbrand <david@redhat.com>
To: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>,
linux-mm@kvack.org, akpm@linux-foundation.org,
mpe@ellerman.id.au, linuxppc-dev@lists.ozlabs.org,
npiggin@gmail.com, christophe.leroy@csgroup.eu
Cc: Oscar Salvador <osalvador@suse.de>,
Michal Hocko <mhocko@suse.com>,
Vishal Verma <vishal.l.verma@intel.com>
Subject: Re: [PATCH v2 1/5] mm/hotplug: Embed vmem_altmap details in memory block
Date: Thu, 6 Jul 2023 11:18:58 +0200 [thread overview]
Message-ID: <72488b8a-8f1e-c652-ab48-47e38290441f@redhat.com> (raw)
In-Reply-To: <20230706085041.826340-2-aneesh.kumar@linux.ibm.com>
On 06.07.23 10:50, Aneesh Kumar K.V wrote:
> With memmap on memory, some architectures need more details w.r.t. altmap,
> such as base_pfn, end_pfn, etc., to unmap vmemmap memory.
Can you elaborate why ppc64 needs that and x86-64 + aarch64 don't?
IOW, why can't ppc64 simply allocate the vmemmap from the start of the
memory block (-> base_pfn) and use the stored number of vmemmap pages to
calculate the end_pfn?
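Something like the following (completely untested sketch, just to illustrate
what I mean; memmap_end_pfn() is a made-up helper, and I'm assuming the
vmemmap pages were allocated from the start of the memory block):

static unsigned long memmap_end_pfn(struct memory_block *mem)
{
	unsigned long base_pfn = section_nr_to_pfn(mem->start_section_nr);

	/* vmemmap pages sit at the beginning of the memory block */
	return base_pfn + mem->nr_vmemmap_pages;
}

With that, the arch code could derive base_pfn/end_pfn without having to
persist the whole struct vmem_altmap.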
To rephrase: if the vmemmap is not at the beginning and doesn't cover
full pageblocks, memory onlining/offlining would be broken.
[...]
>
> +/**
> + * struct vmem_altmap - pre-allocated storage for vmemmap_populate
> + * @base_pfn: base of the entire dev_pagemap mapping
> + * @reserve: pages mapped, but reserved for driver use (relative to @base)
> + * @free: free pages set aside in the mapping for memmap storage
> + * @align: pages reserved to meet allocation alignments
> + * @alloc: track pages consumed, private to vmemmap_populate()
> + */
> +struct vmem_altmap {
> + unsigned long base_pfn;
> + const unsigned long end_pfn;
> + const unsigned long reserve;
> + unsigned long free;
> + unsigned long align;
> + unsigned long alloc;
> +};
Instead of embedding that, what about conditionally allocating it and
storing a pointer to it in "struct memory_block"?
In the general case as of today, we don't have an altmap.
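For example (untested sketch, names are just placeholders; assuming an
"altmap" pointer gets passed down to where the memory block device is
created):

	/*
	 * Only allocate an altmap copy when the block was added with
	 * MHP_MEMMAP_ON_MEMORY; the common case keeps a NULL pointer.
	 */
	if (altmap) {
		mem->altmap = kmemdup(altmap, sizeof(*altmap), GFP_KERNEL);
		if (!mem->altmap)
			return -ENOMEM;
	}

"struct memory_block" would then only carry a "struct vmem_altmap *altmap"
pointer that stays NULL for ordinary hotplugged memory.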
> +
> struct memory_block {
> unsigned long start_section_nr;
> unsigned long state; /* serialized by the dev->lock */
> @@ -77,11 +94,7 @@ struct memory_block {
> */
> struct zone *zone;
> struct device dev;
> - /*
> - * Number of vmemmap pages. These pages
> - * lay at the beginning of the memory block.
> - */
> - unsigned long nr_vmemmap_pages;
> + struct vmem_altmap altmap;
> struct memory_group *group; /* group (if any) for this block */
> struct list_head group_next; /* next block inside memory group */
> #if defined(CONFIG_MEMORY_FAILURE) && defined(CONFIG_MEMORY_HOTPLUG)
> @@ -147,7 +160,7 @@ static inline int hotplug_memory_notifier(notifier_fn_t fn, int pri)
> extern int register_memory_notifier(struct notifier_block *nb);
> extern void unregister_memory_notifier(struct notifier_block *nb);
> int create_memory_block_devices(unsigned long start, unsigned long size,
[...]
> static int check_cpu_on_node(int nid)
> @@ -2036,9 +2042,8 @@ EXPORT_SYMBOL(try_offline_node);
>
> static int __ref try_remove_memory(u64 start, u64 size)
> {
> - struct vmem_altmap mhp_altmap = {};
> + int ret;
> struct vmem_altmap *altmap = NULL;
> - unsigned long nr_vmemmap_pages;
> int rc = 0, nid = NUMA_NO_NODE;
>
> BUG_ON(check_hotplug_memory_range(start, size));
> @@ -2060,24 +2065,16 @@ static int __ref try_remove_memory(u64 start, u64 size)
> * We only support removing memory added with MHP_MEMMAP_ON_MEMORY in
> * the same granularity it was added - a single memory block.
> */
> +
^ unrelated change?
--
Cheers,
David / dhildenb