linux-mm.kvack.org archive mirror
From: David Hildenbrand <david@redhat.com>
To: Oscar Salvador <osalvador@suse.de>
Cc: akpm@linux-foundation.org, mhocko@suse.com,
	dan.j.williams@intel.com, Jonathan.Cameron@huawei.com,
	anshuman.khandual@arm.com, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 0/4] mm,memory_hotplug: allocate memmap from hotadded memory
Date: Fri, 29 Mar 2019 10:01:26 +0100	[thread overview]
Message-ID: <6c58b0ef-7a9a-491d-7286-7642f9d4c7bb@redhat.com> (raw)
In-Reply-To: <23dcfb4a-339b-dcaf-c037-331f82fdef5a@redhat.com>

On 29.03.19 09:56, David Hildenbrand wrote:
> On 29.03.19 09:45, Oscar Salvador wrote:
>> On Thu, Mar 28, 2019 at 04:31:44PM +0100, David Hildenbrand wrote:
>>> Correct me if I am wrong. I think I was confused - vmemmap data is still
>>> allocated *per memory block*, not for the whole added memory, correct?
>>
>> No, vmemmap data is allocated per memory resource added.
>> In the case of a DIMM, that is the DIMM; in the case of a qemu memory-device,
>> it is that memory-device.
>> That assumes ACPI does not split the DIMM/memory-device into several memory
>> resources.
>> If that happens, then acpi_memory_enable_device() calls __add_memory() for
>> every memory resource, which means that the vmemmap data will be allocated
>> per memory resource.
>> I have not seen this happen, though, and I am not sure under which
>> circumstances it can happen (I have to study the ACPI code a bit more).
>>
>> The problem with allocating vmemmap data per memblock is fragmentation.
>> Let us say you do the following:
>>
>> * memblock granularity 128M
>>
>> (qemu) object_add memory-backend-ram,id=ram0,size=256M
>> (qemu) device_add pc-dimm,id=dimm0,memdev=ram0,node=1
>>
>> This will create two memblocks (2 sections), and if we allocate the vmemmap
>> data for each section within that section (memblock), you only get 126M of
>> contiguous memory.
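
(To make the numbers concrete, a back-of-the-envelope check - assuming 4K
base pages and a 64-byte struct page, the usual x86_64 layout; that is an
assumption on my side, not something stated above:

    /* vmemmap overhead of a single 128M section */
    #include <stdio.h>

    int main(void)
    {
            unsigned long section = 128UL << 20;    /* 128M memblock/section */
            unsigned long pages   = section / 4096; /* 32768 base pages */
            unsigned long memmap  = pages * 64;     /* 2M of struct pages */

            printf("vmemmap: %luM, usable: %luM\n",
                   memmap >> 20, (section - memmap) >> 20);
            return 0;
    }

which prints "vmemmap: 2M, usable: 126M" - the usual ~1.5% memmap overhead,
but taken out of every section instead of from one contiguous chunk.)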
> 
> Oh okay, so it actually works the way I guessed it would.
> 
> While this makes total sense, I'll have to look at how it is currently
> handled, i.e. whether this is a change. I somewhat remember that
> delayed struct page initialization would initialize the vmemmap per
> section, not per memory resource.
> 
> But as I work on 10 things differently, my mind sometimes seems to
> forget stuff in order to replace it with random nonsense. Will look into
> the details to not have to ask too many dumb questions.

s/differently/concurrently/

See, nonsense ;)

-- 

Thanks,

David / dhildenb



Thread overview: 42+ messages
2019-03-28 13:43 Oscar Salvador
2019-03-28 13:43 ` [PATCH 1/4] mm, memory_hotplug: cleanup memory offline path Oscar Salvador
2019-04-03  8:43   ` Michal Hocko
2019-03-28 13:43 ` [PATCH 2/4] mm, memory_hotplug: provide a more generic restrictions for memory hotplug Oscar Salvador
2019-04-03  8:46   ` Michal Hocko
2019-04-03  8:48     ` David Hildenbrand
2019-04-04 10:04     ` Oscar Salvador
2019-04-04 10:06       ` David Hildenbrand
2019-04-04 10:31       ` Michal Hocko
2019-04-04 12:04         ` Oscar Salvador
2019-03-28 13:43 ` [PATCH 3/4] mm, memory_hotplug: allocate memmap from the added memory range for sparse-vmemmap Oscar Salvador
2019-03-28 13:43 ` [PATCH 4/4] mm, sparse: rename kmalloc_section_memmap, __kfree_section_memmap Oscar Salvador
2019-03-28 15:09 ` [PATCH 0/4] mm,memory_hotplug: allocate memmap from hotadded memory David Hildenbrand
2019-03-28 15:31   ` David Hildenbrand
2019-03-29  8:45     ` Oscar Salvador
2019-03-29  8:56       ` David Hildenbrand
2019-03-29  9:01         ` David Hildenbrand [this message]
2019-03-29  9:20         ` Oscar Salvador
2019-03-29 13:42       ` Michal Hocko
2019-04-01  7:59         ` Oscar Salvador
2019-04-01 11:53           ` Michal Hocko
2019-04-02  8:28             ` Oscar Salvador
2019-04-02  8:39               ` David Hildenbrand
2019-04-02 12:48               ` Michal Hocko
2019-04-03  8:01                 ` Oscar Salvador
2019-04-03  8:12                   ` Michal Hocko
2019-04-03  8:17                     ` David Hildenbrand
2019-04-03  8:37                       ` Michal Hocko
2019-04-03  8:41                         ` David Hildenbrand
2019-04-03  8:49                           ` Michal Hocko
2019-04-03  8:53                             ` David Hildenbrand
2019-04-03  8:50                           ` Oscar Salvador
2019-04-03  8:54                             ` David Hildenbrand
2019-04-03  9:40                         ` Oscar Salvador
2019-04-03 10:46                           ` Michal Hocko
2019-04-04 10:25                           ` Vlastimil Babka
2019-04-03  8:34                     ` Oscar Salvador
2019-04-03  8:36                       ` David Hildenbrand
2019-03-29  8:30   ` Oscar Salvador
2019-03-29  8:51     ` David Hildenbrand
2019-03-29 22:23 ` John Hubbard
2019-04-01  7:52   ` Oscar Salvador

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=6c58b0ef-7a9a-491d-7286-7642f9d4c7bb@redhat.com \
    --to=david@redhat.com \
    --cc=Jonathan.Cameron@huawei.com \
    --cc=akpm@linux-foundation.org \
    --cc=anshuman.khandual@arm.com \
    --cc=dan.j.williams@intel.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=mhocko@suse.com \
    --cc=osalvador@suse.de \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.