From: Wei Yang <richard.weiyang@gmail.com>
To: David Hildenbrand <david@redhat.com>
Cc: Wei Yang <richard.weiyang@gmail.com>,
	mhocko@suse.com, osalvador@suse.de, akpm@linux-foundation.org,
	linux-mm@kvack.org
Subject: Re: [PATCH v2] mm, hotplug: move init_currently_empty_zone() under zone_span_lock protection
Date: Thu, 22 Nov 2018 23:53:37 +0000	[thread overview]
Message-ID: <20181122235337.zgw65u7xnd5hmokb@master> (raw)
In-Reply-To: <8a130cbe-8f1a-420f-9a82-f0905f4fc46d@redhat.com>

On Thu, Nov 22, 2018 at 10:53:31PM +0100, David Hildenbrand wrote:
>On 22.11.18 22:28, Wei Yang wrote:
>> On Thu, Nov 22, 2018 at 04:26:40PM +0100, David Hildenbrand wrote:
>>> On 22.11.18 11:12, Wei Yang wrote:
>>>> During online_pages phase, pgdat->nr_zones will be updated in case this
>>>> zone is empty.
>>>>
>>>> Currently the online_pages phase is protected by the global lock
>>>> mem_hotplug_begin(), which ensures there is no contention during the
>>>> update of nr_zones. But this global lock introduces scalability issues.
>>>>
>>>> This patch is a preparation for removing the global lock during
>>>> online_pages phase. Also this patch changes the documentation of
>>>> node_size_lock to include the protection of nr_zones.
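
(For context, the change under discussion is roughly the following in
move_pfn_range_to_zone(); this is a sketch of the intent, not the exact diff,
with pgdat/zone/start_pfn/nr_pages/flags as in that function:)

	pgdat_resize_lock(pgdat, &flags);
	zone_span_writelock(zone);
	if (zone_is_empty(zone))
		init_currently_empty_zone(zone, start_pfn, nr_pages); /* updates pgdat->nr_zones */
	resize_zone_range(zone, start_pfn, nr_pages);
	zone_span_writeunlock(zone);
	resize_pgdat_range(pgdat, start_pfn, nr_pages);
	pgdat_resize_unlock(pgdat, &flags);
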
>>>
>>> I looked into locking recently, and there is more to it.
>>>
>>> Please read:
>>>
>>> commit dee6da22efac451d361f5224a60be2796d847b51
>>> Author: David Hildenbrand <david@redhat.com>
>>> Date:   Tue Oct 30 15:10:44 2018 -0700
>>>
>>>    memory-hotplug.rst: add some details about locking internals
>>>    
>>>    Let's document the magic a bit, especially why device_hotplug_lock is
>>>    required when adding/removing memory and how it all play together with
>>>    requests to online/offline memory from user space.
>>>
>>> Short summary: Onlining/offlining of memory requires the device_hotplug_lock
>>> as of now.
>>>
>>> mem_hotplug_begin() is just an internal optimization. (we don't want
>>> everybody to take the device lock)
>>>
>> 
>> Hi, David
>> 
>> Thanks for your comment.
>
>My last sentence should have been "we don't want everybody to take the
>device hotplug lock" :) That caused confusion.
>
>> 
>> Hmm... I didn't catch your point.
>> 
>> Related to memory hot-plug, there are (at least) three locks,
>> 
>>   * device_hotplug_lock    (global)
>>   * device lock            (device scope)
>>   * mem_hotplug_lock       (global)
>> 
>> But with two different holding sequences in the two cases:
>> 
>>   * device_online()
>> 
>>     device_hotplug_lock
>>     device_lock
>>     mem_hotplug_lock
>> 
>>   * add_memory_resource()
>> 
>>     device_hotplug_lock
>>     mem_hotplug_lock
>>     device_lock
>>        ^
>>        |
>>        I can't find where this is held in add_memory_resource().
>>        Would you mind giving me a hint?
>> 
>> If my understanding is correct, what is your point?
>> 
>
>The point I was trying to make:
>
>Right now all onlining/offlining/adding/removing is protected by the
>device_hotplug_lock (and that's a good thing, things are fragile enough
>already).
>
>mem_hotplug_lock is used in addition for get_online_mems().
>
>"This patch is a preparation for removing the global lock during
>online_pages phase." - is more like "one global lock".
>

Thanks for the reminder. You are right.
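
(For reference, my understanding is that mem_hotplug_lock is a percpu rwsem,
with get_online_mems()/put_online_mems() on the reader side and
mem_hotplug_begin()/mem_hotplug_done() on the writer side; roughly:)

	/* mm/memory_hotplug.c, simplified */
	void get_online_mems(void)   { percpu_down_read(&mem_hotplug_lock); }
	void put_online_mems(void)   { percpu_up_read(&mem_hotplug_lock); }
	void mem_hotplug_begin(void) { percpu_down_write(&mem_hotplug_lock); }
	void mem_hotplug_done(void)  { percpu_up_write(&mem_hotplug_lock); }

So getting the mem_hotplug_begin()/done() writer out of the way would indeed
help every get_online_mems() caller.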

>> I guess your point is: just removing mem_hotplug_lock is not enough to
>> resolve the scalability issue?
>
>Depends on which scalability issue :)
>
>Getting rid of / removing the impact of mem_hotplug_lock is certainly a
>very good idea. And improves scalability of all callers of
>get_online_mems(). If that is the intention, very good :)
>

Maybe not exactly.

The intention is to get rid of mem_hotplug_begin()/done(), if I am correct.
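
(i.e. today the structure is roughly the following, if I read it right:)

	/* online_pages(), simplified */
	mem_hotplug_begin();
	/*
	 * ... notifiers, move_pfn_range() -> move_pfn_range_to_zone(),
	 * which may init an empty zone and bump pgdat->nr_zones ...
	 */
	mem_hotplug_done();

and the patch is meant to let the nr_zones update rely on zone_span_lock
instead of this global lock.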

>If the intention is to make onlining/offlining more scalable (e.g. in
>parallel or such), then scalability is limited by device_hotplug_lock.
>

I didn't notice this lock.

Still, this is a step-by-step improvement.
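
(To make sure I get the nesting right, my reading of the sysfs-triggered
call chain and lock nesting is roughly the following diagram:)

	/* echo online > /sys/devices/system/memory/memoryX/state */
	lock_device_hotplug_sysfs();          /* device_hotplug_lock      */
	  device_online(dev);
	    device_lock(dev);                 /* per-device lock          */
	      memory_subsys_online();         /* -> online_pages()        */
	        mem_hotplug_begin();          /* mem_hotplug_lock, writer */
	        ...
	        mem_hotplug_done();
	    device_unlock(dev);
	unlock_device_hotplug();

So even without mem_hotplug_lock, parallel onlining would still serialize on
device_hotplug_lock.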

>
>> 
>> Please correct me if I am wrong. :-)
>> 
>
>Guess I was just wondering which scalability issue we are trying to solve :)
>
>-- 
>
>Thanks,
>
>David / dhildenb

-- 
Wei Yang
Help you, Help me


Thread overview: 40+ messages
2018-11-20  1:48 [PATCH] mm, hotplug: protect nr_zones with pgdat_resize_lock() Wei Yang
2018-11-20  7:31 ` Michal Hocko
2018-11-20  7:58   ` osalvador
2018-11-20  8:48     ` Michal Hocko
2018-11-21  2:52     ` Wei Yang
2018-11-21  7:15       ` Michal Hocko
2018-11-22  1:52         ` Wei Yang
2018-11-22  8:39           ` Michal Hocko
2018-11-26  2:28         ` Wei Yang
2018-11-26  8:16           ` Michal Hocko
2018-11-26  9:06             ` Wei Yang
2018-11-26 10:03               ` Michal Hocko
2018-11-27  0:18                 ` Wei Yang
2018-11-27  3:12             ` Wei Yang
2018-11-27 13:16               ` Michal Hocko
2018-11-27 23:56                 ` Wei Yang
2018-11-21  8:24       ` osalvador
2018-11-21  2:44   ` Wei Yang
2018-11-21  7:14     ` Michal Hocko
2018-11-22 10:12 ` [PATCH v2] mm, hotplug: move init_currently_empty_zone() under zone_span_lock protection Wei Yang
2018-11-22 10:15   ` Wei Yang
2018-11-22 10:29     ` Michal Hocko
2018-11-22 14:27       ` Wei Yang
2018-11-22 10:37   ` osalvador
2018-11-22 14:28     ` Wei Yang
2018-11-22 15:26   ` David Hildenbrand
2018-11-22 21:28     ` Wei Yang
2018-11-22 21:53       ` David Hildenbrand
2018-11-22 23:53         ` Wei Yang [this message]
2018-11-23  8:42     ` Michal Hocko
2018-11-23  8:46       ` David Hildenbrand
2018-11-26  1:44         ` Wei Yang
2018-11-26  9:24           ` David Hildenbrand
2018-11-27  0:23             ` Wei Yang
2018-11-30  6:58   ` [PATCH v3] " Wei Yang
2018-11-30  9:30     ` David Hildenbrand
2018-12-01  0:27       ` Wei Yang
2018-12-03 10:09         ` David Hildenbrand
2018-12-03 20:37           ` Wei Yang
2018-12-03 20:50     ` [PATCH v4] " Wei Yang
