From: Wei Yang <richard.weiyang@gmail.com>
To: Mike Rapoport <rppt@linux.ibm.com>
Cc: Wei Yang <richard.weiyang@gmail.com>,
	david@redhat.com, mhocko@suse.com, osalvador@suse.de,
	akpm@linux-foundation.org, linux-doc@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 2/2] core-api/memory-hotplug.rst: divide Locking Internal section by different locks
Date: Wed, 5 Dec 2018 09:24:26 +0000	[thread overview]
Message-ID: <20181205092426.6i7rrhcackavpdys@master> (raw)
In-Reply-To: <20181205084044.GB19181@rapoport-lnx>

On Wed, Dec 05, 2018 at 10:40:45AM +0200, Mike Rapoport wrote:
>On Wed, Dec 05, 2018 at 10:34:26AM +0800, Wei Yang wrote:
>> Currently locking for memory hotplug is a little complicated.
>> 
>> Generally speaking, we leverage the two global locks:
>> 
>>   * device_hotplug_lock
>>   * mem_hotplug_lock
>> 
>> to serialise the process.
>> 
>> In the long term, though, we would like to have more fine-grained locks
>> to provide better scalability.
>> 
>> This patch divides the Locking Internal section based on these two global
>> locks to help readers understand it. It also adds some new findings to
>> enrich it.
>> 
>> [David: words arrangement]
>> 
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> ---
>>  Documentation/core-api/memory-hotplug.rst | 27 ++++++++++++++++++++++++---
>>  1 file changed, 24 insertions(+), 3 deletions(-)
>> 
>> diff --git a/Documentation/core-api/memory-hotplug.rst b/Documentation/core-api/memory-hotplug.rst
>> index de7467e48067..95662b283328 100644
>> --- a/Documentation/core-api/memory-hotplug.rst
>> +++ b/Documentation/core-api/memory-hotplug.rst
>> @@ -89,6 +89,20 @@ NOTIFY_STOP stops further processing of the notification queue.
>>  Locking Internals
>>  =================
>> 
>> +There are three locks involved in memory-hotplug, two global lock and one local
>
>typo:                                                          ^locks
>

Thanks :-)

>> +lock:
>> +
>> +- device_hotplug_lock
>> +- mem_hotplug_lock
>> +- device_lock
>> +
>> +Currently, they are twisted together for all kinds of reasons. The following
>> +part is divided into device_hotplug_lock and mem_hotplug_lock parts
>> +respectively to describe those tricky situations.
>> +
>> +device_hotplug_lock
>> +---------------------
>> +
>>  When adding/removing memory that uses memory block devices (i.e. ordinary RAM),
>>  the device_hotplug_lock should be held to:
>> 
>> @@ -111,13 +125,20 @@ As the device is visible to user space before taking the device_lock(), this
>>  can result in a lock inversion.
>> 
>>  onlining/offlining of memory should be done via device_online()/
>> -device_offline() - to make sure it is properly synchronized to actions
>> -via sysfs. Holding device_hotplug_lock is advised (to e.g. protect online_type)
>> +device_offline() - to make sure it is properly synchronized to actions via
>> +sysfs. Even mem_hotplug_lock is used to protect the process, because of the
>
>I think it should be "Even if mem_hotplug_lock ..."
>

Ah, my poor English, will fix it in next version. :-)

>> +lock inversion described above, holding device_hotplug_lock is still advised
>> +(to e.g. protect online_type)
>> +
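To double-check my own reading of the paragraph above, here is a rough
sketch (illustration only, not part of the patch or the tree; the function
name is made up, and "mem" is assumed to be a struct memory_block the
caller already looked up) of onlining one block the recommended way:

#include <linux/device.h>
#include <linux/memory.h>
#include <linux/memory_hotplug.h>

/*
 * Sketch only: online one memory block through the driver core, as the
 * text above advises.  device_hotplug_lock keeps us ordered against the
 * sysfs paths and protects online_type; device_online() then takes
 * device_lock() and, further down, mem_hotplug_lock in write mode.
 */
static int example_online_block(struct memory_block *mem)
{
	int ret;

	lock_device_hotplug();
	mem->online_type = MMOP_ONLINE_KEEP;	/* protected by device_hotplug_lock */
	ret = device_online(&mem->dev);
	unlock_device_hotplug();

	return ret;
}

Please correct me if that sketch is off.
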
>> +mem_hotplug_lock
>> +---------------------
>> 
>>  When adding/removing/onlining/offlining memory or adding/removing
>>  heterogeneous/device memory, we should always hold the mem_hotplug_lock in
>>  write mode to serialise memory hotplug (e.g. access to global/zone
>> -variables).
>> +variables). Currently, we take advantage of this to serialise sparsemem's
>> +mem_section handling in sparse_add_one_section() and
>> +sparse_remove_one_section().
>> 
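As an aside for readers, I think the write-mode usage above boils down to
the pattern below. This is illustration only: mem_hotplug_begin() and
mem_hotplug_done() are the existing helpers, while example_hot_add() and
my_arch_add_sections() are made-up names standing in for the real
add/remove paths.

#include <linux/memory_hotplug.h>

/* Stand-in for the arch-specific work; made-up name for illustration.
 * This is roughly where sparse_add_one_section() and friends end up. */
static int my_arch_add_sections(int nid, unsigned long start_pfn,
				unsigned long nr_pages)
{
	return 0;
}

/*
 * Sketch only: every add/remove path brackets its global-state updates
 * (node/zone counters, the sparsemem mem_section array, ...) like this,
 * so the section handling always runs fully serialised.
 */
static int example_hot_add(int nid, unsigned long start_pfn,
			   unsigned long nr_pages)
{
	int ret;

	mem_hotplug_begin();		/* mem_hotplug_lock, write mode */
	ret = my_arch_add_sections(nid, start_pfn, nr_pages);
	mem_hotplug_done();		/* release the write side */

	return ret;
}
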
>>  In addition, mem_hotplug_lock (in contrast to device_hotplug_lock) in read
>>  mode allows for a quite efficient get_online_mems/put_online_mems
>> -- 
>> 2.15.1
>> 
>
>-- 
>Sincerely yours,
>Mike.
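
For completeness, the read side mentioned in the last quoted paragraph
looks like this from a user's point of view. Again just a sketch of my
understanding: get_online_mems()/put_online_mems() are the existing
helpers, the function name and the walker body are stand-ins.

#include <linux/memory_hotplug.h>

/*
 * Sketch only: a reader that must not race with hot-add/hot-remove pins
 * the current set of sections with the cheap read-mode helpers.
 */
static void example_walk_online_memory(void)
{
	get_online_mems();	/* mem_hotplug_lock, read mode */

	/*
	 * Memory cannot be hot-added or hot-removed while we are here,
	 * so pfn/section lookups stay stable.
	 */

	put_online_mems();	/* let hotplug proceed again */
}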

-- 
Wei Yang
Help you, Help me
