From: David Hildenbrand <david@redhat.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: Qian Cai <cai@lca.pw>,
akpm@linux-foundation.org, sergey.senozhatsky.work@gmail.com,
pmladek@suse.com, rostedt@goodmis.org, peterz@infradead.org,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH -next v4] mm/hotplug: silence a lockdep splat with printk()
Date: Fri, 17 Jan 2020 10:42:10 +0100
Message-ID: <521da382-d9b2-8556-d603-5537b030d8fd@redhat.com>
In-Reply-To: <20200117094009.GP19428@dhcp22.suse.cz>

On 17.01.20 10:40, Michal Hocko wrote:
> On Fri 17-01-20 10:25:06, David Hildenbrand wrote:
>> On 17.01.20 09:59, Michal Hocko wrote:
>>> On Fri 17-01-20 09:51:05, David Hildenbrand wrote:
>>>> On 17.01.20 03:21, Qian Cai wrote:
>>> [...]
>>>>> Even though has_unmovable_pages doesn't hold any reference to the
>>>>> returned page, this should be reasonably safe for the purpose of
>>>>> reporting the page (dump_page) because it cannot be hotremoved. The
>>>>
>>>> This is only true in the context of memory unplug, but not in the
>>>> context of is_mem_section_removable()->is_pageblock_removable_nolock().
>>>
>>> Well, the above should hold for that path as well AFAICS. If the page is
>>> unmovable, then a racing hotplug cannot remove it, right? Or do you
>>> consider temporary unmovability to be a problem?
>>
>> Somebody could test /sys/devices/system/memory/memoryX/removable. While
>> the unmovable page is being returned, it could become movable, and
>> offlining+removing could then succeed.
>
> Doesn't this path use the device lock or something? If not, then the new
> code is no more racy than the existing one. Just look at
> is_pageblock_removable_nolock and how it dereferences struct page
> (page_zonenum in page_zone).
>
AFAIK no device lock, no device hotplug lock, no memory hotplug lock. I
think it holds a reference to the device and to the kernfs node. But
AFAIK that does not block offlining/removal of memory, it only delays
when the objects get freed.
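
To make the window concrete, here is a minimal userspace sketch of the
race class being discussed (purely illustrative; the struct page_like,
reader() and offliner() names are made up and this is not the kernel
code): one thread reports an object it holds no reference on, while a
second thread concurrently tears it down - the
dump_page()-after-hotremove hazard in miniature.

/* toctou.c - build with: cc -pthread toctou.c
 * Illustrative only: "page_like" stands in for struct page and
 * "offliner" for the offline + remove_memory path. Not kernel code.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct page_like {
	int movable;			/* stands in for movability state */
};

static struct page_like *_Atomic shared;	/* stands in for the memmap */

static void *reader(void *arg)		/* ~ has_unmovable_pages + dump_page */
{
	struct page_like *p = atomic_load(&shared);

	(void)arg;
	if (p && !p->movable)		/* no reference taken ... */
		printf("unmovable page %p\n", (void *)p); /* ... UAF if freed */
	return NULL;
}

static void *offliner(void *arg)	/* ~ offline + remove_memory */
{
	(void)arg;
	free(atomic_exchange(&shared, NULL));	/* "memmap" gone */
	return NULL;
}

int main(void)
{
	pthread_t a, b;
	struct page_like *p = malloc(sizeof(*p));

	p->movable = 0;
	atomic_store(&shared, p);
	pthread_create(&a, NULL, reader, NULL);
	pthread_create(&b, NULL, offliner, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

Built with -fsanitize=thread, the reader's p->movable access gets
flagged as a race against the free() - the same shape of problem, just
with a struct page and the hotremove path instead.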
--
Thanks,
David / dhildenb