linux-mm.kvack.org archive mirror
From: David Hildenbrand <david@redhat.com>
To: Michal Hocko <mhocko@suse.com>
Cc: Dan Williams <dan.j.williams@intel.com>,
	linux-mm@kvack.org, LKML <linux-kernel@vger.kernel.org>,
	Oscar Salvador <osalvador@suse.de>
Subject: Re: uninitialized pmem struct pages
Date: Mon, 4 Jan 2021 16:15:23 +0100	[thread overview]
Message-ID: <26db2c3e-10c7-c6e3-23f7-21eb5101b31a@redhat.com> (raw)
In-Reply-To: <20210104151005.GK13207@dhcp22.suse.cz>

On 04.01.21 16:10, Michal Hocko wrote:
> On Mon 04-01-21 15:51:35, David Hildenbrand wrote:
>> On 04.01.21 15:26, Michal Hocko wrote:
>>> On Mon 04-01-21 11:45:39, David Hildenbrand wrote:
> [....]
>>>> One instance where this is still an issue is
>>>> mm/memory-failure.c:memory_failure() and
>>>> mm/memory-failure.c:soft_offline_page(). I thought for a while about
>>>> "fixing" these, but to me it felt like fixing pfn_to_online_page() is
>>>> actually the right approach.
>>>>
>>>> But worse, before ZONE_DEVICE hot-add
>>>> 1. The whole section memmap does already exist (early sections always
>>>> have a full memmap for the whole section)
>>>> 2. The whole section memmap is initialized (although eventually with
>>>> dummy node/zone 0/0 for memory holes until that part is fixed) and might
>>>> be accessed by pfn walkers.
>>>>
>>>> So when hot-adding ZONE_DEVICE we are modifying already existing and
>>>> visible memmaps. Bad.
>>>
>>> Could you elaborate please?
>>
>> Simplistic example: Assume you have a VM with 64MB on x86-64.
>>
>> We need exactly one memory section (-> one memory block device). We
>> allocate the memmap for a full section - an "early section". So we have
>> a memmap for 128MB while only 64MB is actually in use; the other 64MB is
>> also initialized (like a memory hole). pfn_to_online_page() would return
>> a valid struct page for the whole section memmap.
>>
>> The remaining 64MB can later be used for hot-adding ZONE_DEVICE memory,
>> essentially re-initializing that part of the already-existing memmap.
>>
>> See pfn_valid():
>>
>> /*
>>  * Traditionally early sections always returned pfn_valid() for
>>  * the entire section-sized span.
>>  */
>> return early_section(ms) || pfn_section_valid(ms, pfn);
>>
>>
>> Depending on the memory layout of the system, a pfn walker might just be
>> about to stumble over this range getting re-initialized.
> 
> Right. But as long as pfn walkers are not synchronized with the memory
> hotplug this is a general problem with any struct page. Whether it
> belongs to pmem or a regular memory, no?

Yes, however in this case even the memory hotplug lock does not help.
But yes, these are related issues.

> 
>>>> 2. Deferred init of ZONE_DEVICE ranges
>>>>
>>>> memmap_init_zone_device() runs after the ZONE_DEVICE zone was resized
>>>> and outside the memhp lock. I did not check whether the use of
>>>> get_dev_pagemap() makes sure that memmap_init_zone_device() in
>>>> pagemap_range() has actually completed. I don't think it does.
>>>
>>> So a pfn walker can see an uninitialized struct page for a while, right?
>>>
>>> The problem that I have encountered is that some zone device pages are
>>> not initialized at all. That sounds like a different issue from those 2
>>> above. I am having a hard time tracking down what kind of pages those
>>> are and why we cannot initialize their zone/node and make them reserved
>>> at least.
>>
>> And you are sure that these are in fact ZONE_DEVICE pages? Not memory
>> holes e.g., tackled by
> 
> Well, the physical address matches the pmem range so I believe this is
> the case.
> 
> [...]
>> However, I do remember a discussion regarding "reserved altmap space"
>> ZONE_DEVICE ranges, and whether to initialize them or leave them
>> uninitialized. See comment in
>>
>> commit 77e080e7680e1e615587352f70c87b9e98126d03
>> Author: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
>> Date:   Fri Oct 18 20:19:39 2019 -0700
>>
>>     mm/memunmap: don't access uninitialized memmap in memunmap_pages()
> 
> yes, the reserved altmap space sounds like it might be it.

[...]

> Would it be possible to iterate over the reserved space and initialize
> Node/zones at least?

Right, I was confused by the terminology. We actually initialize the
pages used for the memmap in
move_pfn_range_to_zone()->memmap_init_zone(). But we seem to exclude the
"reserved space" - I think for good reason.

I think the issue is that this "reserved space" might actually get
overwritten by something else later, as it won't be used as a memmap,
but just to store "anything".

Do the physical addresses you see fall into the same section as boot
memory? Or what's around these addresses?


-- 
Thanks,

David / dhildenb


