From: Mike Kravetz <mike.kravetz@oracle.com>
To: David Hildenbrand <david@redhat.com>, Peter Xu <peterx@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
Andrew Morton <akpm@linux-foundation.org>,
Arnd Bergmann <arnd@arndb.de>, Michal Hocko <mhocko@suse.com>,
Oscar Salvador <osalvador@suse.de>,
Matthew Wilcox <willy@infradead.org>,
Andrea Arcangeli <aarcange@redhat.com>,
Minchan Kim <minchan@kernel.org>, Jann Horn <jannh@google.com>,
Jason Gunthorpe <jgg@ziepe.ca>,
Dave Hansen <dave.hansen@intel.com>,
Hugh Dickins <hughd@google.com>, Rik van Riel <riel@surriel.com>,
"Michael S . Tsirkin" <mst@redhat.com>,
"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
Vlastimil Babka <vbabka@suse.cz>,
Richard Henderson <rth@twiddle.net>,
Ivan Kokshaysky <ink@jurassic.park.msu.ru>,
Matt Turner <mattst88@gmail.com>,
Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>,
Helge Deller <deller@gmx.de>, Chris Zankel <chris@zankel.net>,
Max Filippov <jcmvbkbc@gmail.com>,
linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org,
linux-parisc@vger.kernel.org, linux-xtensa@linux-xtensa.org,
linux-arch@vger.kernel.org
Subject: Re: [PATCH RFC] mm/madvise: introduce MADV_POPULATE to prefault/prealloc memory
Date: Fri, 19 Feb 2021 11:25:51 -0800 [thread overview]
Message-ID: <15da147c-e440-ee87-c505-a4684a5b29dc@oracle.com> (raw)
In-Reply-To: <4d8e6f55-66a6-d701-6a94-79f5e2b23e46@redhat.com>
On 2/19/21 11:14 AM, David Hildenbrand wrote:
>>> It's interesting to know about commit 1e356fc14be ("mem-prealloc: reduce large
>>> guest start-up and migration time.", 2017-03-14). It seems for speeding up VM
>>> boot, but what I can't understand is why it would cause the delay of hugetlb
>>> accounting - I thought we'd fail even earlier at either fallocate() on the
>>> hugetlb file (when we use /dev/hugepages) or on mmap() of the memfd which
>>> contains the huge pages. See hugetlb_reserve_pages() and its callers. Or did
>>> I miss something?
>>
>> We should fail on mmap() when the reservation happens (unless
>> MAP_NORESERVE is passed) I think.
>>
>>>
>>> I think there's a special case if QEMU fork() with a MAP_PRIVATE hugetlbfs
>>> mapping, that could cause the memory accounting to be delayed until COW happens.
>>
>> That would be kind of weird. I'd assume the reservation gets properly
>> done during fork() - just like for VM_ACCOUNT.
>>
>>> However that's definitely not the case for QEMU since QEMU won't work at all as
>>> late as that point.
>>>
>>> IOW, for hugetlbfs I don't know why we need to populate the pages at all if we
>>> simply want to know "whether we do still have enough space".. And IIUC 2)
>>> above is the major issue you'd like to solve too.
>>
>> To avoid page faults at runtime on access I think. Reservation <=
>> Preallocation.
>
> I just learned that there is more to it: (test done on v5.9)
>
> # echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
> # cat /sys/devices/system/node/node*/meminfo | grep HugePages_
> Node 0 HugePages_Total: 512
> Node 0 HugePages_Free: 512
> Node 0 HugePages_Surp: 0
> Node 1 HugePages_Total: 0
> Node 1 HugePages_Free: 0
> Node 1 HugePages_Surp: 0
> # cat /proc/meminfo | grep HugePages_
> HugePages_Total: 512
> HugePages_Free: 512
> HugePages_Rsvd: 0
> HugePages_Surp: 0
>
> # /usr/libexec/qemu-kvm -m 1G -smp 1 -object memory-backend-memfd,id=mem0,size=1G,hugetlb=on,hugetlbsize=2M,policy=bind,host-nodes=0 -numa node,nodeid=0,memdev=mem0 -hda Fedora-Cloud-Base-Rawhide-20201004.n.1.x86_64.qcow2 -nographic
> -> works just fine
>
> # /usr/libexec/qemu-kvm -m 1G -smp 1 -object memory-backend-memfd,id=mem0,size=1G,hugetlb=on,hugetlbsize=2M,policy=bind,host-nodes=1 -numa node,nodeid=0,memdev=mem0 -hda Fedora-Cloud-Base-Rawhide-20201004.n.1.x86_64.qcow2 -nographic
> -> Does not fail nicely but crashes!
>
>
> See https://bugzilla.redhat.com/show_bug.cgi?id=1686261 for something similar, however, it no longer applies like that on more recent kernels.
>
> Hugetlbfs reservations don't always protect you (especially with NUMA) - that's why e.g., libvirt always tells QEMU to prealloc.
>
> I think the "issue" is that the reservation happens on mmap(). mbind() runs afterwards. Preallocation saves you from that.
>
> I suspect something similar will happen with anonymous memory with mbind() even if we reserved swap space. Did not test yet, though.
>
Sorry for jumping in late ... the hugetlb keyword just hit my mail filters :)
Yes, it is true that hugetlb reservations are not NUMA aware. So, even if
pages are reserved at mmap time, one could still SIGBUS if a fault is
restricted to a node with insufficient pages.
I looked into this some years ago, and there really is not a good way to
make hugetlb reservations NUMA aware. Preallocation, or on-demand
populating as proposed here, is a way around the issue.
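To make the ordering concrete, here is a rough, untested userspace sketch
(link with -lnuma): the hugetlb reservation is charged at mmap() time, the
node binding only applies afterwards, so lazy faults can still SIGBUS on a
node that has run out of huge pages. The madvise() mentioned in the last
comment refers to the MADV_POPULATE proposed in this series, not to
anything in released headers.

#define _GNU_SOURCE
#include <sys/mman.h>
#include <numaif.h>	/* mbind(), MPOL_BIND - link with -lnuma */
#include <stdio.h>
#include <string.h>

int main(void)
{
	size_t size = 1UL << 30;		/* 1 GiB backed by 2 MiB huge pages */
	unsigned long nodemask = 1UL << 1;	/* restrict to node 1 */

	/* The hugetlb reservation is taken here - it is not NUMA aware. */
	char *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
		       MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* The node binding only applies after the reservation was charged. */
	if (mbind(p, size, MPOL_BIND, &nodemask, sizeof(nodemask) * 8, 0)) {
		perror("mbind");
		return 1;
	}

	/*
	 * Faulting lazily can SIGBUS much later if node 1 has too few huge
	 * pages, even though the global reservation succeeded.  Touching
	 * everything up front moves that failure to setup time; the proposed
	 * madvise(p, size, MADV_POPULATE) would do the same while reporting
	 * an error instead of crashing.
	 */
	memset(p, 0, size);
	return 0;
}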
--
Mike Kravetz