From: David Hildenbrand <david@redhat.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: Khalid Aziz <khalid.aziz@oracle.com>,
"Longpeng (Mike,
Cloud Infrastructure Service Product Dept.)"
<longpeng2@huawei.com>,
Steven Sistare <steven.sistare@oracle.com>,
Anthony Yznaga <anthony.yznaga@oracle.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"Gonglei (Arei)" <arei.gonglei@huawei.com>
Subject: Re: [RFC PATCH 0/5] madvise MADV_DOEXEC
Date: Mon, 16 Aug 2021 16:10:28 +0200
Message-ID: <40bad572-501d-e4cf-80e3-9a8daa98dc7e@redhat.com>
In-Reply-To: <YRpo4EAJSkY7hI7Q@casper.infradead.org>
>>> Until recently, the CPUs only had 4 1GB TLB entries. I'm sure we
>>> still have customers using that generation of CPUs. 2MB pages perform
>>> better than 1GB pages on the previous generation of hardware, and I
>>> haven't seen numbers for the next generation yet.
>>
>> I read that somewhere else before, yet we have heavy 1 GiB page users,
>> especially in the context of VMs and DPDK.
>
> I wonder if those users actually benchmarked. Or whether the memory
> savings worked out so well for them that the loss of TLB performance
> didn't matter.
These applications are extremely performance-sensitive (i.e., RT
workloads), which is why I'm wondering. I recall that they most
certainly use more than 4 GiB of memory in real applications.
E.g., the doc [1] even has a note that "For 64-bit applications, it is
recommended to use 1 GB hugepages if the platform supports them."
[1] https://doc.dpdk.org/guides-16.04/linux_gsg/sys_reqs.html
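
For reference, a minimal sketch (my own, not from the thread) of how
such users typically get a 1 GiB mapping, assuming the kernel was booted
with 1 GiB pages reserved (e.g. hugepagesz=1G hugepages=4); DPDK
effectively does the same via hugetlbfs:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#ifndef MAP_HUGE_SHIFT
#define MAP_HUGE_SHIFT 26
#endif
#ifndef MAP_HUGE_1GB
#define MAP_HUGE_1GB (30 << MAP_HUGE_SHIFT)
#endif

int main(void)
{
	size_t len = 1UL << 30; /* one 1 GiB huge page */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_1GB,
		       -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/* The first write faults in the whole 1 GiB page. */
	*(volatile char *)p = 1;
	munmap(p, len);
	return 0;
}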
>
>> So, it only works for hugetlbfs when uffd is not in place (-> no
>> per-process data in the page table) and we have actual shared mappings.
>> When unsharing, we zap the PUD entry, which will result in allocating a
>> per-process page table on the next fault.
>
> I think uffd was a huge mistake. It should have been a filesystem
> instead of a hack on the side of anonymous memory.
Yes, it was. Just look at all the special-casing, for example,
even in mm/pagewalk.c.
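
To illustrate with a (simplified, error handling mostly omitted)
userspace sketch why uffd gets in the way, assuming a hugetlbfs mount at
/dev/hugepages and a 2 MiB default huge page size: as soon as a process
registers the range with userfaultfd, there is per-process state
attached to the mapping, so the shared page tables cannot be used for
that process:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/userfaultfd.h>

int main(void)
{
	size_t len = 2UL << 20; /* one 2 MiB huge page */
	int fd = open("/dev/hugepages/uffd-demo", O_CREAT | O_RDWR, 0600);
	void *p;
	int uffd;

	ftruncate(fd, len);
	p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

	uffd = syscall(SYS_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	struct uffdio_api api = { .api = UFFD_API };
	ioctl(uffd, UFFDIO_API, &api);

	struct uffdio_register reg = {
		.range = { .start = (unsigned long)p, .len = len },
		.mode = UFFDIO_REGISTER_MODE_MISSING,
	};
	if (ioctl(uffd, UFFDIO_REGISTER, &reg) == -1)
		perror("UFFDIO_REGISTER");
	/*
	 * From here on, missing faults in [p, p + len) are reported to
	 * this process's uffd; that state does not transfer to other
	 * processes mapping the same file.
	 */
	return 0;
}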
>
>> I will rephrase my previous statement "hugetlbfs just doesn't raise these
>> problems because we are special casing it all over the place already". For
>> example, not allowing such pages to be swapped. Disallowing MADV_DONTNEED.
>> Special hugetlbfs locking.
>
> Sure, that's why I want to drag this feature out of "oh this is a
> hugetlb special case" and into "this is something Linux supports".
I would have understood the move to optimize SHMEM internally - similar
to how we seem to optimize hugetlbfs SHMEM internally right now
(although sharing page tables for shmem can still be quite tricky).

I did not follow why we have to play games with MAP_PRIVATE, have
private anonymous pages shared between processes without COW, introduce
new syscalls, etc.
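
To spell out what I mean, a purely hypothetical sketch of the proposed
flow as I read the RFC (the MADV_DOEXEC value and the binary path below
are placeholders; the real definition is in the patches, and the exec'd
binary additionally has to opt in to accepting preserved mappings):

#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MADV_DOEXEC
#define MADV_DOEXEC 32 /* placeholder, see the RFC for the real value */
#endif

int main(void)
{
	size_t len = 2UL << 20;
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_SHARED | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;

	/* Mark the range to be preserved across exec ... */
	if (madvise(p, len, MADV_DOEXEC) == -1) {
		perror("madvise(MADV_DOEXEC)");
		return 1;
	}
	/*
	 * ... and exec the new image, which is supposed to find the
	 * mapping intact at the same address.
	 */
	execl("/path/to/new-binary", "new-binary", (char *)NULL);
	return 1;
}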
--
Thanks,
David / dhildenb