From: David Hildenbrand <david@redhat.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: Alistair Popple <apopple@nvidia.com>,
linux-mm@kvack.org, cgroups@vger.kernel.org,
linux-kernel@vger.kernel.org, jhubbard@nvidia.com,
tjmercier@google.com, hannes@cmpxchg.org, surenb@google.com,
mkoutny@suse.com, daniel@ffwll.ch
Subject: Re: [RFC PATCH 00/19] mm: Introduce a cgroup to limit the amount of locked and pinned memory
Date: Tue, 31 Jan 2023 15:15:49 +0100 [thread overview]
Message-ID: <1a2417bc-f3ac-3e63-a930-bdefaab2578e@redhat.com> (raw)
In-Reply-To: <Y9khYwunmC/xdXT9@nvidia.com>
On 31.01.23 15:10, Jason Gunthorpe wrote:
> On Tue, Jan 31, 2023 at 03:06:10PM +0100, David Hildenbrand wrote:
>> On 31.01.23 15:03, Jason Gunthorpe wrote:
>>> On Tue, Jan 31, 2023 at 02:57:20PM +0100, David Hildenbrand wrote:
>>>
>>>>> I'm excited by this series, thanks for making it.
>>>>>
>>>>> The pin accounting has been a long standing problem and cgroups will
>>>>> really help!
>>>>
>>>> Indeed. I'm curious how GUP-fast, pinning the same page multiple times, and
>>>> pinning subpages of larger folios are handled :)
>>>
>>> The same as today. The pinning is done based on the result from GUP,
>>> and we charge every returned struct page.
>>>
>>> So duplicates are counted multiple times, and folios are ignored.
>>>
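To make that concrete: a minimal sketch of what a driver's pin path looks
like under this model, where every page GUP returns is charged. vm_account /
vm_account_pinned() are the names from this series, but the exact signatures
below are my assumption, not copied from the patches:

/* Sketch only -- assumes <linux/mm.h> plus the vm_account header from
 * this RFC; error handling trimmed for brevity. */
static long pin_and_charge(struct vm_account *acct, unsigned long start,
			   int npages, struct page **pages)
{
	long pinned = pin_user_pages_fast(start, npages, FOLL_WRITE, pages);

	if (pinned <= 0)
		return pinned;

	/* Every returned struct page is charged, duplicates included. */
	if (vm_account_pinned(acct, pinned)) {
		unpin_user_pages(pages, pinned);
		return -ENOMEM;
	}
	return pinned;
}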
>>> Removing duplicate charges would be costly; it would require storage
>>> to keep track of how many times individual pages have been charged to
>>> each cgroup (e.g. an xarray of integers indexed by PFN in each cgroup).
>>>
>>> It doesn't seem worth the cost, IMHO.
>>>
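Purely for illustration, such per-cgroup tracking might look roughly like
the following (hypothetical helper, not proposed code; locking and the
uncharge path are omitted):

#include <linux/xarray.h>
#include <linux/mm.h>

/*
 * Hypothetical: one xarray per cgroup, indexed by PFN, holding the
 * number of times each page has been charged.  Returns 1 if this is
 * the first charge for the PFN (count it against the limit), 0 for a
 * duplicate, or a negative errno.
 */
static int pincg_charge_page(struct xarray *pinned_pfns, struct page *page)
{
	unsigned long pfn = page_to_pfn(page);
	void *old = xa_load(pinned_pfns, pfn);
	unsigned long count = old ? xa_to_value(old) : 0;
	void *ret;

	ret = xa_store(pinned_pfns, pfn, xa_mk_value(count + 1), GFP_KERNEL);
	if (xa_is_err(ret))
		return xa_err(ret);

	return count == 0 ? 1 : 0;
}

Even this simplified version adds an xarray lookup and store for every
pinned page, plus memory for the entries themselves, which is the cost
referred to above.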
>>> We've made a lot of investment now with iommufd to remove the most
>>> annoying sources of duplicated pins, so it is much less of a problem
>>> in the qemu context at least.
>>
>> Wasn't there a discussion regarding using vfio+io_uring+rdma+$whatever on
>> a VM and requiring multiple times the VM size as the memlock limit?
>
> Yes, but iommufd gives us some more options to mitigate this.
>
> e.g. it makes some logical sense to point RDMA at the iommufd page
> table that is already pinned when trying to DMA from guest memory; in
> that case it could ride on the existing pin.
Right, I suspect one issue is that the address space layout for the
RDMA device might be completely different. But I'm no expert on IOMMUs
at all :)

I do understand that at least multiple VFIO containers could benefit
from only pinning once (IIUC that might have been an issue?).
>
>> Would it be the same now, just that we need multiple times the pin
>> limit?
>
> Yes
Okay, thanks.

It's all still a big improvement, because I also asked for TDX
restrictedmem to be accounted somehow as unmovable.
--
Thanks,
David / dhildenb
Thread overview: 56+ messages
2023-01-24 5:42 Alistair Popple
2023-01-24 5:42 ` [RFC PATCH 01/19] mm: Introduce vm_account Alistair Popple
2023-01-24 6:29 ` Christoph Hellwig
2023-01-24 14:32 ` Jason Gunthorpe
2023-01-30 11:36 ` Alistair Popple
2023-01-31 14:00 ` David Hildenbrand
2023-01-24 5:42 ` [RFC PATCH 02/19] drivers/vhost: Convert to use vm_account Alistair Popple
2023-01-24 5:55 ` Michael S. Tsirkin
2023-01-30 10:43 ` Alistair Popple
2023-01-24 14:34 ` Jason Gunthorpe
2023-01-24 5:42 ` [RFC PATCH 03/19] drivers/vdpa: Convert vdpa to use the new vm_structure Alistair Popple
2023-01-24 14:35 ` Jason Gunthorpe
2023-01-24 5:42 ` [RFC PATCH 04/19] infiniband/umem: Convert to use vm_account Alistair Popple
2023-01-24 5:42 ` [RFC PATCH 05/19] RMDA/siw: " Alistair Popple
2023-01-24 14:37 ` Jason Gunthorpe
2023-01-24 15:22 ` Bernard Metzler
2023-01-24 15:56 ` Bernard Metzler
2023-01-30 11:34 ` Alistair Popple
2023-01-30 13:27 ` Bernard Metzler
2023-01-24 5:42 ` [RFC PATCH 06/19] RDMA/usnic: convert " Alistair Popple
2023-01-24 14:41 ` Jason Gunthorpe
2023-01-30 11:10 ` Alistair Popple
2023-01-24 5:42 ` [RFC PATCH 07/19] vfio/type1: Charge pinned pages to pinned_vm instead of locked_vm Alistair Popple
2023-01-24 5:42 ` [RFC PATCH 08/19] vfio/spapr_tce: Convert accounting to pinned_vm Alistair Popple
2023-01-24 5:42 ` [RFC PATCH 09/19] io_uring: convert to use vm_account Alistair Popple
2023-01-24 14:44 ` Jason Gunthorpe
2023-01-30 11:12 ` Alistair Popple
2023-01-30 13:21 ` Jason Gunthorpe
2023-01-24 5:42 ` [RFC PATCH 10/19] net: skb: Switch to using vm_account Alistair Popple
2023-01-24 14:51 ` Jason Gunthorpe
2023-01-30 11:17 ` Alistair Popple
2023-02-06 4:36 ` Alistair Popple
2023-02-06 13:14 ` Jason Gunthorpe
2023-01-24 5:42 ` [RFC PATCH 11/19] xdp: convert to use vm_account Alistair Popple
2023-01-24 5:42 ` [RFC PATCH 12/19] kvm/book3s_64_vio: Convert account_locked_vm() to vm_account_pinned() Alistair Popple
2023-01-24 5:42 ` [RFC PATCH 13/19] fpga: dfl: afu: convert to use vm_account Alistair Popple
2023-01-24 5:42 ` [RFC PATCH 14/19] mm: Introduce a cgroup for pinned memory Alistair Popple
2023-01-27 21:44 ` Tejun Heo
2023-01-30 13:20 ` Jason Gunthorpe
2023-01-24 5:42 ` [RFC PATCH 15/19] mm/util: Extend vm_account to charge pages against the pin cgroup Alistair Popple
2023-01-24 5:42 ` [RFC PATCH 16/19] mm/util: Refactor account_locked_vm Alistair Popple
2023-01-24 5:42 ` [RFC PATCH 17/19] mm: Convert mmap and mlock to use account_locked_vm Alistair Popple
2023-01-24 5:42 ` [RFC PATCH 18/19] mm/mmap: Charge locked memory to pins cgroup Alistair Popple
2023-01-24 5:42 ` [RFC PATCH 19/19] selftests/vm: Add pins-cgroup selftest for mlock/mmap Alistair Popple
2023-01-24 18:26 ` [RFC PATCH 00/19] mm: Introduce a cgroup to limit the amount of locked and pinned memory Yosry Ahmed
2023-01-31 0:54 ` Alistair Popple
2023-01-31 5:14 ` Yosry Ahmed
2023-01-31 11:22 ` Alistair Popple
2023-01-31 19:49 ` Yosry Ahmed
2023-01-24 20:12 ` Jason Gunthorpe
2023-01-31 13:57 ` David Hildenbrand
2023-01-31 14:03 ` Jason Gunthorpe
2023-01-31 14:06 ` David Hildenbrand
2023-01-31 14:10 ` Jason Gunthorpe
2023-01-31 14:15 ` David Hildenbrand [this message]
2023-01-31 14:21 ` Jason Gunthorpe