From: Tejun Heo <tj@kernel.org>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: Michal Hocko <mhocko@suse.com>,
	Yosry Ahmed <yosryahmed@google.com>,
	Alistair Popple <apopple@nvidia.com>,
	linux-mm@kvack.org, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org, jhubbard@nvidia.com,
	tjmercier@google.com, hannes@cmpxchg.org, surenb@google.com,
	mkoutny@suse.com, daniel@ffwll.ch,
	"Daniel P . Berrange" <berrange@redhat.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	Zefan Li <lizefan.x@bytedance.com>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH 14/19] mm: Introduce a cgroup for pinned memory
Date: Tue, 21 Feb 2023 08:07:13 -1000
Message-ID: <Y/UIURDjR9pv+gzx@slm.duckdns.org>
In-Reply-To: <Y/UEkNn0O65Pfi4e@nvidia.com>

Hello,

On Tue, Feb 21, 2023 at 01:51:12PM -0400, Jason Gunthorpe wrote:
> > Yeah, so, what I'm trying to say is that that might be the source of the
> > problem. Is the current page ownership attribution correct 
> 
> It should be correct.
> 
> This mechanism is driven by pin_user_pages() (as it is the only API
> that can actually create a pin), so the cgroup owner of the page is
> broadly related to the "owner" of the VMA's inode.
> 
> The owner of the pin is the caller of pin_user_pages(), which is
> initiated by some FD/process that is not necessarily related to the
> VMA's inode.
> 
> E.g. concretely, something like io_uring will do something like:
>   buffer = mmap()     <- Charge memcg for the pages
>   fd = io_uring_setup(..)
>   io_uring_register(fd,xx,buffer,..);   <- Charge the pincg for the pin
> 
> If the mmap is a private anonymous VMA created by the same process,
> then it is likely the pages will have the same cgroup as the
> io_uring_register() caller and the FD.
> 
> Otherwise the page cgroup is unconstrained. MAP_SHARED mappings will
> have the page cgroup point at whatever cgroup was first to allocate
> the page for the VMA's inode.
> 
> AFAIK there are few real use cases to establish a pin on MAP_SHARED
> mappings outside your cgroup. However, it is possible, the APIs allow
> it, and for security sandbox purposes we can't allow a process inside
> a cgroup to trigger a charge on a different cgroup. That breaks the
> sandbox goal.
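
A minimal userspace sketch of the flow quoted above, assuming liburing is
available (build with cc -luring); the buffer size and error handling are
illustrative only, and the pin charge noted in the comments is what the
proposed pins cgroup would add, not something the current kernel does:

  #include <liburing.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <sys/uio.h>

  int main(void)
  {
      size_t len = 1 << 20;       /* 1 MiB anonymous buffer */
      struct io_uring ring;
      struct iovec iov;

      /* buffer = mmap()  <- pages charged to this task's memcg on fault */
      void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (buf == MAP_FAILED)
          return 1;
      memset(buf, 0, len);        /* fault the pages in */

      /* fd = io_uring_setup(..) */
      if (io_uring_queue_init(8, &ring, 0))
          return 1;

      /* io_uring_register(fd, IORING_REGISTER_BUFFERS, ...) pins the
       * buffer via pin_user_pages(); this is the step the pin charge
       * would be attached to */
      iov.iov_base = buf;
      iov.iov_len = len;
      if (io_uring_register_buffers(&ring, &iov, 1))
          return 1;

      io_uring_queue_exit(&ring); /* tear down the ring, dropping the pins */
      munmap(buf, len);
      return 0;
  }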

It seems broken anyway. Please consider the following scenario:

1. A is a tiny cgroup which only does streaming IOs and has a memory.high
   of 128M, which is more than sufficient for its IO window. The last file
   it streamed happened to be F, which was about 256M.

2. B is a much larger cgroup with a pin limit well above 256M. B pins the
   entirety of F.

3. A now tries to stream another file but F is almost fully occupying its
   memory allowance and can't be evicted. A keeps thrashing due to lack of
   memory and isolation is completely broken.

This stems directly from the discrepancy between page ownership and pin
accounting.
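
For concreteness, step 2 could look like the sketch below: a task in B maps
F with MAP_SHARED and registers the mapping as a fixed io_uring buffer
(RDMA memory registration or a VFIO DMA mapping would take the same kind of
pin). This assumes liburing and that the kernel allows long-term pins of
this file-backed mapping; the path is a stand-in:

  #include <fcntl.h>
  #include <liburing.h>
  #include <sys/mman.h>
  #include <sys/stat.h>
  #include <sys/uio.h>
  #include <unistd.h>

  int main(void)
  {
      struct io_uring ring;
      struct iovec iov;
      struct stat st;

      /* F: the ~256M file A has been streaming; path is a placeholder */
      int fd = open("/path/to/F", O_RDWR);
      if (fd < 0 || fstat(fd, &st) < 0)
          return 1;

      /* MAP_SHARED: the page cache pages stay charged to whichever memcg
       * first faulted them in, i.e. cgroup A in the scenario above */
      void *buf = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
      if (buf == MAP_FAILED)
          return 1;

      if (io_uring_queue_init(8, &ring, 0))
          return 1;

      /* Registration long-term pins every page of F: the pin would be
       * charged to B's pins cgroup while the pages remain owned by A's
       * memcg, so A can no longer reclaim them */
      iov.iov_base = buf;
      iov.iov_len = st.st_size;
      if (io_uring_register_buffers(&ring, &iov, 1))
          return 1;

      pause();  /* hold the pins for as long as this task lives */
      return 0;
  }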

> If memcg could support multiple owners then it would be logical that
> the pinner would be one of the memcg owners.
> 
> > for whatever is determining the pinning ownership, or should the page
> > ownership be attributed the same way too? If they indeed need to differ,
> > that probably would need pretty strong justifications.
> 
> It is inherent to how pin_user_pages() works. It is an API that
> establishes pins on existing pages. There is nothing about it that says
> who the page's memcg owner is.
> 
> I don't think we can do anything about this without breaking things.

That's a discrepancy in an internal interface and we don't wanna codify
something like that into a userspace interface. Semantically, it seems like
if pin_user_pages() wants to charge pinning to the cgroup associated with an
fd (or whatever), it should also claim the ownership of the pages themselves.
I have no idea how feasible that'd be from the memcg POV tho. Given that this
would be a fairly cold path (in most cases, the ownership should already
match), maybe it won't be too bad?

Thanks.

-- 
tejun


Thread overview: 71+ messages
2023-02-06  7:47 [PATCH 00/19] mm: Introduce a cgroup to limit the amount of locked and " Alistair Popple
2023-02-06  7:47 ` [PATCH 01/19] mm: Introduce vm_account Alistair Popple
2023-02-06  7:47 ` [PATCH 02/19] drivers/vhost: Convert to use vm_account Alistair Popple
2023-02-06  7:47 ` [PATCH 03/19] drivers/vdpa: Convert vdpa to use the new vm_structure Alistair Popple
2023-02-06  7:47 ` [PATCH 04/19] infiniband/umem: Convert to use vm_account Alistair Popple
2023-02-06  7:47 ` [PATCH 05/19] RMDA/siw: " Alistair Popple
2023-02-12 17:32   ` Bernard Metzler
2023-02-06  7:47 ` [PATCH 06/19] RDMA/usnic: convert " Alistair Popple
2023-02-06  7:47 ` [PATCH 07/19] vfio/type1: Charge pinned pages to pinned_vm instead of locked_vm Alistair Popple
2023-02-06  7:47 ` [PATCH 08/19] vfio/spapr_tce: Convert accounting to pinned_vm Alistair Popple
2023-02-06  7:47 ` [PATCH 09/19] io_uring: convert to use vm_account Alistair Popple
2023-02-06 15:29   ` Jens Axboe
2023-02-07  1:03     ` Alistair Popple
2023-02-07 14:28       ` Jens Axboe
2023-02-07 14:55         ` Jason Gunthorpe
2023-02-07 17:05           ` Jens Axboe
2023-02-13 11:30             ` Alistair Popple
2023-02-06  7:47 ` [PATCH 10/19] net: skb: Switch to using vm_account Alistair Popple
2023-02-06  7:47 ` [PATCH 11/19] xdp: convert to use vm_account Alistair Popple
2023-02-06  7:47 ` [PATCH 12/19] kvm/book3s_64_vio: Convert account_locked_vm() to vm_account_pinned() Alistair Popple
2023-02-06  7:47 ` [PATCH 13/19] fpga: dfl: afu: convert to use vm_account Alistair Popple
2023-02-06  7:47 ` [PATCH 14/19] mm: Introduce a cgroup for pinned memory Alistair Popple
2023-02-06 21:01   ` Yosry Ahmed
2023-02-06 21:14   ` Tejun Heo
2023-02-06 22:32     ` Yosry Ahmed
2023-02-06 22:36       ` Tejun Heo
2023-02-06 22:39         ` Yosry Ahmed
2023-02-06 23:25           ` Tejun Heo
2023-02-06 23:34             ` Yosry Ahmed
2023-02-06 23:40             ` Jason Gunthorpe
2023-02-07  0:32               ` Tejun Heo
2023-02-07 12:19                 ` Jason Gunthorpe
2023-02-15 19:00                 ` Michal Hocko
2023-02-15 19:07                   ` Jason Gunthorpe
2023-02-16  8:04                     ` Michal Hocko
2023-02-16 12:45                       ` Jason Gunthorpe
2023-02-21 16:51                         ` Tejun Heo
2023-02-21 17:25                           ` Jason Gunthorpe
2023-02-21 17:29                             ` Tejun Heo
2023-02-21 17:51                               ` Jason Gunthorpe
2023-02-21 18:07                                 ` Tejun Heo [this message]
2023-02-21 19:26                                   ` Jason Gunthorpe
2023-02-21 19:45                                     ` Tejun Heo
2023-02-21 19:49                                       ` Tejun Heo
2023-02-21 19:57                                       ` Jason Gunthorpe
2023-02-22 11:38                                         ` Alistair Popple
2023-02-22 12:57                                           ` Jason Gunthorpe
2023-02-22 22:59                                             ` Alistair Popple
2023-02-23  0:05                                               ` Christoph Hellwig
2023-02-23  0:35                                                 ` Alistair Popple
2023-02-23  1:53                                               ` Jason Gunthorpe
2023-02-23  9:12                                                 ` Daniel P. Berrangé
2023-02-23 17:31                                                   ` Jason Gunthorpe
2023-02-23 17:18                                                 ` T.J. Mercier
2023-02-23 17:28                                                   ` Jason Gunthorpe
2023-02-23 18:03                                                     ` Yosry Ahmed
2023-02-23 18:10                                                       ` Jason Gunthorpe
2023-02-23 18:14                                                         ` Yosry Ahmed
2023-02-23 18:15                                                         ` Tejun Heo
2023-02-23 18:17                                                           ` Jason Gunthorpe
2023-02-23 18:22                                                             ` Tejun Heo
2023-02-07  1:00           ` Waiman Long
2023-02-07  1:03             ` Tejun Heo
2023-02-07  1:50               ` Alistair Popple
2023-02-06  7:47 ` [PATCH 15/19] mm/util: Extend vm_account to charge pages against the pin cgroup Alistair Popple
2023-02-06  7:47 ` [PATCH 16/19] mm/util: Refactor account_locked_vm Alistair Popple
2023-02-06  7:47 ` [PATCH 17/19] mm: Convert mmap and mlock to use account_locked_vm Alistair Popple
2023-02-06  7:47 ` [PATCH 18/19] mm/mmap: Charge locked memory to pins cgroup Alistair Popple
2023-02-06 21:12   ` Yosry Ahmed
2023-02-06  7:47 ` [PATCH 19/19] selftests/vm: Add pins-cgroup selftest for mlock/mmap Alistair Popple
2023-02-16 11:01 ` [PATCH 00/19] mm: Introduce a cgroup to limit the amount of locked and pinned memory David Hildenbrand
