From: Kenny Ho <y2kenny@gmail.com>
To: Leon Romanovsky <leon@kernel.org>
Cc: "Brian Welty" <brian.welty@intel.com>,
	"Alex Deucher" <alexander.deucher@amd.com>,
	"Parav Pandit" <parav@mellanox.com>,
	"David Airlie" <airlied@linux.ie>,
	intel-gfx@lists.freedesktop.org,
	"Jérôme Glisse" <jglisse@redhat.com>,
	dri-devel@lists.freedesktop.org,
	"Michal Hocko" <mhocko@kernel.org>,
	linux-mm@kvack.org, "Rodrigo Vivi" <rodrigo.vivi@intel.com>,
	"Li Zefan" <lizefan@huawei.com>,
	"Vladimir Davydov" <vdavydov.dev@gmail.com>,
	"Johannes Weiner" <hannes@cmpxchg.org>,
	"Tejun Heo" <tj@kernel.org>,
	cgroups@vger.kernel.org,
	"Christian König" <christian.koenig@amd.com>,
	"RDMA mailing list" <linux-rdma@vger.kernel.org>
Subject: Re: [RFC PATCH 0/5] cgroup support for GPU devices
Date: Thu, 2 May 2019 18:48:00 -0400
Message-ID: <CAOWid-cYknxeTQvP9vQf3-i3Cpux+bs7uBs7_o-YMFjVCo19bg@mail.gmail.com>
In-Reply-To: <20190502083433.GP7676@mtr-leonro.mtl.com>

> Count us (Mellanox) in too: our RDMA devices expose special,
> limited-size device memory to users, and we would like to provide
> an option to use cgroups to control its exposure.
Doesn't RDMA already have a separate cgroup?  Why not implement it there?
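For reference, the existing rdma controller already tracks resources
per device through rdma.max and rdma.current; its documentation shows
per-device entries roughly like the following (device names and values
here are only illustrative):

  /sys/fs/cgroup/<cgroup_name>/rdma.max:
    mlx4_0 hca_handle=2 hca_object=2000
    ocrdma1 hca_handle=3 hca_object=max

If device memory is just another countable resource, it seems like it
could be accounted there as an additional key on those per-device lines.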


> > and with future work, we could extend to:
> > *  track and control share of GPU time (reuse of cpu/cpuacct)
> > *  apply mask of allowed execution engines (reuse of cpusets)
> >
> > Instead of introducing a new cgroup subsystem for GPU devices, a new
> > framework is proposed to allow devices to register with existing cgroup
> > controllers, which creates per-device cgroup_subsys_state within the
> > cgroup.  This gives device drivers their own private cgroup controls
> > (such as memory limits or other parameters) to be applied to device
> > resources instead of host system resources.
> > Device drivers (GPU or other) are then able to reuse the existing cgroup
> > controls, instead of inventing similar ones.
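
To make the per-device registration flow concrete, a driver-side call
might look roughly like the sketch below.  Note this is only an
illustration: cgroup_device_register() and its arguments are
hypothetical names, not necessarily what the patches use;
memory_cgrp_subsys is the existing memory controller instance.

  /*
   * Hypothetical sketch: a driver asks the existing memory controller
   * to create per-device cgroup_subsys_state, which would back the
   * <cgroup_name>/memory.devices/<dev_name>/ files described below.
   */
  #include <linux/cgroup.h>
  #include <linux/memcontrol.h>
  #include <linux/device.h>

  static int mydrv_register_cgroups(struct device *dev)
  {
          return cgroup_device_register(&memory_cgrp_subsys, dev);
  }

The point of the design is that the charging and limit logic stays in
the existing controller; the driver only supplies the device identity
and charges its allocations against it.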
> >
> > Per-device controls would be exposed in cgroup filesystem as:
> >     mount/<cgroup_name>/<subsys_name>.devices/<dev_name>/<subsys_files>
> > such as (for example):
> >     mount/<cgroup_name>/memory.devices/<dev_name>/memory.max
> >     mount/<cgroup_name>/memory.devices/<dev_name>/memory.current
> >     mount/<cgroup_name>/cpu.devices/<dev_name>/cpu.stat
> >     mount/<cgroup_name>/cpu.devices/<dev_name>/cpu.weight
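
As a quick usage sketch against that proposed layout (the mount point,
the cgroup name "mygroup", the device name "card0", and the 256M value
are all illustrative), a small userspace helper in C could set a
per-device memory limit like so:

  /*
   * Sketch only: write a device-memory limit into the proposed
   * per-device memory.max file.  The path assumes the layout above
   * with a cgroup "mygroup" and a DRM device "card0".
   */
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
          const char *path =
                  "/sys/fs/cgroup/mygroup/memory.devices/card0/memory.max";
          const char *limit = "256M";
          int fd = open(path, O_WRONLY);

          if (fd < 0) {
                  perror("open");
                  return 1;
          }
          if (write(fd, limit, strlen(limit)) < 0)
                  perror("write");
          close(fd);
          return 0;
  }

Reading memory.devices/<dev_name>/memory.current would then report
per-device usage the same way memory.current does today.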
> >
> > The drm/i915 patch in this series is based on top of other RFC work [1]
> > for i915 device memory support.
> >
> > AMD [2] and Intel [3] have proposed related work in this area within the
> > last few years, listed below for reference.  This new RFC reuses existing
> > cgroup controllers and takes a different approach from that prior work.
> >
> > Finally, some potential discussion points for this series:
> > * merge proposed <subsys_name>.devices into a single devices directory?
> > * allow devices to have multiple registrations for subsets of resources?
> > * document a 'common charging policy' for device drivers to follow?
> >
> > [1] https://patchwork.freedesktop.org/series/56683/
> > [2] https://lists.freedesktop.org/archives/dri-devel/2018-November/197106.html
> > [3] https://lists.freedesktop.org/archives/intel-gfx/2018-January/153156.html
> >
> >
> > Brian Welty (5):
> >   cgroup: Add cgroup_subsys per-device registration framework
> >   cgroup: Change kernfs_node for directories to store
> >     cgroup_subsys_state
> >   memcg: Add per-device support to memory cgroup subsystem
> >   drm: Add memory cgroup registration and DRIVER_CGROUPS feature bit
> >   drm/i915: Use memory cgroup for enforcing device memory limit
> >
> >  drivers/gpu/drm/drm_drv.c                  |  12 +
> >  drivers/gpu/drm/drm_gem.c                  |   7 +
> >  drivers/gpu/drm/i915/i915_drv.c            |   2 +-
> >  drivers/gpu/drm/i915/intel_memory_region.c |  24 +-
> >  include/drm/drm_device.h                   |   3 +
> >  include/drm/drm_drv.h                      |   8 +
> >  include/drm/drm_gem.h                      |  11 +
> >  include/linux/cgroup-defs.h                |  28 ++
> >  include/linux/cgroup.h                     |   3 +
> >  include/linux/memcontrol.h                 |  10 +
> >  kernel/cgroup/cgroup-v1.c                  |  10 +-
> >  kernel/cgroup/cgroup.c                     | 310 ++++++++++++++++++---
> >  mm/memcontrol.c                            | 183 +++++++++++-
> >  13 files changed, 552 insertions(+), 59 deletions(-)
> >
> > --
> > 2.21.0
> >

