* Re: [PATCH 2/3] mm: Charge active memcg when no mm is set
[not found] ` <20210610173944.1203706-3-schatzberg.dan@gmail.com>
@ 2021-06-25 14:47 ` Michal Koutný
0 siblings, 0 replies; 14+ messages in thread
From: Michal Koutný @ 2021-06-25 14:47 UTC (permalink / raw)
To: Dan Schatzberg
Cc: Andrew Morton, open list:BLOCK LAYER, open list,
open list:CONTROL GROUP (CGROUP),
open list:MEMORY MANAGEMENT, Johannes Weiner, Tejun Heo,
Chris Down, Jens Axboe, Shakeel Butt
[-- Attachment #1: Type: text/plain, Size: 862 bytes --]
On Thu, Jun 10, 2021 at 10:39:43AM -0700, Dan Schatzberg <schatzberg.dan@gmail.com> wrote:
> @@ -926,8 +937,17 @@ struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
> * counting is disabled on the root level in the
> * cgroup core. See CSS_NO_REF.
> */
> - if (unlikely(!mm))
> - return root_mem_cgroup;
> + if (unlikely(!mm)) {
> + memcg = active_memcg();
> + if (unlikely(memcg)) {
> + /* remote memcg must hold a ref */
> + css_get(&memcg->css);
> + return memcg;
> + }
> + mm = current->mm;
> + if (unlikely(!mm))
> + return root_mem_cgroup;
> + }
With the change in __add_to_page_cache_locked() all page cache charges
will supply null mm, so the first !mm unlikely hint may not be warranted
anymore. Just an interesting point, generally, I'm adding
Reviewed-by: Michal Koutný <mkoutny@suse.com>
[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 14+ messages in thread
[parent not found: <20210610173944.1203706-4-schatzberg.dan@gmail.com>]
* Re: [PATCH 3/3] loop: Charge i/o to mem and blk cg
[not found] ` <20210610173944.1203706-4-schatzberg.dan@gmail.com>
@ 2021-06-25 15:01 ` Michal Koutný
2021-06-28 14:17 ` Dan Schatzberg
0 siblings, 1 reply; 14+ messages in thread
From: Michal Koutný @ 2021-06-25 15:01 UTC (permalink / raw)
To: Dan Schatzberg
Cc: Andrew Morton, open list:BLOCK LAYER, open list,
open list:CONTROL GROUP (CGROUP),
open list:MEMORY MANAGEMENT, Johannes Weiner, Jens Axboe
[-- Attachment #1: Type: text/plain, Size: 1416 bytes --]
Hi.
On Thu, Jun 10, 2021 at 10:39:44AM -0700, Dan Schatzberg <schatzberg.dan@gmail.com> wrote:
> The current code only associates with the existing blkcg when aio is
> used to access the backing file. This patch covers all types of i/o to
> the backing file and also associates the memcg so if the backing file is
> on tmpfs, memory is charged appropriately.
>
> This patch also exports cgroup_get_e_css and int_active_memcg so it
> can be used by the loop module.
Wouldn't it be clearer to export (not explicitly inlined anymore)
set_active_memcg() instead of the int_active_memcg that's rather an
implementation detail?
> @@ -2111,13 +2112,18 @@ static blk_status_t loop_queue_rq(struct blk_mq_hw_ctx *hctx,
> }
>
> /* always use the first bio's css */
> + cmd->blkcg_css = NULL;
> + cmd->memcg_css = NULL;
> #ifdef CONFIG_BLK_CGROUP
> - if (cmd->use_aio && rq->bio && rq->bio->bi_blkg) {
> - cmd->css = &bio_blkcg(rq->bio)->css;
> - css_get(cmd->css);
> - } else
> + if (rq->bio && rq->bio->bi_blkg) {
> + cmd->blkcg_css = &bio_blkcg(rq->bio)->css;
> +#ifdef CONFIG_MEMCG
> + cmd->memcg_css =
> + cgroup_get_e_css(cmd->blkcg_css->cgroup,
> + &memory_cgrp_subsys);
> +#endif
> + }
> #endif
> - cmd->css = NULL;
> loop_queue_work(lo, cmd);
I see you dropped the cmd->blkcg_css reference (while rq is handled). Is
it intentional?
Thanks,
Michal
[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 14+ messages in thread* Re: [PATCH 3/3] loop: Charge i/o to mem and blk cg
2021-06-25 15:01 ` [PATCH 3/3] loop: Charge i/o to mem and blk cg Michal Koutný
@ 2021-06-28 14:17 ` Dan Schatzberg
2021-06-29 10:26 ` Michal Koutný
0 siblings, 1 reply; 14+ messages in thread
From: Dan Schatzberg @ 2021-06-28 14:17 UTC (permalink / raw)
To: Michal Koutný
Cc: Andrew Morton, open list:BLOCK LAYER, open list,
open list:CONTROL GROUP (CGROUP),
open list:MEMORY MANAGEMENT, Johannes Weiner, Jens Axboe
Hi Michal,
On Fri, Jun 25, 2021 at 05:01:03PM +0200, Michal Koutný wrote:
> Hi.
>
> On Thu, Jun 10, 2021 at 10:39:44AM -0700, Dan Schatzberg <schatzberg.dan@gmail.com> wrote:
> > The current code only associates with the existing blkcg when aio is
> > used to access the backing file. This patch covers all types of i/o to
> > the backing file and also associates the memcg so if the backing file is
> > on tmpfs, memory is charged appropriately.
> >
> > This patch also exports cgroup_get_e_css and int_active_memcg so it
> > can be used by the loop module.
>
> Wouldn't it be clearer to export (not explicitly inlined anymore)
> set_active_memcg() instead of the int_active_memcg that's rather an
> implementation detail?
Agreed that exporting int_active_memcg is an implementation detail,
but would this prevent set_active_memcg from being inlined? Is that
desireable?
>
> > @@ -2111,13 +2112,18 @@ static blk_status_t loop_queue_rq(struct blk_mq_hw_ctx *hctx,
> > }
> >
> > /* always use the first bio's css */
> > + cmd->blkcg_css = NULL;
> > + cmd->memcg_css = NULL;
> > #ifdef CONFIG_BLK_CGROUP
> > - if (cmd->use_aio && rq->bio && rq->bio->bi_blkg) {
> > - cmd->css = &bio_blkcg(rq->bio)->css;
> > - css_get(cmd->css);
> > - } else
> > + if (rq->bio && rq->bio->bi_blkg) {
> > + cmd->blkcg_css = &bio_blkcg(rq->bio)->css;
> > +#ifdef CONFIG_MEMCG
> > + cmd->memcg_css =
> > + cgroup_get_e_css(cmd->blkcg_css->cgroup,
> > + &memory_cgrp_subsys);
> > +#endif
> > + }
> > #endif
> > - cmd->css = NULL;
> > loop_queue_work(lo, cmd);
>
> I see you dropped the cmd->blkcg_css reference (while rq is handled). Is
> it intentional?
Yes it is intentional. All requests (not just aio) go through the loop
worker which grabs the blkcg reference in loop_queue_work() on
construction. So I believe grabbing a reference per request is
unnecessary.
^ permalink raw reply [flat|nested] 14+ messages in thread* Re: [PATCH 3/3] loop: Charge i/o to mem and blk cg
2021-06-28 14:17 ` Dan Schatzberg
@ 2021-06-29 10:26 ` Michal Koutný
2021-06-29 14:03 ` Dan Schatzberg
0 siblings, 1 reply; 14+ messages in thread
From: Michal Koutný @ 2021-06-29 10:26 UTC (permalink / raw)
To: Dan Schatzberg
Cc: Andrew Morton, open list:BLOCK LAYER, open list,
open list:CONTROL GROUP (CGROUP),
open list:MEMORY MANAGEMENT, Johannes Weiner, Jens Axboe
[-- Attachment #1: Type: text/plain, Size: 1031 bytes --]
On Mon, Jun 28, 2021 at 10:17:18AM -0400, Dan Schatzberg <schatzberg.dan@gmail.com> wrote:
> Agreed that exporting int_active_memcg is an implementation detail,
> but would this prevent set_active_memcg from being inlined?
Non-inlining in the loop module doesn't seem like a big trouble. OTOH,
other callers may be more sensitive and would need to rely on inlining.
I can't currently think of a nice way to have both the exported and the
exlicitly inlined variant at once. It seems it's either API or perf
craft in the end but both are uncertain, so I guess the current approach
is fine in the end.
> Yes it is intentional. All requests (not just aio) go through the loop
> worker which grabs the blkcg reference in loop_queue_work() on
> construction. So I believe grabbing a reference per request is
> unnecessary.
Isn't there a window without the reference between loop_queue_rq and
loop_queue_work? I don't know, you seem to know better, so I'd suggest
dropping a comment line into the code explaining this.
Thanks,
Michal
[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 3/3] loop: Charge i/o to mem and blk cg
2021-06-29 10:26 ` Michal Koutný
@ 2021-06-29 14:03 ` Dan Schatzberg
2021-06-30 9:42 ` Michal Koutný
0 siblings, 1 reply; 14+ messages in thread
From: Dan Schatzberg @ 2021-06-29 14:03 UTC (permalink / raw)
To: Michal Koutný
Cc: Andrew Morton, open list:BLOCK LAYER, open list,
open list:CONTROL GROUP (CGROUP),
open list:MEMORY MANAGEMENT, Johannes Weiner, Jens Axboe
> Non-inlining in the loop module doesn't seem like a big trouble. OTOH,
> other callers may be more sensitive and would need to rely on inlining.
Yes, this is my concern as well.
> I can't currently think of a nice way to have both the exported and the
> exlicitly inlined variant at once. It seems it's either API or perf
> craft in the end but both are uncertain, so I guess the current approach
> is fine in the end.
>
> > Yes it is intentional. All requests (not just aio) go through the loop
> > worker which grabs the blkcg reference in loop_queue_work() on
> > construction. So I believe grabbing a reference per request is
> > unnecessary.
>
> Isn't there a window without the reference between loop_queue_rq and
> loop_queue_work?
Hmm, perhaps I'm not understanding how the reference counting works,
but my understanding is that we enter loop_queue_rq with presumably
some code earlier holding a reference to the blkcg, we only need to
acquire a reference sometime before returning from loop_queue_rq. The
"window" between loop_queue_rq and loop_queue_work is all
straight-line code so there's no possibility for the earlier code to
get control back and drop the reference.
> I don't know, you seem to know better, so I'd suggest
> dropping a comment line into the code explaining this.
I wouldn't be so sure that I know any better here :D - I'm fairly
inexperienced in this domain.
Where would you suggest putting such a comment? The change in question
removed a particular case where we explicitly grab a reference to the
blkcg because now we do it uniformly in one place. Would you like a
comment explaining why we acquire a reference for all loop workers or
one explaining specifically why we don't need to acquire one for aio?
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 3/3] loop: Charge i/o to mem and blk cg
2021-06-29 14:03 ` Dan Schatzberg
@ 2021-06-30 9:42 ` Michal Koutný
2021-06-30 14:49 ` Dan Schatzberg
0 siblings, 1 reply; 14+ messages in thread
From: Michal Koutný @ 2021-06-30 9:42 UTC (permalink / raw)
To: Dan Schatzberg
Cc: Andrew Morton, open list:BLOCK LAYER, open list,
open list:CONTROL GROUP (CGROUP),
open list:MEMORY MANAGEMENT, Johannes Weiner, Jens Axboe
[-- Attachment #1: Type: text/plain, Size: 2000 bytes --]
On Tue, Jun 29, 2021 at 10:03:33AM -0400, Dan Schatzberg <schatzberg.dan@gmail.com> wrote:
> Hmm, perhaps I'm not understanding how the reference counting works,
> but my understanding is that we enter loop_queue_rq with presumably
> some code earlier holding a reference to the blkcg, we only need to
> acquire a reference sometime before returning from loop_queue_rq. The
> "window" between loop_queue_rq and loop_queue_work is all
> straight-line code so there's no possibility for the earlier code to
> get control back and drop the reference.
I don't say the current implementation is wrong, it just looked
suspicious to me when the css address is copied without taking the
reference.
The straight path is clear, I'm not sure about later invocations through
loop_workfn where the blkcg_css is accessed via the cmd->blkcg_css.
> Where would you suggest putting such a comment?
This is how I understand it:
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -996,6 +996,7 @@ static void loop_queue_work(struct loop_device *lo, struct loop_cmd *cmd)
rb_insert_color(&worker->rb_node, &lo->worker_tree);
queue_work:
if (worker) {
+ WARN_ON_ONCE(worker->blkcg_css != cmd->blkcg_css);
/*
* We need to remove from the idle list here while
* holding the lock so that the idle timer doesn't
@@ -2106,6 +2107,8 @@ static blk_status_t loop_queue_rq(struct blk_mq_hw_ctx *hctx,
cmd->memcg_css = NULL;
#ifdef CONFIG_BLK_CGROUP
if (rq->bio && rq->bio->bi_blkg) {
+ /* reference to blkcg_css will be held by loop_worker (outlives
+ * cmd) or it is the eternal root css */
cmd->blkcg_css = &bio_blkcg(rq->bio)->css;
#ifdef CONFIG_MEMCG
cmd->memcg_css =
(On further thoughts, maybe the blkcg_css reference isn't needed even in
the loop_worker if it can be reasoned that blkcg_css won't go away while
there's an outstanding rq.)
HTH,
Michal
[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 14+ messages in thread* Re: [PATCH 3/3] loop: Charge i/o to mem and blk cg
2021-06-30 9:42 ` Michal Koutný
@ 2021-06-30 14:49 ` Dan Schatzberg
0 siblings, 0 replies; 14+ messages in thread
From: Dan Schatzberg @ 2021-06-30 14:49 UTC (permalink / raw)
To: Michal Koutný
Cc: Andrew Morton, open list:BLOCK LAYER, open list,
open list:CONTROL GROUP (CGROUP),
open list:MEMORY MANAGEMENT, Johannes Weiner, Jens Axboe
> This is how I understand it:
>
> --- a/drivers/block/loop.c
> +++ b/drivers/block/loop.c
> @@ -996,6 +996,7 @@ static void loop_queue_work(struct loop_device *lo, struct loop_cmd *cmd)
> rb_insert_color(&worker->rb_node, &lo->worker_tree);
> queue_work:
> if (worker) {
> + WARN_ON_ONCE(worker->blkcg_css != cmd->blkcg_css);
Yes, this is correct. Though the check here seems a bit obvious to me
- it must be correct because we assign worker above:
if (cur_worker->blkcg_css == cmd->blkcg_css) {
worker = cur_worker;
break;
or when we construct the worker:
worker->blkcg_css = cmd->blkcg_css;
I think this WARN_ON_ONCE check might be more interesting in
loop_process_work which invokes loop_handle_cmd and actually uses
cmd->blkcg_css. In any event, your understanding is correct here.
> /*
> * We need to remove from the idle list here while
> * holding the lock so that the idle timer doesn't
> @@ -2106,6 +2107,8 @@ static blk_status_t loop_queue_rq(struct blk_mq_hw_ctx *hctx,
> cmd->memcg_css = NULL;
> #ifdef CONFIG_BLK_CGROUP
> if (rq->bio && rq->bio->bi_blkg) {
> + /* reference to blkcg_css will be held by loop_worker (outlives
> + * cmd) or it is the eternal root css */
Yes, this is correct. Feel free to add my Acked-by to such a patch
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH V13 0/3] Charge loop device i/o to issuing cgroup
@ 2021-06-03 14:57 Dan Schatzberg
[not found] ` <20210603145707.4031641-3-schatzberg.dan@gmail.com>
0 siblings, 1 reply; 14+ messages in thread
From: Dan Schatzberg @ 2021-06-03 14:57 UTC (permalink / raw)
To: Jens Axboe
Cc: open list:BLOCK LAYER, open list,
open list:CONTROL GROUP (CGROUP),
open list:MEMORY MANAGEMENT
No significant changes, rebased on Linus's tree.
Jens, this series was intended to go into the mm tree since it had
some conflicts with mm changes. It never got picked up for 5.12 and
the corresponding mm changes are now in linus's tree. This is mostly a
loop change so it feels more appropriate to go through the block tree.
Do you think that makes sense?
Changes since V12:
* Small change to get_mem_cgroup_from_mm to avoid needing
get_active_memcg
Changes since V11:
* Removed WQ_MEM_RECLAIM flag from loop workqueue. Technically, this
can be driven by writeback, but this was causing a warning in xfs
and likely other filesystems aren't equipped to be driven by reclaim
at the VFS layer.
* Included a small fix from Colin Ian King.
* reworked get_mem_cgroup_from_mm to institute the necessary charge
priority.
Changes since V10:
* Added page-cache charging to mm: Charge active memcg when no mm is set
Changes since V9:
* Rebased against linus's branch which now includes Roman Gushchin's
patch this series is based off of
Changes since V8:
* Rebased on top of Roman Gushchin's patch
(https://lkml.org/lkml/2020/8/21/1464) which provides the nesting
support for setting active memcg. Dropped the patch from this series
that did the same thing.
Changes since V7:
* Rebased against linus's branch
Changes since V6:
* Added separate spinlock for worker synchronization
* Minor style changes
Changes since V5:
* Fixed a missing css_put when failing to allocate a worker
* Minor style changes
Changes since V4:
Only patches 1 and 2 have changed.
* Fixed irq lock ordering bug
* Simplified loop detach
* Added support for nesting memalloc_use_memcg
Changes since V3:
* Fix race on loop device destruction and deferred worker cleanup
* Ensure charge on shmem_swapin_page works just like getpage
* Minor style changes
Changes since V2:
* Deferred destruction of workqueue items so in the common case there
is no allocation needed
Changes since V1:
* Split out and reordered patches so cgroup charging changes are
separate from kworker -> workqueue change
* Add mem_css to struct loop_cmd to simplify logic
The loop device runs all i/o to the backing file on a separate kworker
thread which results in all i/o being charged to the root cgroup. This
allows a loop device to be used to trivially bypass resource limits
and other policy. This patch series fixes this gap in accounting.
A simple script to demonstrate this behavior on cgroupv2 machine:
'''
#!/bin/bash
set -e
CGROUP=/sys/fs/cgroup/test.slice
LOOP_DEV=/dev/loop0
if [[ ! -d $CGROUP ]]
then
sudo mkdir $CGROUP
fi
grep oom_kill $CGROUP/memory.events
# Set a memory limit, write more than that limit to tmpfs -> OOM kill
sudo unshare -m bash -c "
echo \$\$ > $CGROUP/cgroup.procs;
echo 0 > $CGROUP/memory.swap.max;
echo 64M > $CGROUP/memory.max;
mount -t tmpfs -o size=512m tmpfs /tmp;
dd if=/dev/zero of=/tmp/file bs=1M count=256" || true
grep oom_kill $CGROUP/memory.events
# Set a memory limit, write more than that limit through loopback
# device -> no OOM kill
sudo unshare -m bash -c "
echo \$\$ > $CGROUP/cgroup.procs;
echo 0 > $CGROUP/memory.swap.max;
echo 64M > $CGROUP/memory.max;
mount -t tmpfs -o size=512m tmpfs /tmp;
truncate -s 512m /tmp/backing_file
losetup $LOOP_DEV /tmp/backing_file
dd if=/dev/zero of=$LOOP_DEV bs=1M count=256;
losetup -D $LOOP_DEV" || true
grep oom_kill $CGROUP/memory.events
'''
Naively charging cgroups could result in priority inversions through
the single kworker thread in the case where multiple cgroups are
reading/writing to the same loop device. This patch series does some
minor modification to the loop driver so that each cgroup can make
forward progress independently to avoid this inversion.
With this patch series applied, the above script triggers OOM kills
when writing through the loop device as expected.
Dan Schatzberg (3):
loop: Use worker per cgroup instead of kworker
mm: Charge active memcg when no mm is set
loop: Charge i/o to mem and blk cg
drivers/block/loop.c | 241 ++++++++++++++++++++++++++++++-------
drivers/block/loop.h | 15 ++-
include/linux/memcontrol.h | 6 +
kernel/cgroup/cgroup.c | 1 +
mm/filemap.c | 2 +-
mm/memcontrol.c | 49 +++++---
mm/shmem.c | 4 +-
7 files changed, 250 insertions(+), 68 deletions(-)
--
2.30.2
^ permalink raw reply [flat|nested] 14+ messages in thread* [PATCH V12 0/3] Charge loop device i/o to issuing cgroup
@ 2021-04-02 19:16 Dan Schatzberg
2021-04-02 19:16 ` [PATCH 2/3] mm: Charge active memcg when no mm is set Dan Schatzberg
0 siblings, 1 reply; 14+ messages in thread
From: Dan Schatzberg @ 2021-04-02 19:16 UTC (permalink / raw)
Cc: Jens Axboe, Tejun Heo, Zefan Li, Johannes Weiner, Andrew Morton,
Michal Hocko, Vladimir Davydov, Hugh Dickins, Shakeel Butt,
Roman Gushchin, Muchun Song, Yang Shi, Alex Shi, Alexander Duyck,
Wei Yang, open list:BLOCK LAYER, open list,
open list:CONTROL GROUP (CGROUP),
open list:MEMORY MANAGEMENT
No major changes, rebased on top of latest mm tree
Changes since V12:
* Small change to get_mem_cgroup_from_mm to avoid needing
get_active_memcg
Changes since V11:
* Removed WQ_MEM_RECLAIM flag from loop workqueue. Technically, this
can be driven by writeback, but this was causing a warning in xfs
and likely other filesystems aren't equipped to be driven by reclaim
at the VFS layer.
* Included a small fix from Colin Ian King.
* reworked get_mem_cgroup_from_mm to institute the necessary charge
priority.
Changes since V10:
* Added page-cache charging to mm: Charge active memcg when no mm is set
Changes since V9:
* Rebased against linus's branch which now includes Roman Gushchin's
patch this series is based off of
Changes since V8:
* Rebased on top of Roman Gushchin's patch
(https://lkml.org/lkml/2020/8/21/1464) which provides the nesting
support for setting active memcg. Dropped the patch from this series
that did the same thing.
Changes since V7:
* Rebased against linus's branch
Changes since V6:
* Added separate spinlock for worker synchronization
* Minor style changes
Changes since V5:
* Fixed a missing css_put when failing to allocate a worker
* Minor style changes
Changes since V4:
Only patches 1 and 2 have changed.
* Fixed irq lock ordering bug
* Simplified loop detach
* Added support for nesting memalloc_use_memcg
Changes since V3:
* Fix race on loop device destruction and deferred worker cleanup
* Ensure charge on shmem_swapin_page works just like getpage
* Minor style changes
Changes since V2:
* Deferred destruction of workqueue items so in the common case there
is no allocation needed
Changes since V1:
* Split out and reordered patches so cgroup charging changes are
separate from kworker -> workqueue change
* Add mem_css to struct loop_cmd to simplify logic
The loop device runs all i/o to the backing file on a separate kworker
thread which results in all i/o being charged to the root cgroup. This
allows a loop device to be used to trivially bypass resource limits
and other policy. This patch series fixes this gap in accounting.
A simple script to demonstrate this behavior on cgroupv2 machine:
'''
#!/bin/bash
set -e
CGROUP=/sys/fs/cgroup/test.slice
LOOP_DEV=/dev/loop0
if [[ ! -d $CGROUP ]]
then
sudo mkdir $CGROUP
fi
grep oom_kill $CGROUP/memory.events
# Set a memory limit, write more than that limit to tmpfs -> OOM kill
sudo unshare -m bash -c "
echo \$\$ > $CGROUP/cgroup.procs;
echo 0 > $CGROUP/memory.swap.max;
echo 64M > $CGROUP/memory.max;
mount -t tmpfs -o size=512m tmpfs /tmp;
dd if=/dev/zero of=/tmp/file bs=1M count=256" || true
grep oom_kill $CGROUP/memory.events
# Set a memory limit, write more than that limit through loopback
# device -> no OOM kill
sudo unshare -m bash -c "
echo \$\$ > $CGROUP/cgroup.procs;
echo 0 > $CGROUP/memory.swap.max;
echo 64M > $CGROUP/memory.max;
mount -t tmpfs -o size=512m tmpfs /tmp;
truncate -s 512m /tmp/backing_file
losetup $LOOP_DEV /tmp/backing_file
dd if=/dev/zero of=$LOOP_DEV bs=1M count=256;
losetup -D $LOOP_DEV" || true
grep oom_kill $CGROUP/memory.events
'''
Naively charging cgroups could result in priority inversions through
the single kworker thread in the case where multiple cgroups are
reading/writing to the same loop device. This patch series does some
minor modification to the loop driver so that each cgroup can make
forward progress independently to avoid this inversion.
With this patch series applied, the above script triggers OOM kills
when writing through the loop device as expected.
Dan Schatzberg (3):
loop: Use worker per cgroup instead of kworker
mm: Charge active memcg when no mm is set
loop: Charge i/o to mem and blk cg
drivers/block/loop.c | 244 ++++++++++++++++++++++++++++++-------
drivers/block/loop.h | 15 ++-
include/linux/memcontrol.h | 6 +
kernel/cgroup/cgroup.c | 1 +
mm/filemap.c | 2 +-
mm/memcontrol.c | 49 +++++---
mm/shmem.c | 4 +-
7 files changed, 253 insertions(+), 68 deletions(-)
--
2.30.2
^ permalink raw reply [flat|nested] 14+ messages in thread* [PATCH 2/3] mm: Charge active memcg when no mm is set
2021-04-02 19:16 [PATCH V12 0/3] Charge loop device i/o to issuing cgroup Dan Schatzberg
@ 2021-04-02 19:16 ` Dan Schatzberg
0 siblings, 0 replies; 14+ messages in thread
From: Dan Schatzberg @ 2021-04-02 19:16 UTC (permalink / raw)
Cc: Jens Axboe, Tejun Heo, Zefan Li, Johannes Weiner, Andrew Morton,
Michal Hocko, Vladimir Davydov, Hugh Dickins, Shakeel Butt,
Roman Gushchin, Muchun Song, Yang Shi, Alex Shi, Alexander Duyck,
Wei Yang, open list:BLOCK LAYER, open list,
open list:CONTROL GROUP (CGROUP),
open list:MEMORY MANAGEMENT, Chris Down
set_active_memcg() worked for kernel allocations but was silently
ignored for user pages.
This patch establishes a precedence order for who gets charged:
1. If there is a memcg associated with the page already, that memcg is
charged. This happens during swapin.
2. If an explicit mm is passed, mm->memcg is charged. This happens
during page faults, which can be triggered in remote VMs (eg gup).
3. Otherwise consult the current process context. If there is an
active_memcg, use that. Otherwise, current->mm->memcg.
Previously, if a NULL mm was passed to mem_cgroup_charge (case 3) it
would always charge the root cgroup. Now it looks up the active_memcg
first (falling back to charging the root cgroup if not set).
Signed-off-by: Dan Schatzberg <schatzberg.dan@gmail.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: Chris Down <chris@chrisdown.name>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
---
mm/filemap.c | 2 +-
mm/memcontrol.c | 48 +++++++++++++++++++++++++++++++-----------------
mm/shmem.c | 4 ++--
3 files changed, 34 insertions(+), 20 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index c03463cb72d6..38648f7d2106 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -872,7 +872,7 @@ noinline int __add_to_page_cache_locked(struct page *page,
page->index = offset;
if (!huge) {
- error = mem_cgroup_charge(page, current->mm, gfp);
+ error = mem_cgroup_charge(page, NULL, gfp);
if (error)
goto error;
charged = true;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index c0b83a396299..d2939d6602b3 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -886,13 +886,24 @@ struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p)
}
EXPORT_SYMBOL(mem_cgroup_from_task);
+static __always_inline struct mem_cgroup *active_memcg(void)
+{
+ if (in_interrupt())
+ return this_cpu_read(int_active_memcg);
+ else
+ return current->active_memcg;
+}
+
/**
* get_mem_cgroup_from_mm: Obtain a reference on given mm_struct's memcg.
* @mm: mm from which memcg should be extracted. It can be NULL.
*
- * Obtain a reference on mm->memcg and returns it if successful. Otherwise
- * root_mem_cgroup is returned. However if mem_cgroup is disabled, NULL is
- * returned.
+ * Obtain a reference on mm->memcg and returns it if successful. If mm
+ * is NULL, then the memcg is chosen as follows:
+ * 1) The active memcg, if set.
+ * 2) current->mm->memcg, if available
+ * 3) root memcg
+ * If mem_cgroup is disabled, NULL is returned.
*/
struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
{
@@ -901,13 +912,23 @@ struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
if (mem_cgroup_disabled())
return NULL;
+ /*
+ * Page cache insertions can happen without an
+ * actual mm context, e.g. during disk probing
+ * on boot, loopback IO, acct() writes etc.
+ */
+ if (unlikely(!mm)) {
+ memcg = active_memcg();
+ if (unlikely(memcg)) {
+ /* remote memcg must hold a ref */
+ css_get(&memcg->css);
+ return memcg;
+ }
+ mm = current->mm;
+ }
+
rcu_read_lock();
do {
- /*
- * Page cache insertions can happen without an
- * actual mm context, e.g. during disk probing
- * on boot, loopback IO, acct() writes etc.
- */
if (unlikely(!mm))
memcg = root_mem_cgroup;
else {
@@ -921,14 +942,6 @@ struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
}
EXPORT_SYMBOL(get_mem_cgroup_from_mm);
-static __always_inline struct mem_cgroup *active_memcg(void)
-{
- if (in_interrupt())
- return this_cpu_read(int_active_memcg);
- else
- return current->active_memcg;
-}
-
static __always_inline bool memcg_kmem_bypass(void)
{
/* Allow remote memcg charging from any context. */
@@ -6537,7 +6550,8 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
* @gfp_mask: reclaim mode
*
* Try to charge @page to the memcg that @mm belongs to, reclaiming
- * pages according to @gfp_mask if necessary.
+ * pages according to @gfp_mask if necessary. if @mm is NULL, try to
+ * charge to the active memcg.
*
* Do not use this for pages allocated for swapin.
*
diff --git a/mm/shmem.c b/mm/shmem.c
index 5cfd2fb6e52b..524fa5aa0459 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1694,7 +1694,7 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
{
struct address_space *mapping = inode->i_mapping;
struct shmem_inode_info *info = SHMEM_I(inode);
- struct mm_struct *charge_mm = vma ? vma->vm_mm : current->mm;
+ struct mm_struct *charge_mm = vma ? vma->vm_mm : NULL;
struct page *page;
swp_entry_t swap;
int error;
@@ -1815,7 +1815,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
}
sbinfo = SHMEM_SB(inode->i_sb);
- charge_mm = vma ? vma->vm_mm : current->mm;
+ charge_mm = vma ? vma->vm_mm : NULL;
page = pagecache_get_page(mapping, index,
FGP_ENTRY | FGP_HEAD | FGP_LOCK, 0);
--
2.30.2
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH V11 0/3] Charge loop device i/o to issuing cgroup
@ 2021-03-29 14:48 Dan Schatzberg
2021-03-29 14:48 ` [PATCH 2/3] mm: Charge active memcg when no mm is set Dan Schatzberg
0 siblings, 1 reply; 14+ messages in thread
From: Dan Schatzberg @ 2021-03-29 14:48 UTC (permalink / raw)
Cc: Jens Axboe, Tejun Heo, Zefan Li, Johannes Weiner, Andrew Morton,
Michal Hocko, Vladimir Davydov, Hugh Dickins, Shakeel Butt,
Roman Gushchin, Yang Shi, Muchun Song, Alex Shi, Alexander Duyck,
Yafang Shao, Wei Yang, open list:BLOCK LAYER, open list,
open list:CONTROL GROUP (CGROUP),
open list:MEMORY MANAGEMENT
No major changes, rebased on top of latest mm tree
Changes since V11:
* Removed WQ_MEM_RECLAIM flag from loop workqueue. Technically, this
can be driven by writeback, but this was causing a warning in xfs
and likely other filesystems aren't equipped to be driven by reclaim
at the VFS layer.
* Included a small fix from Colin Ian King.
* reworked get_mem_cgroup_from_mm to institute the necessary charge
priority.
Changes since V10:
* Added page-cache charging to mm: Charge active memcg when no mm is set
Changes since V9:
* Rebased against linus's branch which now includes Roman Gushchin's
patch this series is based off of
Changes since V8:
* Rebased on top of Roman Gushchin's patch
(https://lkml.org/lkml/2020/8/21/1464) which provides the nesting
support for setting active memcg. Dropped the patch from this series
that did the same thing.
Changes since V7:
* Rebased against linus's branch
Changes since V6:
* Added separate spinlock for worker synchronization
* Minor style changes
Changes since V5:
* Fixed a missing css_put when failing to allocate a worker
* Minor style changes
Changes since V4:
Only patches 1 and 2 have changed.
* Fixed irq lock ordering bug
* Simplified loop detach
* Added support for nesting memalloc_use_memcg
Changes since V3:
* Fix race on loop device destruction and deferred worker cleanup
* Ensure charge on shmem_swapin_page works just like getpage
* Minor style changes
Changes since V2:
* Deferred destruction of workqueue items so in the common case there
is no allocation needed
Changes since V1:
* Split out and reordered patches so cgroup charging changes are
separate from kworker -> workqueue change
* Add mem_css to struct loop_cmd to simplify logic
The loop device runs all i/o to the backing file on a separate kworker
thread which results in all i/o being charged to the root cgroup. This
allows a loop device to be used to trivially bypass resource limits
and other policy. This patch series fixes this gap in accounting.
A simple script to demonstrate this behavior on cgroupv2 machine:
'''
#!/bin/bash
set -e
CGROUP=/sys/fs/cgroup/test.slice
LOOP_DEV=/dev/loop0
if [[ ! -d $CGROUP ]]
then
sudo mkdir $CGROUP
fi
grep oom_kill $CGROUP/memory.events
# Set a memory limit, write more than that limit to tmpfs -> OOM kill
sudo unshare -m bash -c "
echo \$\$ > $CGROUP/cgroup.procs;
echo 0 > $CGROUP/memory.swap.max;
echo 64M > $CGROUP/memory.max;
mount -t tmpfs -o size=512m tmpfs /tmp;
dd if=/dev/zero of=/tmp/file bs=1M count=256" || true
grep oom_kill $CGROUP/memory.events
# Set a memory limit, write more than that limit through loopback
# device -> no OOM kill
sudo unshare -m bash -c "
echo \$\$ > $CGROUP/cgroup.procs;
echo 0 > $CGROUP/memory.swap.max;
echo 64M > $CGROUP/memory.max;
mount -t tmpfs -o size=512m tmpfs /tmp;
truncate -s 512m /tmp/backing_file
losetup $LOOP_DEV /tmp/backing_file
dd if=/dev/zero of=$LOOP_DEV bs=1M count=256;
losetup -D $LOOP_DEV" || true
grep oom_kill $CGROUP/memory.events
'''
Naively charging cgroups could result in priority inversions through
the single kworker thread in the case where multiple cgroups are
reading/writing to the same loop device. This patch series does some
minor modification to the loop driver so that each cgroup can make
forward progress independently to avoid this inversion.
With this patch series applied, the above script triggers OOM kills
when writing through the loop device as expected.
Dan Schatzberg (3):
loop: Use worker per cgroup instead of kworker
mm: Charge active memcg when no mm is set
loop: Charge i/o to mem and blk cg
drivers/block/loop.c | 248 ++++++++++++++++++++++++++++++-------
drivers/block/loop.h | 15 ++-
include/linux/memcontrol.h | 6 +
kernel/cgroup/cgroup.c | 1 +
mm/filemap.c | 2 +-
mm/memcontrol.c | 73 ++++++-----
mm/shmem.c | 4 +-
7 files changed, 267 insertions(+), 82 deletions(-)
--
2.30.2
^ permalink raw reply [flat|nested] 14+ messages in thread* [PATCH 2/3] mm: Charge active memcg when no mm is set
2021-03-29 14:48 [PATCH V11 0/3] Charge loop device i/o to issuing cgroup Dan Schatzberg
@ 2021-03-29 14:48 ` Dan Schatzberg
0 siblings, 0 replies; 14+ messages in thread
From: Dan Schatzberg @ 2021-03-29 14:48 UTC (permalink / raw)
Cc: Jens Axboe, Tejun Heo, Zefan Li, Johannes Weiner, Andrew Morton,
Michal Hocko, Vladimir Davydov, Hugh Dickins, Shakeel Butt,
Roman Gushchin, Yang Shi, Muchun Song, Alex Shi, Alexander Duyck,
Yafang Shao, Wei Yang, open list:BLOCK LAYER, open list,
open list:CONTROL GROUP (CGROUP),
open list:MEMORY MANAGEMENT, Chris Down
set_active_memcg() worked for kernel allocations but was silently
ignored for user pages.
This patch establishes a precedence order for who gets charged:
1. If there is a memcg associated with the page already, that memcg is
charged. This happens during swapin.
2. If an explicit mm is passed, mm->memcg is charged. This happens
during page faults, which can be triggered in remote VMs (eg gup).
3. Otherwise consult the current process context. If there is an
active_memcg, use that. Otherwise, current->mm->memcg.
Previously, if a NULL mm was passed to mem_cgroup_charge (case 3) it
would always charge the root cgroup. Now it looks up the active_memcg
first (falling back to charging the root cgroup if not set).
Signed-off-by: Dan Schatzberg <schatzberg.dan@gmail.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: Chris Down <chris@chrisdown.name>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
---
mm/filemap.c | 2 +-
mm/memcontrol.c | 72 ++++++++++++++++++++++++++++---------------------
mm/shmem.c | 4 +--
3 files changed, 44 insertions(+), 34 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index eeeb8e2cc36a..63fd980e863a 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -872,7 +872,7 @@ noinline int __add_to_page_cache_locked(struct page *page,
page->index = offset;
if (!huge) {
- error = mem_cgroup_charge(page, current->mm, gfp);
+ error = mem_cgroup_charge(page, NULL, gfp);
if (error)
goto error;
charged = true;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 668d1d7c2645..adc618814fd2 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -884,13 +884,38 @@ struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p)
}
EXPORT_SYMBOL(mem_cgroup_from_task);
+static __always_inline struct mem_cgroup *active_memcg(void)
+{
+ if (in_interrupt())
+ return this_cpu_read(int_active_memcg);
+ else
+ return current->active_memcg;
+}
+
+static __always_inline struct mem_cgroup *get_active_memcg(void)
+{
+ struct mem_cgroup *memcg;
+
+ rcu_read_lock();
+ memcg = active_memcg();
+ /* remote memcg must hold a ref. */
+ if (memcg && WARN_ON_ONCE(!css_tryget(&memcg->css)))
+ memcg = root_mem_cgroup;
+ rcu_read_unlock();
+
+ return memcg;
+}
+
/**
* get_mem_cgroup_from_mm: Obtain a reference on given mm_struct's memcg.
* @mm: mm from which memcg should be extracted. It can be NULL.
*
- * Obtain a reference on mm->memcg and returns it if successful. Otherwise
- * root_mem_cgroup is returned. However if mem_cgroup is disabled, NULL is
- * returned.
+ * Obtain a reference on mm->memcg and returns it if successful. If mm
+ * is NULL, then the memcg is chosen as follows:
+ * 1) The active memcg, if set.
+ * 2) current->mm->memcg, if available
+ * 3) root memcg
+ * If mem_cgroup is disabled, NULL is returned.
*/
struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
{
@@ -899,13 +924,19 @@ struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
if (mem_cgroup_disabled())
return NULL;
+ /*
+ * Page cache insertions can happen without an
+ * actual mm context, e.g. during disk probing
+ * on boot, loopback IO, acct() writes etc.
+ */
+ if (unlikely(!mm)) {
+ if (unlikely(active_memcg()))
+ return get_active_memcg();
+ mm = current->mm;
+ }
+
rcu_read_lock();
do {
- /*
- * Page cache insertions can happen withou an
- * actual mm context, e.g. during disk probing
- * on boot, loopback IO, acct() writes etc.
- */
if (unlikely(!mm))
memcg = root_mem_cgroup;
else {
@@ -919,28 +950,6 @@ struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
}
EXPORT_SYMBOL(get_mem_cgroup_from_mm);
-static __always_inline struct mem_cgroup *active_memcg(void)
-{
- if (in_interrupt())
- return this_cpu_read(int_active_memcg);
- else
- return current->active_memcg;
-}
-
-static __always_inline struct mem_cgroup *get_active_memcg(void)
-{
- struct mem_cgroup *memcg;
-
- rcu_read_lock();
- memcg = active_memcg();
- /* remote memcg must hold a ref. */
- if (memcg && WARN_ON_ONCE(!css_tryget(&memcg->css)))
- memcg = root_mem_cgroup;
- rcu_read_unlock();
-
- return memcg;
-}
-
static __always_inline bool memcg_kmem_bypass(void)
{
/* Allow remote memcg charging from any context. */
@@ -6549,7 +6558,8 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
* @gfp_mask: reclaim mode
*
* Try to charge @page to the memcg that @mm belongs to, reclaiming
- * pages according to @gfp_mask if necessary.
+ * pages according to @gfp_mask if necessary. if @mm is NULL, try to
+ * charge to the active memcg.
*
* Do not use this for pages allocated for swapin.
*
diff --git a/mm/shmem.c b/mm/shmem.c
index 78ab81a62b29..7c09276125d5 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1694,7 +1694,7 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
{
struct address_space *mapping = inode->i_mapping;
struct shmem_inode_info *info = SHMEM_I(inode);
- struct mm_struct *charge_mm = vma ? vma->vm_mm : current->mm;
+ struct mm_struct *charge_mm = vma ? vma->vm_mm : NULL;
struct page *page;
swp_entry_t swap;
int error;
@@ -1815,7 +1815,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
}
sbinfo = SHMEM_SB(inode->i_sb);
- charge_mm = vma ? vma->vm_mm : current->mm;
+ charge_mm = vma ? vma->vm_mm : NULL;
page = pagecache_get_page(mapping, index,
FGP_ENTRY | FGP_HEAD | FGP_LOCK, 0);
--
2.30.2
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH v10 0/3] Charge loop device i/o to issuing cgroup
@ 2021-03-16 15:36 Dan Schatzberg
2021-03-16 15:36 ` [PATCH 2/3] mm: Charge active memcg when no mm is set Dan Schatzberg
0 siblings, 1 reply; 14+ messages in thread
From: Dan Schatzberg @ 2021-03-16 15:36 UTC (permalink / raw)
Cc: Jens Axboe, Tejun Heo, Zefan Li, Johannes Weiner, Andrew Morton,
Michal Hocko, Vladimir Davydov, Hugh Dickins, Shakeel Butt,
Roman Gushchin, Muchun Song, Alex Shi, Alexander Duyck,
Chris Down, Yafang Shao, Wei Yang, open list:BLOCK LAYER,
open list, open list:CONTROL GROUP (CGROUP),
open list:MEMORY MANAGEMENT
No major changes, just rebasing and resubmitting
Changes since V10:
* Added page-cache charging to mm: Charge active memcg when no mm is set
Changes since V9:
* Rebased against linus's branch which now includes Roman Gushchin's
patch this series is based off of
Changes since V8:
* Rebased on top of Roman Gushchin's patch
(https://lkml.org/lkml/2020/8/21/1464) which provides the nesting
support for setting active memcg. Dropped the patch from this series
that did the same thing.
Changes since V7:
* Rebased against linus's branch
Changes since V6:
* Added separate spinlock for worker synchronization
* Minor style changes
Changes since V5:
* Fixed a missing css_put when failing to allocate a worker
* Minor style changes
Changes since V4:
Only patches 1 and 2 have changed.
* Fixed irq lock ordering bug
* Simplified loop detach
* Added support for nesting memalloc_use_memcg
Changes since V3:
* Fix race on loop device destruction and deferred worker cleanup
* Ensure charge on shmem_swapin_page works just like getpage
* Minor style changes
Changes since V2:
* Deferred destruction of workqueue items so in the common case there
is no allocation needed
Changes since V1:
* Split out and reordered patches so cgroup charging changes are
separate from kworker -> workqueue change
* Add mem_css to struct loop_cmd to simplify logic
The loop device runs all i/o to the backing file on a separate kworker
thread which results in all i/o being charged to the root cgroup. This
allows a loop device to be used to trivially bypass resource limits
and other policy. This patch series fixes this gap in accounting.
A simple script to demonstrate this behavior on cgroupv2 machine:
'''
#!/bin/bash
set -e
CGROUP=/sys/fs/cgroup/test.slice
LOOP_DEV=/dev/loop0
if [[ ! -d $CGROUP ]]
then
sudo mkdir $CGROUP
fi
grep oom_kill $CGROUP/memory.events
# Set a memory limit, write more than that limit to tmpfs -> OOM kill
sudo unshare -m bash -c "
echo \$\$ > $CGROUP/cgroup.procs;
echo 0 > $CGROUP/memory.swap.max;
echo 64M > $CGROUP/memory.max;
mount -t tmpfs -o size=512m tmpfs /tmp;
dd if=/dev/zero of=/tmp/file bs=1M count=256" || true
grep oom_kill $CGROUP/memory.events
# Set a memory limit, write more than that limit through loopback
# device -> no OOM kill
sudo unshare -m bash -c "
echo \$\$ > $CGROUP/cgroup.procs;
echo 0 > $CGROUP/memory.swap.max;
echo 64M > $CGROUP/memory.max;
mount -t tmpfs -o size=512m tmpfs /tmp;
truncate -s 512m /tmp/backing_file
losetup $LOOP_DEV /tmp/backing_file
dd if=/dev/zero of=$LOOP_DEV bs=1M count=256;
losetup -D $LOOP_DEV" || true
grep oom_kill $CGROUP/memory.events
'''
Naively charging cgroups could result in priority inversions through
the single kworker thread in the case where multiple cgroups are
reading/writing to the same loop device. This patch series does some
minor modification to the loop driver so that each cgroup can make
forward progress independently to avoid this inversion.
With this patch series applied, the above script triggers OOM kills
when writing through the loop device as expected.
Dan Schatzberg (3):
loop: Use worker per cgroup instead of kworker
mm: Charge active memcg when no mm is set
loop: Charge i/o to mem and blk cg
drivers/block/loop.c | 248 ++++++++++++++++++++++++++++++-------
drivers/block/loop.h | 15 ++-
include/linux/memcontrol.h | 11 ++
kernel/cgroup/cgroup.c | 1 +
mm/filemap.c | 2 +-
mm/memcontrol.c | 15 ++-
mm/shmem.c | 4 +-
7 files changed, 242 insertions(+), 54 deletions(-)
--
2.30.2
^ permalink raw reply [flat|nested] 14+ messages in thread* [PATCH 2/3] mm: Charge active memcg when no mm is set
2021-03-16 15:36 [PATCH v10 0/3] Charge loop device i/o to issuing cgroup Dan Schatzberg
@ 2021-03-16 15:36 ` Dan Schatzberg
2021-03-16 15:50 ` Shakeel Butt
0 siblings, 1 reply; 14+ messages in thread
From: Dan Schatzberg @ 2021-03-16 15:36 UTC (permalink / raw)
Cc: Jens Axboe, Tejun Heo, Zefan Li, Johannes Weiner, Andrew Morton,
Michal Hocko, Vladimir Davydov, Hugh Dickins, Shakeel Butt,
Roman Gushchin, Muchun Song, Alex Shi, Alexander Duyck,
Chris Down, Yafang Shao, Wei Yang, open list:BLOCK LAYER,
open list, open list:CONTROL GROUP (CGROUP),
open list:MEMORY MANAGEMENT
memalloc_use_memcg() worked for kernel allocations but was silently
ignored for user pages.
This patch establishes a precedence order for who gets charged:
1. If there is a memcg associated with the page already, that memcg is
charged. This happens during swapin.
2. If an explicit mm is passed, mm->memcg is charged. This happens
during page faults, which can be triggered in remote VMs (eg gup).
3. Otherwise consult the current process context. If it has configured
a current->active_memcg, use that. Otherwise, current->mm->memcg.
Previously, if a NULL mm was passed to mem_cgroup_try_charge (case 3) it
would always charge the root cgroup. Now it looks up the current
active_memcg first (falling back to charging the root cgroup if not
set).
Signed-off-by: Dan Schatzberg <schatzberg.dan@gmail.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: Chris Down <chris@chrisdown.name>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
---
mm/filemap.c | 2 +-
mm/memcontrol.c | 14 +++++++++++---
mm/shmem.c | 4 ++--
3 files changed, 14 insertions(+), 6 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 43700480d897..5135f330f05c 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -843,7 +843,7 @@ noinline int __add_to_page_cache_locked(struct page *page,
page->index = offset;
if (!huge) {
- error = mem_cgroup_charge(page, current->mm, gfp);
+ error = mem_cgroup_charge(page, NULL, gfp);
if (error)
goto error;
charged = true;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e064ac0d850a..9a1b23ed3412 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6690,7 +6690,8 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
* @gfp_mask: reclaim mode
*
* Try to charge @page to the memcg that @mm belongs to, reclaiming
- * pages according to @gfp_mask if necessary.
+ * pages according to @gfp_mask if necessary. if @mm is NULL, try to
+ * charge to the active memcg.
*
* Returns 0 on success. Otherwise, an error code is returned.
*/
@@ -6726,8 +6727,15 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
rcu_read_unlock();
}
- if (!memcg)
- memcg = get_mem_cgroup_from_mm(mm);
+ if (!memcg) {
+ if (!mm) {
+ memcg = get_mem_cgroup_from_current();
+ if (!memcg)
+ memcg = get_mem_cgroup_from_mm(current->mm);
+ } else {
+ memcg = get_mem_cgroup_from_mm(mm);
+ }
+ }
ret = try_charge(memcg, gfp_mask, nr_pages);
if (ret)
diff --git a/mm/shmem.c b/mm/shmem.c
index b2db4ed0fbc7..353b362c370e 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1695,7 +1695,7 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
{
struct address_space *mapping = inode->i_mapping;
struct shmem_inode_info *info = SHMEM_I(inode);
- struct mm_struct *charge_mm = vma ? vma->vm_mm : current->mm;
+ struct mm_struct *charge_mm = vma ? vma->vm_mm : NULL;
struct page *page;
swp_entry_t swap;
int error;
@@ -1816,7 +1816,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
}
sbinfo = SHMEM_SB(inode->i_sb);
- charge_mm = vma ? vma->vm_mm : current->mm;
+ charge_mm = vma ? vma->vm_mm : NULL;
page = pagecache_get_page(mapping, index,
FGP_ENTRY | FGP_HEAD | FGP_LOCK, 0);
--
2.30.2
^ permalink raw reply [flat|nested] 14+ messages in thread* Re: [PATCH 2/3] mm: Charge active memcg when no mm is set
2021-03-16 15:36 ` [PATCH 2/3] mm: Charge active memcg when no mm is set Dan Schatzberg
@ 2021-03-16 15:50 ` Shakeel Butt
2021-03-16 16:02 ` Dan Schatzberg
0 siblings, 1 reply; 14+ messages in thread
From: Shakeel Butt @ 2021-03-16 15:50 UTC (permalink / raw)
To: Dan Schatzberg
Cc: Jens Axboe, Tejun Heo, Zefan Li, Johannes Weiner, Andrew Morton,
Michal Hocko, Vladimir Davydov, Hugh Dickins, Roman Gushchin,
Muchun Song, Alex Shi, Alexander Duyck, Chris Down, Yafang Shao,
Wei Yang, open list:BLOCK LAYER, open list,
open list:CONTROL GROUP (CGROUP),
open list:MEMORY MANAGEMENT
On Tue, Mar 16, 2021 at 8:37 AM Dan Schatzberg <schatzberg.dan@gmail.com> wrote:
>
> memalloc_use_memcg() worked for kernel allocations but was silently
> ignored for user pages.
set_active_memcg()
>
> This patch establishes a precedence order for who gets charged:
>
> 1. If there is a memcg associated with the page already, that memcg is
> charged. This happens during swapin.
>
> 2. If an explicit mm is passed, mm->memcg is charged. This happens
> during page faults, which can be triggered in remote VMs (eg gup).
>
> 3. Otherwise consult the current process context. If it has configured
> a current->active_memcg, use that. Otherwise, current->mm->memcg.
It's a bit more sophisticated than current->active_memcg. It has been
extended to work in interrupt context as well.
>
> Previously, if a NULL mm was passed to mem_cgroup_try_charge (case 3) it
mem_cgroup_charge()
> would always charge the root cgroup. Now it looks up the current
> active_memcg first (falling back to charging the root cgroup if not
> set).
>
> Signed-off-by: Dan Schatzberg <schatzberg.dan@gmail.com>
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> Acked-by: Tejun Heo <tj@kernel.org>
> Acked-by: Chris Down <chris@chrisdown.name>
> Reviewed-by: Shakeel Butt <shakeelb@google.com>
> ---
> mm/filemap.c | 2 +-
> mm/memcontrol.c | 14 +++++++++++---
> mm/shmem.c | 4 ++--
> 3 files changed, 14 insertions(+), 6 deletions(-)
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 43700480d897..5135f330f05c 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -843,7 +843,7 @@ noinline int __add_to_page_cache_locked(struct page *page,
> page->index = offset;
>
> if (!huge) {
> - error = mem_cgroup_charge(page, current->mm, gfp);
> + error = mem_cgroup_charge(page, NULL, gfp);
> if (error)
> goto error;
> charged = true;
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index e064ac0d850a..9a1b23ed3412 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -6690,7 +6690,8 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
> * @gfp_mask: reclaim mode
> *
> * Try to charge @page to the memcg that @mm belongs to, reclaiming
> - * pages according to @gfp_mask if necessary.
> + * pages according to @gfp_mask if necessary. if @mm is NULL, try to
> + * charge to the active memcg.
> *
> * Returns 0 on success. Otherwise, an error code is returned.
> */
> @@ -6726,8 +6727,15 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
> rcu_read_unlock();
> }
>
> - if (!memcg)
> - memcg = get_mem_cgroup_from_mm(mm);
> + if (!memcg) {
> + if (!mm) {
> + memcg = get_mem_cgroup_from_current();
> + if (!memcg)
> + memcg = get_mem_cgroup_from_mm(current->mm);
> + } else {
> + memcg = get_mem_cgroup_from_mm(mm);
> + }
> + }
You will need to rebase to the latest mm tree. This code has changed.
^ permalink raw reply [flat|nested] 14+ messages in thread* Re: [PATCH 2/3] mm: Charge active memcg when no mm is set
2021-03-16 15:50 ` Shakeel Butt
@ 2021-03-16 16:02 ` Dan Schatzberg
0 siblings, 0 replies; 14+ messages in thread
From: Dan Schatzberg @ 2021-03-16 16:02 UTC (permalink / raw)
To: Shakeel Butt
Cc: Jens Axboe, Tejun Heo, Zefan Li, Johannes Weiner, Andrew Morton,
Michal Hocko, Vladimir Davydov, Hugh Dickins, Roman Gushchin,
Muchun Song, Alex Shi, Alexander Duyck, Chris Down, Yafang Shao,
Wei Yang, open list:BLOCK LAYER, open list,
open list:CONTROL GROUP (CGROUP),
open list:MEMORY MANAGEMENT
On Tue, Mar 16, 2021 at 08:50:16AM -0700, Shakeel Butt wrote:
> You will need to rebase to the latest mm tree. This code has changed.
Thanks for the feedback, I will address these comments in another
rebase. I'll wait and see if there's any comments concerning the
loop-related patches but it sounds like this will need to go through
the mm-tree
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH v8 0/3] Charge loop device i/o to issuing cgroup
@ 2020-08-31 15:36 Dan Schatzberg
2020-08-31 15:36 ` [PATCH 2/3] mm: Charge active memcg when no mm is set Dan Schatzberg
0 siblings, 1 reply; 14+ messages in thread
From: Dan Schatzberg @ 2020-08-31 15:36 UTC (permalink / raw)
Cc: Dan Schatzberg, Jens Axboe, Tejun Heo, Li Zefan, Johannes Weiner,
Michal Hocko, Vladimir Davydov, Andrew Morton, Hugh Dickins,
Shakeel Butt, Roman Gushchin, Joonsoo Kim, Chris Down, Yang Shi,
Jakub Kicinski, open list:BLOCK LAYER, open list,
open list:CONTROL GROUP (CGROUP),
open list:CONTROL GROUP - MEMORY RESOURCE CONTROLLER (MEMCG)
Much of the discussion about this has died down. There's been a
concern raised that we could generalize infrastructure across loop,
md, etc. This may be possible, in the future, but it isn't clear to me
how this would look like. I'm inclined to fix the existing issue with
loop devices now (this is a problem we hit at FB) and address
consolidation with other cases if and when those need to be addressed.
Note that this series needs to be based off of Roman Gushchin's patch
(https://lkml.org/lkml/2020/8/21/1464) to compile.
Changes since V8:
* Rebased on top of Roman Gushchin's patch
(https://lkml.org/lkml/2020/8/21/1464) which provides the nesting
support for setting active memcg. Dropped the patch from this series
that did the same thing.
Changes since V7:
* Rebased against linus's branch
Changes since V6:
* Added separate spinlock for worker synchronization
* Minor style changes
Changes since V5:
* Fixed a missing css_put when failing to allocate a worker
* Minor style changes
Changes since V4:
Only patches 1 and 2 have changed.
* Fixed irq lock ordering bug
* Simplified loop detach
* Added support for nesting memalloc_use_memcg
Changes since V3:
* Fix race on loop device destruction and deferred worker cleanup
* Ensure charge on shmem_swapin_page works just like getpage
* Minor style changes
Changes since V2:
* Deferred destruction of workqueue items so in the common case there
is no allocation needed
Changes since V1:
* Split out and reordered patches so cgroup charging changes are
separate from kworker -> workqueue change
* Add mem_css to struct loop_cmd to simplify logic
The loop device runs all i/o to the backing file on a separate kworker
thread which results in all i/o being charged to the root cgroup. This
allows a loop device to be used to trivially bypass resource limits
and other policy. This patch series fixes this gap in accounting.
A simple script to demonstrate this behavior on cgroupv2 machine:
'''
#!/bin/bash
set -e
CGROUP=/sys/fs/cgroup/test.slice
LOOP_DEV=/dev/loop0
if [[ ! -d $CGROUP ]]
then
sudo mkdir $CGROUP
fi
grep oom_kill $CGROUP/memory.events
# Set a memory limit, write more than that limit to tmpfs -> OOM kill
sudo unshare -m bash -c "
echo \$\$ > $CGROUP/cgroup.procs;
echo 0 > $CGROUP/memory.swap.max;
echo 64M > $CGROUP/memory.max;
mount -t tmpfs -o size=512m tmpfs /tmp;
dd if=/dev/zero of=/tmp/file bs=1M count=256" || true
grep oom_kill $CGROUP/memory.events
# Set a memory limit, write more than that limit through loopback
# device -> no OOM kill
sudo unshare -m bash -c "
echo \$\$ > $CGROUP/cgroup.procs;
echo 0 > $CGROUP/memory.swap.max;
echo 64M > $CGROUP/memory.max;
mount -t tmpfs -o size=512m tmpfs /tmp;
truncate -s 512m /tmp/backing_file
losetup $LOOP_DEV /tmp/backing_file
dd if=/dev/zero of=$LOOP_DEV bs=1M count=256;
losetup -D $LOOP_DEV" || true
grep oom_kill $CGROUP/memory.events
'''
Naively charging cgroups could result in priority inversions through
the single kworker thread in the case where multiple cgroups are
reading/writing to the same loop device. This patch series does some
minor modification to the loop driver so that each cgroup can make
forward progress independently to avoid this inversion.
With this patch series applied, the above script triggers OOM kills
when writing through the loop device as expected.
Dan Schatzberg (3):
loop: Use worker per cgroup instead of kworker
mm: Charge active memcg when no mm is set
loop: Charge i/o to mem and blk cg
drivers/block/loop.c | 248 ++++++++++++++++++++++++++++++-------
drivers/block/loop.h | 15 ++-
include/linux/memcontrol.h | 6 +
kernel/cgroup/cgroup.c | 1 +
mm/memcontrol.c | 11 +-
mm/shmem.c | 4 +-
6 files changed, 232 insertions(+), 53 deletions(-)
--
2.24.1
^ permalink raw reply [flat|nested] 14+ messages in thread* [PATCH 2/3] mm: Charge active memcg when no mm is set
2020-08-31 15:36 [PATCH v8 0/3] Charge loop device i/o to issuing cgroup Dan Schatzberg
@ 2020-08-31 15:36 ` Dan Schatzberg
0 siblings, 0 replies; 14+ messages in thread
From: Dan Schatzberg @ 2020-08-31 15:36 UTC (permalink / raw)
Cc: Dan Schatzberg, Johannes Weiner, Tejun Heo, Chris Down,
Shakeel Butt, Jens Axboe, Li Zefan, Michal Hocko,
Vladimir Davydov, Andrew Morton, Hugh Dickins, Roman Gushchin,
Yang Shi, Jakub Kicinski, open list:BLOCK LAYER, open list,
open list:CONTROL GROUP (CGROUP),
open list:CONTROL GROUP - MEMORY RESOURCE CONTROLLER (MEMCG)
memalloc_use_memcg() worked for kernel allocations but was silently
ignored for user pages.
This patch establishes a precedence order for who gets charged:
1. If there is a memcg associated with the page already, that memcg is
charged. This happens during swapin.
2. If an explicit mm is passed, mm->memcg is charged. This happens
during page faults, which can be triggered in remote VMs (eg gup).
3. Otherwise consult the current process context. If it has configured
a current->active_memcg, use that. Otherwise, current->mm->memcg.
Previously, if a NULL mm was passed to mem_cgroup_try_charge (case 3) it
would always charge the root cgroup. Now it looks up the current
active_memcg first (falling back to charging the root cgroup if not
set).
Signed-off-by: Dan Schatzberg <schatzberg.dan@gmail.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: Chris Down <chris@chrisdown.name>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
---
mm/memcontrol.c | 11 ++++++++---
mm/shmem.c | 4 ++--
2 files changed, 10 insertions(+), 5 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e27ac6d79a32..877778b792da 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6676,7 +6676,8 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
* @gfp_mask: reclaim mode
*
* Try to charge @page to the memcg that @mm belongs to, reclaiming
- * pages according to @gfp_mask if necessary.
+ * pages according to @gfp_mask if necessary. if @mm is NULL, try to
+ * charge to the active memcg.
*
* Returns 0 on success. Otherwise, an error code is returned.
*/
@@ -6712,8 +6713,12 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
rcu_read_unlock();
}
- if (!memcg)
- memcg = get_mem_cgroup_from_mm(mm);
+ if (!memcg) {
+ if (!mm)
+ memcg = get_mem_cgroup_from_current();
+ else
+ memcg = get_mem_cgroup_from_mm(mm);
+ }
ret = try_charge(memcg, gfp_mask, nr_pages);
if (ret)
diff --git a/mm/shmem.c b/mm/shmem.c
index 271548ca20f3..1139f52ac4ee 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1695,7 +1695,7 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
{
struct address_space *mapping = inode->i_mapping;
struct shmem_inode_info *info = SHMEM_I(inode);
- struct mm_struct *charge_mm = vma ? vma->vm_mm : current->mm;
+ struct mm_struct *charge_mm = vma ? vma->vm_mm : NULL;
struct page *page;
swp_entry_t swap;
int error;
@@ -1809,7 +1809,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
}
sbinfo = SHMEM_SB(inode->i_sb);
- charge_mm = vma ? vma->vm_mm : current->mm;
+ charge_mm = vma ? vma->vm_mm : NULL;
page = find_lock_entry(mapping, index);
if (xa_is_value(page)) {
--
2.24.1
^ permalink raw reply [flat|nested] 14+ messages in thread
end of thread, other threads:[~2021-06-30 14:50 UTC | newest]
Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <20210610173944.1203706-1-schatzberg.dan@gmail.com>
[not found] ` <20210610173944.1203706-3-schatzberg.dan@gmail.com>
2021-06-25 14:47 ` [PATCH 2/3] mm: Charge active memcg when no mm is set Michal Koutný
[not found] ` <20210610173944.1203706-4-schatzberg.dan@gmail.com>
2021-06-25 15:01 ` [PATCH 3/3] loop: Charge i/o to mem and blk cg Michal Koutný
2021-06-28 14:17 ` Dan Schatzberg
2021-06-29 10:26 ` Michal Koutný
2021-06-29 14:03 ` Dan Schatzberg
2021-06-30 9:42 ` Michal Koutný
2021-06-30 14:49 ` Dan Schatzberg
2021-06-03 14:57 [PATCH V13 0/3] Charge loop device i/o to issuing cgroup Dan Schatzberg
[not found] ` <20210603145707.4031641-3-schatzberg.dan@gmail.com>
2021-06-03 16:53 ` [PATCH 2/3] mm: Charge active memcg when no mm is set Shakeel Butt
-- strict thread matches above, loose matches on Subject: below --
2021-04-02 19:16 [PATCH V12 0/3] Charge loop device i/o to issuing cgroup Dan Schatzberg
2021-04-02 19:16 ` [PATCH 2/3] mm: Charge active memcg when no mm is set Dan Schatzberg
2021-03-29 14:48 [PATCH V11 0/3] Charge loop device i/o to issuing cgroup Dan Schatzberg
2021-03-29 14:48 ` [PATCH 2/3] mm: Charge active memcg when no mm is set Dan Schatzberg
2021-03-16 15:36 [PATCH v10 0/3] Charge loop device i/o to issuing cgroup Dan Schatzberg
2021-03-16 15:36 ` [PATCH 2/3] mm: Charge active memcg when no mm is set Dan Schatzberg
2021-03-16 15:50 ` Shakeel Butt
2021-03-16 16:02 ` Dan Schatzberg
2020-08-31 15:36 [PATCH v8 0/3] Charge loop device i/o to issuing cgroup Dan Schatzberg
2020-08-31 15:36 ` [PATCH 2/3] mm: Charge active memcg when no mm is set Dan Schatzberg
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox