From: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
cgroups@vger.kernel.org
Cc: Michal Hocko <mhocko@suse.com>, Roman Gushchin <guro@fb.com>,
Johannes Weiner <hannes@cmpxchg.org>
Subject: [PATCH v1 0/7] mm/memcontrol: recharge mlocked pages
Date: Wed, 04 Sep 2019 16:53:08 +0300 [thread overview]
Message-ID: <156760509382.6560.17364256340940314860.stgit@buzz> (raw)
Currently mlock keeps pages charged to the cgroup where they were first
accounted. This way one container can affect another if they share file
cache. A typical case: a file is written (downloaded) in one container
and then mlocked in another. After that the first container cannot get
rid of the cache. A removed cgroup also stays pinned by these mlocked pages.
This patchset implements recharging pages to the cgroup of the mlock user.
There are three cases:
* recharging at first mlock
* recharging at munlock to any remaining mlock
* recharging at 'culling' in reclaimer to any existing mlock
To keep things simple, recharging ignores the memory limit. Afterwards
memory usage may temporarily exceed the limit, but the cgroup will reclaim
memory later or trigger oom, which is a valid outcome when somebody mlocks
too much.
---
Konstantin Khlebnikov (7):
mm/memcontrol: move locking page out of mem_cgroup_move_account
mm/memcontrol: add mem_cgroup_recharge
mm/mlock: add vma argument for mlock_vma_page()
mm/mlock: recharge memory accounting to first mlock user
mm/mlock: recharge memory accounting to second mlock user at munlock
mm/vmscan: allow changing page memory cgroup during reclaim
mm/mlock: recharge mlocked pages at culling by vmscan
Documentation/admin-guide/cgroup-v1/memory.rst | 5 +
include/linux/memcontrol.h | 9 ++
include/linux/rmap.h | 3 -
mm/gup.c | 2
mm/huge_memory.c | 4 -
mm/internal.h | 6 +
mm/ksm.c | 2
mm/memcontrol.c | 104 ++++++++++++++++--------
mm/migrate.c | 2
mm/mlock.c | 14 +++
mm/rmap.c | 5 +
mm/vmscan.c | 17 ++--
12 files changed, 121 insertions(+), 52 deletions(-)
--
Signature
Thread overview: 12+ messages
2019-09-04 13:53 Konstantin Khlebnikov [this message]
2019-09-04 13:53 ` [PATCH v1 1/7] mm/memcontrol: move locking page out of mem_cgroup_move_account Konstantin Khlebnikov
2019-09-04 13:53 ` [PATCH v1 2/7] mm/memcontrol: add mem_cgroup_recharge Konstantin Khlebnikov
2019-09-04 13:53 ` [PATCH v1 3/7] mm/mlock: add vma argument for mlock_vma_page() Konstantin Khlebnikov
2019-09-04 13:53 ` [PATCH v1 4/7] mm/mlock: recharge memory accounting to first mlock user Konstantin Khlebnikov
2019-09-04 13:53 ` [PATCH v1 5/7] mm/mlock: recharge memory accounting to second mlock user at munlock Konstantin Khlebnikov
2019-09-04 13:53 ` [PATCH v1 6/7] mm/vmscan: allow changing page memory cgroup during reclaim Konstantin Khlebnikov
2019-09-04 13:53 ` [PATCH v1 7/7] mm/mlock: recharge mlocked pages at culling by vmscan Konstantin Khlebnikov
2019-09-04 14:37 ` [PATCH v1 0/7] mm/memcontrol: recharge mlocked pages Michal Hocko
2019-09-05 7:38 ` Konstantin Khlebnikov
2019-09-05 23:11 ` Roman Gushchin
2019-09-04 23:13 ` Roman Gushchin