From: Glauber Costa <glommer@parallels.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Anton Vorontsov <anton.vorontsov@linaro.org>,
David Rientjes <rientjes@google.com>,
Pekka Enberg <penberg@kernel.org>, Mel Gorman <mgorman@suse.de>,
Michal Hocko <mhocko@suse.cz>,
"Kirill A. Shutemov" <kirill@shutemov.name>,
Luiz Capitulino <lcapitulino@redhat.com>,
Greg Thelen <gthelen@google.com>,
Leonid Moiseichuk <leonid.moiseichuk@nokia.com>,
KOSAKI Motohiro <kosaki.motohiro@gmail.com>,
Minchan Kim <minchan@kernel.org>,
Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>,
John Stultz <john.stultz@linaro.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
linaro-kernel@lists.linaro.org, patches@linaro.org,
kernel-team@android.com
Subject: Re: [PATCH 1/2] Add mempressure cgroup
Date: Wed, 9 Jan 2013 18:10:02 +0400
Message-ID: <50ED7A3A.2030700@parallels.com>
In-Reply-To: <20130108134424.0423dc1f.akpm@linux-foundation.org>
On 01/09/2013 01:44 AM, Andrew Morton wrote:
> On Fri, 4 Jan 2013 00:29:11 -0800
> Anton Vorontsov <anton.vorontsov@linaro.org> wrote:
>
>> This commit implements David Rientjes' idea of mempressure cgroup.
>>
>> The main characteristics are the same as what I've tried to add to the
>> vmevent API; internally, it uses Mel Gorman's idea of a scanned/reclaimed
>> ratio for the pressure index calculation. But we don't expose the index to
>> userland. Instead, there are three levels of pressure:
>>
>> o low (just reclaiming, e.g. caches are draining);
>> o medium (allocation cost becomes high, e.g. swapping);
>> o oom (about to oom very soon).
>>
>> The rationale for exposing levels rather than the raw pressure index is
>> described here: http://lkml.org/lkml/2012/11/16/675
>>
>> A task can be in cpuset, memcg and mempressure cgroups at the same time,
>> so by rearranging tasks it is possible to watch pressure from a specific
>> source (i.e. caused by a cpuset and/or memcg).
>>
>> Note that while this adds cgroups support, the code is well separated,
>> and eventually we might add a lightweight, non-cgroups API, i.e. vmevent.
>> But that is another story.
>>
>
> I'd have thought that it's pretty important to offer this feature to
> non-cgroups setups. Restricting it to cgroups-only seems a large
> limitation.
>
Why is it so, Andrew?
When we talk about "cgroups", we are not necessarily talking about the
whole beast, with all controllers enabled. Much less are we talking
about hierarchies being created and tasks being placed in them.
It's an interface only. And since all controllers always have a
special "root" cgroup, this applies to every task in the system just the
same. At the end of the day, if we have something like
CONFIG_MEMPRESSURE that selects CONFIG_CGROUPS, the user needs to do the
same thing to actually turn on the functionality: switch a config
option. It is not more expensive, and it doesn't bring in anything extra
either.
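As an illustration, the dependency could be expressed with an ordinary
Kconfig entry (a hypothetical sketch; CONFIG_MEMPRESSURE is the symbol
name used above, and the prompt/help text is made up here):

```
# Hypothetical Kconfig fragment: enabling the feature pulls in cgroup
# core support automatically, so the user flips exactly one option.
config MEMPRESSURE
	bool "Memory pressure notification cgroup"
	select CGROUPS
	help
	  Provides low/medium/oom memory pressure notifications
	  through a cgroup interface.
```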
To actually use it, one needs to mount the filesystem, and write to a
file. Nothing else.
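Concretely, the whole setup could look roughly like this (a hypothetical
sketch: the controller name "mempressure" and the mempressure.level file
are assumptions based on this patch discussion, and the commands need
root on a kernel carrying the patch):

```sh
# Mount the mempressure controller. No hierarchy needs to be created
# and no tasks need to be moved; the root group covers the whole system.
mkdir -p /sys/fs/cgroup/mempressure
mount -t cgroup -o mempressure none /sys/fs/cgroup/mempressure

# Read the current pressure level (low / medium / oom) for the
# system-wide root group.
cat /sys/fs/cgroup/mempressure/mempressure.level
```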
What is it that drives this opposition toward a cgroup-only interface?
Is it about the interface, or the underlying machinery?
--