From: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: Michal Hocko <mhocko@suse.cz>
Cc: lsf-pc@lists.linux-foundation.org, linux-mm@kvack.org
Subject: Re: [LSF/MM TOPIC] Few things I would like to discuss
Date: Wed, 13 Feb 2013 09:39:17 +0900 [thread overview]
Message-ID: <511AE0B5.4020502@jp.fujitsu.com> (raw)
In-Reply-To: <20130205123515.GA26229@dhcp22.suse.cz>
(2013/02/05 21:35), Michal Hocko wrote:
> Hi,
> I would like to discuss the following topics:
I missed the deadline :(
> * memcg oom should be more sensitive to locked contexts because now
> it is possible that a task is sitting in mem_cgroup_handle_oom holding
> some other lock (e.g. i_mutex or mmap_sem) up the chain, which might
> block another task from terminating on OOM, so we basically end up in a
> deadlock. Almost all memcg charges happen from the page fault path,
> where we can retry, but one class of them happens from
> add_to_page_cache_locked and that is a bit more problematic.
Yes, this is a topic that should be discussed.
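For reference, the deadlock looks roughly like this (a simplified sequence
from memory, not the exact call chain):

        Task A (charging)                  Task B (OOM victim)
        -----------------                  -------------------
        mutex_lock(&inode->i_mutex)
        add_to_page_cache_locked()
          mem_cgroup_cache_charge()
            hits the memcg limit and
            waits in mem_cgroup_handle_oom()
            for someone to free memory     mutex_lock(&inode->i_mutex)
                                             blocks behind Task A, so it
                                             can never handle SIGKILL,
                                             exit, and uncharge its memory

Neither task can make progress. The page fault path can back out and retry,
but the add_to_page_cache_locked() caller already holds the lock.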
> * memcg doesn't use PF_MEMALLOC for the targeted reclaim code paths
> which asks for stack overflows (and we have already seen those -
> e.g. from the xfs pageout paths). The primary problem with using the
> flag is that there is no dirty page throttling and no writeback kicked
> off for memcg, so if we didn't write back from reclaim the caller could
> be blocked forever. Memcg dirty accounting is shaping up slowly, so we
> should start thinking about the writeback as well.
Sure.
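To make the problem concrete, this is roughly how the flag works today
(from memory, details may differ):

        /* global direct reclaim, mm/page_alloc.c */
        current->flags |= PF_MEMALLOC;
        progress = try_to_free_pages(zonelist, order, gfp_mask, nodemask);
        current->flags &= ~PF_MEMALLOC;

        /* fs/xfs/xfs_aops.c, xfs_vm_writepage(): refuse writeback from
         * the deep direct reclaim stack */
        if ((current->flags & (PF_MEMALLOC | PF_KSWAPD)) == PF_MEMALLOC)
                goto redirty;

Memcg targeted reclaim goes through try_to_free_mem_cgroup_pages() without
setting PF_MEMALLOC, so the filesystem cannot tell it is called from reclaim
and writes the page out on an already deep stack. And as you say, simply
setting the flag means nobody writes back the memcg's dirty pages, so the
charger could loop forever until we have memcg dirty throttling/writeback.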
> * While we are at the memcg dirty pages accounting
> (https://lkml.org/lkml/2012/12/25/95). It turned out that the locking
> is really nasty (https://lkml.org/lkml/2013/1/2/48). The locking
> should be reworked without incurring any penalty on the fast path.
> This sounds really challenging.
I'd like to fix the locking problem.
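Just to illustrate where the fast-path cost comes from, the current page
stat protocol looks roughly like this (from memory, names may not be exact):

        /* e.g. mm/rmap.c, page_add_file_rmap() */
        bool locked;
        unsigned long flags;

        mem_cgroup_begin_update_page_stat(page, &locked, &flags);
        if (atomic_inc_and_test(&page->_mapcount)) {
                __inc_zone_page_state(page, NR_FILE_MAPPED);
                mem_cgroup_inc_page_stat(page, MEMCG_NR_FILE_MAPPED);
        }
        mem_cgroup_end_update_page_stat(page, &locked, &flags);

The begin/end pair keeps the stat consistent against moving the task to
another memcg. Putting the same protocol around every dirty and writeback
accounting site is what makes the locking nasty, so the rework needs
something cheaper for the common (no account-move) case.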
> * I would really like to finally settle down on something wrt. soft
> limit reclaim. I am pretty sure Ying would like to discuss this topic
> as well so I will not go into details about it. I will post what I
> have before the conference so that we can discuss her approach and
> what was the primary disagreement the last time. I can go into more
> details as a follow-up if people are interested, of course.
> * Finally I would like to collect feedback for the mm git tree.
>
Other points related to memcg are ...
+ kernel memory accounting + per-zone-per-memcg inode/dentry caching.
Glauber is trying to account inode/dentry in the kmem controller. To do that,
I think inodes and dentries should be handled per zone first. IIUC, there is
ongoing work, but it has not been merged yet.
+ overheads caused by memcg
Mel explained memcg's big overheads at last year's MM summit. AFAIK, we have
not made any progress on that. If someone has detailed data, please share it
again...
Thanks,
-Kame
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org