From: Tejun Heo <tj@kernel.org>
To: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
cgroups@vger.kernel.org, Michal Hocko <mhocko@kernel.org>,
Vladimir Davydov <vdavydov.dev@gmail.com>,
Roman Gushchin <guro@fb.com>,
Johannes Weiner <hannes@cmpxchg.org>
Subject: Re: [PATCH RFC 2/3] proc/kpagecgroup: report also inode numbers of offline cgroups
Date: Wed, 22 Aug 2018 07:58:46 -0700
Message-ID: <20180822145846.GT3978217@devbig004.ftw2.facebook.com>
In-Reply-To: <153414348994.737150.10057219558779418929.stgit@buzz>
Hello,
On Mon, Aug 13, 2018 at 09:58:10AM +0300, Konstantin Khlebnikov wrote:
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 19a4348974a4..7ef6ea9d5e4a 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -333,6 +333,7 @@ struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page)
> /**
> * page_cgroup_ino - return inode number of the memcg a page is charged to
> * @page: the page
> + * @online: return closest online ancestor
> *
> * Look up the closest online ancestor of the memory cgroup @page is charged to
> * and return its inode number or 0 if @page is not charged to any cgroup. It
> @@ -343,14 +344,14 @@ struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page)
> * after page_cgroup_ino() returns, so it only should be used by callers that
> * do not care (such as procfs interfaces).
> */
> -ino_t page_cgroup_ino(struct page *page)
> +ino_t page_cgroup_ino(struct page *page, bool online)
> {
> struct mem_cgroup *memcg;
> unsigned long ino = 0;
>
> rcu_read_lock();
> memcg = READ_ONCE(page->mem_cgroup);
> - while (memcg && !(memcg->css.flags & CSS_ONLINE))
> + while (memcg && online && !(memcg->css.flags & CSS_ONLINE))
> memcg = parent_mem_cgroup(memcg);
> if (memcg)
> ino = cgroup_ino(memcg->css.cgroup);
We do pin the ino until the cgroup is actually released now, but that's an
implementation detail which may change in the future, so I'm not sure
this is a good idea. Can you instead use the 64-bit filehandle exposed
by kernfs? That's currently also based on the ino (plus a generation
counter), but it's guaranteed to stay unique per cgroup, and you can
easily get back to the cgroup from the fh too.
Thanks.
--
tejun
Thread overview: 8+ messages
2018-08-13 6:58 [PATCH RFC 1/3] cgroup: list all subsystem states in debugfs files Konstantin Khlebnikov
2018-08-13 6:58 ` [PATCH RFC 2/3] proc/kpagecgroup: report also inode numbers of offline cgroups Konstantin Khlebnikov
2018-08-22 14:58 ` Tejun Heo [this message]
2018-08-13 6:58 ` [PATCH RFC 3/3] tools/vm/page-types: add flag for showing inodes " Konstantin Khlebnikov
2018-08-13 13:48 ` [PATCH RFC 1/3] cgroup: list all subsystem states in debugfs files Tejun Heo
2018-08-13 17:11 ` Johannes Weiner
2018-08-13 17:53 ` Roman Gushchin
2018-08-14 9:40 ` Konstantin Khlebnikov