From: Waiman Long <longman@redhat.com>
To: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
Vladimir Davydov <vdavydov.dev@gmail.com>,
Andrew Morton <akpm@linux-foundation.org>,
Petr Mladek <pmladek@suse.com>,
Steven Rostedt <rostedt@goodmis.org>,
Sergey Senozhatsky <senozhatsky@chromium.org>,
Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
Rasmus Villemoes <linux@rasmusvillemoes.dk>,
linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
linux-mm@kvack.org, Ira Weiny <ira.weiny@intel.com>,
Mike Rapoport <rppt@kernel.org>,
David Rientjes <rientjes@google.com>,
Roman Gushchin <guro@fb.com>, Rafael Aquini <aquini@redhat.com>
Subject: Re: [PATCH v3 3/4] mm/page_owner: Print memcg information
Date: Wed, 2 Feb 2022 11:12:36 -0500
Message-ID: <723f0d47-5450-a403-ed90-4643910f2eb2@redhat.com>
In-Reply-To: <YfpFkVLBb0GsDFsi@dhcp22.suse.cz>

On 2/2/22 03:49, Michal Hocko wrote:
> On Tue 01-02-22 12:04:37, Waiman Long wrote:
>> On 2/1/22 05:54, Michal Hocko wrote:
>>> On Mon 31-01-22 14:23:07, Waiman Long wrote:
>>>> It was found that a number of offlined memcgs were not freed because
>>>> they were pinned by charged pages that were still present. Even
>>>> "echo 1 > /proc/sys/vm/drop_caches" wasn't able to free those pages.
>>>> These offlined but not freed memcgs tend to increase in number over
>>>> time, with the side effect that percpu memory consumption, as shown
>>>> in /proc/meminfo, also increases over time.
>>>>
>>>> In order to find out more about the pages that pin offlined memcgs,
>>>> the page_owner feature is extended to print memory cgroup
>>>> information, especially whether the cgroup is offlined or not.
>>>>
>>>> Signed-off-by: Waiman Long <longman@redhat.com>
>>>> Acked-by: David Rientjes <rientjes@google.com>
>>>> ---
>>>> mm/page_owner.c | 39 +++++++++++++++++++++++++++++++++++++++
>>>> 1 file changed, 39 insertions(+)
>>>>
>>>> diff --git a/mm/page_owner.c b/mm/page_owner.c
>>>> index 28dac73e0542..a471c74c7fe0 100644
>>>> --- a/mm/page_owner.c
>>>> +++ b/mm/page_owner.c
>>>> @@ -10,6 +10,7 @@
>>>> #include <linux/migrate.h>
>>>> #include <linux/stackdepot.h>
>>>> #include <linux/seq_file.h>
>>>> +#include <linux/memcontrol.h>
>>>> #include <linux/sched/clock.h>
>>>> #include "internal.h"
>>>> @@ -325,6 +326,42 @@ void pagetypeinfo_showmixedcount_print(struct seq_file *m,
>>>> seq_putc(m, '\n');
>>>> }
>>>> +#ifdef CONFIG_MEMCG
>>>> +/*
>>>> + * Look up the memcg information and print it out
>>>> + */
>>>> +static inline void print_page_owner_memcg(char *kbuf, size_t count, int *pret,
>>>> + struct page *page)
>>>> +{
>>>> + unsigned long memcg_data = READ_ONCE(page->memcg_data);
>>>> + struct mem_cgroup *memcg;
>>>> + bool onlined;
>>>> + char name[80];
>>>> +
>>>> + if (!memcg_data)
>>>> + return;
>>>> +
>>>> + if (memcg_data & MEMCG_DATA_OBJCGS)
>>>> + *pret += scnprintf(kbuf + *pret, count - *pret,
>>>> + "Slab cache page\n");
>>>> +
>>>> + memcg = page_memcg_check(page);
>>>> + if (!memcg)
>>>> + return;
>>>> +
>>>> + onlined = (memcg->css.flags & CSS_ONLINE);
>>>> + cgroup_name(memcg->css.cgroup, name, sizeof(name));
>>>> + *pret += scnprintf(kbuf + *pret, count - *pret,
>>>> + "Charged %sto %smemcg %s\n",
>>>> + PageMemcgKmem(page) ? "(via objcg) " : "",
>>>> + onlined ? "" : "offlined ",
>>>> + name);
>>> I have asked in the previous version already, but what makes the
>>> memcg stable (why it cannot go away and be reallocated for something
>>> else) while you are trying to get its name?
>> The memcg is not going away as long as the page isn't freed, unless it
>> is only indirectly connected via objcg. Of course, there can be a race
>> where the page is freed while the page_owner information is being
>> displayed.
> Right. And that means that cgroup_name() can go off the rails and
> wander through memory, correct?
>
>> One solution is to add a simple bit lock to each page_owner structure
>> and acquire the lock when it is being written to or read from.
> I do not really see how a bit lock could prevent memcg from going away.
> On the other hand I think RCU read lock should be sufficient to keep the
> memcg from going away completely.

Using rcu_read_lock() is also what I have been thinking of doing, so I
will update the patch to add that for safety.
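
Roughly what I have in mind is the following (an untested sketch based
on the v3 hunk above, with memcg_data read after taking the RCU read
lock instead of at declaration time; the actual patch may differ in
detail):

	rcu_read_lock();
	memcg_data = READ_ONCE(page->memcg_data);
	if (!memcg_data)
		goto out_unlock;

	if (memcg_data & MEMCG_DATA_OBJCGS)
		*pret += scnprintf(kbuf + *pret, count - *pret,
				   "Slab cache page\n");

	memcg = page_memcg_check(page);
	if (!memcg)
		goto out_unlock;

	/*
	 * The online flag and the cgroup name are read under
	 * rcu_read_lock() so that the memcg and its css cannot be freed
	 * and reused while cgroup_name() copies the name out.
	 */
	onlined = (memcg->css.flags & CSS_ONLINE);
	cgroup_name(memcg->css.cgroup, name, sizeof(name));
	*pret += scnprintf(kbuf + *pret, count - *pret,
			   "Charged %sto %smemcg %s\n",
			   PageMemcgKmem(page) ? "(via objcg) " : "",
			   onlined ? "" : "offlined ",
			   name);
out_unlock:
	rcu_read_unlock();

This only guarantees that the memory being read is still valid, not
that the printed information is up to date, but some staleness should
be acceptable for a debugging interface.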
>
>> Anyway, a lot of these debugging aids or tools don't eliminate all the
>> race conditions that affect the accuracy of the displayed information.
>> I can add a patch to eliminate this direct memcg race if you think it
>> is necessary.
> I do not mind inaccurate information. That is natural, but reading
> through freed memory can be really harmful. So this really needs to be
> sorted out.

Thanks for the review.
Cheers,
Longman