From: Michal Hocko <mhocko@kernel.org>
To: Stefan Priebe - Profihost AG <s.priebe@profihost.ag>
Cc: cgroups@vger.kernel.org,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
Johannes Weiner <hannes@cmpxchg.org>,
"n.fahldieck@profihost.ag" <n.fahldieck@profihost.ag>,
Daniel Aberger - Profihost AG <d.aberger@profihost.ag>,
p.kramme@profihost.ag
Subject: Re: No memory reclaim while reaching MemoryHigh
Date: Fri, 26 Jul 2019 09:45:57 +0200
Message-ID: <20190726074557.GF6142@dhcp22.suse.cz>
In-Reply-To: <028ff462-b547-b9a5-bdb0-e0de3a884afd@profihost.ag>
On Thu 25-07-19 23:37:14, Stefan Priebe - Profihost AG wrote:
> Hi Michal,
>
> On 25.07.19 at 16:01, Michal Hocko wrote:
> > On Thu 25-07-19 15:17:17, Stefan Priebe - Profihost AG wrote:
> >> Hello all,
> >>
> >> I hope I added the right list and people - if I missed someone, I would
> >> be happy to know.
> >>
> >> While using kernel 4.19.55 and cgroup v2, I set a MemoryHigh value for a
> >> varnish service.
> >>
> >> It happens that the varnish.service cgroup reaches its MemoryHigh value
> >> and stops working due to throttling.
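
For reference, such a limit is usually applied through a systemd drop-in
or a set-property call; a minimal sketch - the drop-in path and the 2G
value below are placeholders only, not taken from this report:

  # /etc/systemd/system/varnish.service.d/memory.conf
  [Service]
  MemoryAccounting=yes
  MemoryHigh=2G

or equivalently at runtime:

  systemctl set-property varnish.service MemoryHigh=2G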
> >
> > What do you mean by "stops working"? Does it mean that the process is
> > stuck in the kernel doing the reclaim? /proc/<pid>/stack would tell you
> > what the kernel is executing for the process.
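
FWIW, collecting a handful of samples can be as simple as (run as root,
with <pid> being the stuck varnish process; the count and interval are
arbitrary):

  for i in $(seq 10); do date; cat /proc/<pid>/stack; echo; sleep 1; done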
>
> The service no longer responds to HTTP requests.
>
> The stack switches in this case between:
> [<0>] io_schedule+0x12/0x40
> [<0>] __lock_page_or_retry+0x1e7/0x4e0
> [<0>] filemap_fault+0x42f/0x830
> [<0>] __xfs_filemap_fault.constprop.11+0x49/0x120
> [<0>] __do_fault+0x57/0x108
> [<0>] __handle_mm_fault+0x949/0xef0
> [<0>] handle_mm_fault+0xfc/0x1f0
> [<0>] __do_page_fault+0x24a/0x450
> [<0>] do_page_fault+0x32/0x110
> [<0>] async_page_fault+0x1e/0x30
> [<0>] 0xffffffffffffffff
>
> and
>
> [<0>] poll_schedule_timeout.constprop.13+0x42/0x70
> [<0>] do_sys_poll+0x51e/0x5f0
> [<0>] __x64_sys_poll+0xe7/0x130
> [<0>] do_syscall_64+0x5b/0x170
> [<0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
> [<0>] 0xffffffffffffffff

Neither of the two seems to be memcg related. Have you tried to get
several snapshots and see whether the backtrace is stable? strace would
also tell you whether your application is stuck in a single syscall or
whether it is just progressing very slowly (the -ttt parameter should
give you timing).
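
E.g. something along these lines (<pid> again being the varnish worker,
the output path is just an example):

  strace -f -ttt -o /tmp/varnish.strace -p <pid>

-f follows the worker threads and -ttt prefixes each line with a
microsecond timestamp, so gaps between syscalls become visible.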
--
Michal Hocko
SUSE Labs