From: Michal Hocko <mhocko@suse.com>
To: Marinko Catovic <marinko.catovic@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>,
	linux-mm@kvack.org, Christopher Lameter <cl@linux.com>
Subject: Re: Caching/buffers become useless after some time
Date: Fri, 2 Nov 2018 12:49:06 +0100
Message-ID: <20181102114341.GB28039@dhcp22.suse.cz>
In-Reply-To: <CADF2uSq+wP8aF=y=MgO4EHjk=ThXY22JMx81zNPy1kzheS6f3w@mail.gmail.com>

On Fri 02-11-18 12:31:09, Marinko Catovic wrote:
> Am Fr., 2. Nov. 2018 um 09:05 Uhr schrieb Michal Hocko <mhocko@suse.com>:
> >
> > On Thu 01-11-18 23:46:27, Marinko Catovic wrote:
> > > Am Do., 1. Nov. 2018 um 14:23 Uhr schrieb Michal Hocko <mhocko@suse.com>:
> > > >
> > > > On Wed 31-10-18 20:21:42, Marinko Catovic wrote:
> > > > > Am Mi., 31. Okt. 2018 um 18:01 Uhr schrieb Michal Hocko <mhocko@suse.com>:
> > > > > >
> > > > > > On Wed 31-10-18 15:53:44, Marinko Catovic wrote:
> > > > > > [...]
> > > > > > > Well, caching any of the find/du operations is not necessary imho
> > > > > > > anyway, since walking over all these millions of files in that time
> > > > > > > period is really not worth caching at all - if there is a way, as
> > > > > > > you mentioned, to limit those commands, that would be great.
> > > > > >
> > > > > > One possible way would be to run this find/du workload inside a memory
> > > > > > cgroup with a high limit set to something reasonable (that will likely
> > > > > > require some tuning). I am not 100% sure it will behave well for a
> > > > > > metadata-mostly workload with almost no pagecache to reclaim, so it
> > > > > > might turn out to cause other issues. But it is definitely worth trying.
> > > > >
> > > > > hm, how would that be possible? Every user has their own UID, and the
> > > > > group cannot be a factor either, since this memory restriction would
> > > > > then apply to all users; find/du run as UID 0 to have access to
> > > > > everyone's data.
> > > >
> > > > I thought you had dedicated script(s) to do all the stats. All you
> > > > need is to run those particular script(s) within a memory cgroup.
> > >
> > > yes, that is the case - the scripts run as root since, as mentioned,
> > > all users have their own UIDs and specific groups, so one needs root
> > > privileges to access everything.
> > > My question was how to limit this using cgroups, since afaik the
> > > limits there apply to given UIDs/GIDs.
> >
> > No. Limits apply to a specific memory cgroup and all tasks associated
> > with it. There are many tutorials on how to configure/use memory
> > cgroups, or cgroups in general. If I were you I would simply do this:
> >
> > mount -t cgroup -o memory none $SOME_MOUNTPOINT
> > mkdir $SOME_MOUNTPOINT/A
> > echo 500M > $SOME_MOUNTPOINT/A/memory.limit_in_bytes
> >
> > Your script then just does:
> > echo $$ > $SOME_MOUNTPOINT/A/tasks
> > # rest of your script
> > echo 1 > $SOME_MOUNTPOINT/A/memory.force_empty
> >
> > That should drop the memory cached on behalf of memcg A, including
> > the metadata.
> 
> well, that's an interesting approach; I did not know it was possible to
> assign cgroups to PIDs without additionally defining a UID/GID
> explicitly. So memory.force_empty basically acts like echo 3 >
> drop_caches, but only for the memory charged to the PIDs and their
> children/forks on the A/tasks list, true?

Yup
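
To put it all together, the whole nightly job could look something like
this. This is an untested sketch; the mount point, the limit and the
du/find invocations are placeholders you will have to adapt:

#!/bin/sh
# placeholder mount point; any empty directory will do
SOME_MOUNTPOINT=/sys/fs/cgroup/memory_stats
mount -t cgroup -o memory none $SOME_MOUNTPOINT
mkdir -p $SOME_MOUNTPOINT/A
echo 500M > $SOME_MOUNTPOINT/A/memory.limit_in_bytes

# charge this shell and everything it forks to memcg A
echo $$ > $SOME_MOUNTPOINT/A/tasks

# the actual stats collection goes here
du -s /home/*

# drop everything cached on behalf of A, including the metadata
echo 1 > $SOME_MOUNTPOINT/A/memory.force_empty

If the memory controller is already mounted (e.g. by systemd under
/sys/fs/cgroup/memory), skip the mount and create A in the existing
hierarchy instead.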
 
> I'll give it a try with the nightly du/find jobs, thank you!

I am still a bit curious how that will work out on a metadata-mostly
workload, because we usually have quite a lot of memory on the normal
LRUs to reclaim (page cache, anonymous memory) and slab reclaim is only
there to balance kmem. But let's see. Watch for memcg OOM killer
invocations if the reclaim is not sufficient.
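
Whether the memcg OOM killer has fired can be seen in memory.oom_control
and in the kernel log; e.g. (assuming the cgroup set up as above):

cat $SOME_MOUNTPOINT/A/memory.oom_control
dmesg | grep -i "memory cgroup out of memory"

The oom_kill counter in oom_control is only present on newer kernels,
but the "Memory cgroup out of memory" log lines are there either way.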

> > [...]
> > > > > As I understand it, everyone would have this issue when extensive
> > > > > walking over files is performed; basically any `cloud`, shared hosting
> > > > > or storage system should experience it, true?
> > > >
> > > > Not really. You also need a high demand for high-order allocations,
> > > > which require contiguous physical memory. Maybe there is something in
> > > > your workload triggering this particular pattern.
> > >
> > > I would not even know what triggers it, nor what it has to do with
> > > high order; I'm just running find/du, nothing special I'd say.
> >
> > Please note that find/du is mostly a fragmentation generator. It
> > seems there is other system activity which requires those high-order
> > allocations.
> 
> any idea how to find out what that might be? I'd really have no idea.
> I also wonder why this was never an issue with 3.x.
> find uses regex patterns; that's the only thing that may be unusual.

The allocation tracepoint includes the stack trace, so that might help.
It is quite a lot of work to pinpoint and find a pattern, though, and
way beyond the time I can devote to this, unfortunately. It might be
some driver asking for more, or even the core kernel itself being
hungrier for high-order memory.
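
If you want to give it a try anyway, something along these lines
(untested; assumes tracefs is available under /sys/kernel/debug) would
record a stack trace for every order > 0 page allocation:

cd /sys/kernel/debug/tracing
echo 'stacktrace if order > 0' > events/kmem/mm_page_alloc/trigger
echo 1 > events/kmem/mm_page_alloc/enable
cat trace_pipe > /tmp/mm_page_alloc.log

Expect a lot of output; raising the filter to order > 2 or so narrows it
down to the allocations which really need larger contiguous blocks.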

-- 
Michal Hocko
SUSE Labs
