linux-mm.kvack.org archive mirror
From: Yafang Shao <laoar.shao@gmail.com>
To: Michal Hocko <mhocko@suse.com>, longman@redhat.com
Cc: Johannes Weiner <hannes@cmpxchg.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	 Linux MM <linux-mm@kvack.org>
Subject: Re: [PATCH] mm, memcg: do full scan initially in force_empty
Date: Mon, 3 Aug 2020 22:26:10 +0800	[thread overview]
Message-ID: <CALOAHbBFCtTPXK-VwT1uWG7QF-STz6S988=+Ka7FvTn6swtnoA@mail.gmail.com> (raw)
In-Reply-To: <CALOAHbAACOODfWRUKS24K-j2Z0Lr1S-HwqjuBWoBH8FFudEgcw@mail.gmail.com>

On Mon, Aug 3, 2020 at 10:18 PM Yafang Shao <laoar.shao@gmail.com> wrote:
>
> On Mon, Aug 3, 2020 at 9:56 PM Michal Hocko <mhocko@suse.com> wrote:
> >
> > On Mon 03-08-20 21:20:44, Yafang Shao wrote:
> > > On Mon, Aug 3, 2020 at 6:12 PM Michal Hocko <mhocko@suse.com> wrote:
> > > >
> > > > On Fri 31-07-20 09:50:04, Yafang Shao wrote:
> > > > > On Thu, Jul 30, 2020 at 7:26 PM Michal Hocko <mhocko@suse.com> wrote:
> > > > > >
> > > > > > On Tue 28-07-20 03:40:32, Yafang Shao wrote:
> > > > > > > Sometimes we use memory.force_empty to drop pages in a memcg to work
> > > > > > > around memory pressure issues. When we use force_empty, we want the
> > > > > > > pages to be reclaimed ASAP; however, force_empty reclaims pages like a
> > > > > > > regular reclaimer, scanning the page cache LRUs starting from
> > > > > > > DEF_PRIORITY and only dropping to priority 0 for a full scan at the
> > > > > > > end. That is a waste of time; we'd better do a full scan initially in
> > > > > > > force_empty.
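
[ For context: the reclaim priority determines what fraction of each LRU
a single pass scans. A paraphrased sketch of the relevant computation in
get_scan_count() in mm/vmscan.c (~v5.8 era; not verbatim source):

        /* each pass scans roughly lru_size >> priority pages per LRU */
        scan = lruvec_lru_size(lruvec, lru, sc->reclaim_idx) >> sc->priority;

With DEF_PRIORITY == 12, the first pass looks at only 1/4096 of each
LRU, and the window doubles each time the priority drops toward 0. ]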
> > > > > >
> > > > > > Do you have any numbers please?
> > > > > >
> > > > >
> > > > > Unfortunately the numbers don't show an obvious improvement; the elapsed
> > > > > time is directly proportional to the total number of pages to be scanned.
> > > >
> > > > Your changelog claims an optimization, and that should be backed by some
> > > > numbers. It is true that reclaim at a higher priority behaves slightly
> > > > and subtly differently, but that calls for even more detail in the
> > > > changelog.
> > > >
> > >
> > > With the additional change below (nr_to_reclaim is also changed), the
> > > elapsed time of force_empty can be reduced by 10%.
> > >
> > > @@ -3208,6 +3211,7 @@ static inline bool memcg_has_children(struct mem_cgroup *memcg)
> > >  static int mem_cgroup_force_empty(struct mem_cgroup *memcg)
> > >  {
> > >         int nr_retries = MEM_CGROUP_RECLAIM_RETRIES;
> > > +       unsigned long size;
> > >
> > >         /* we call try-to-free pages for make this cgroup empty */
> > >         lru_add_drain_all();
> > > @@ -3215,14 +3219,15 @@ static int mem_cgroup_force_empty(struct mem_cgroup *memcg)
> > >         drain_all_stock(memcg);
> > >         /* try to free all pages in this cgroup */
> > > -       while (nr_retries && page_counter_read(&memcg->memory)) {
> > > +       while (nr_retries && (size = page_counter_read(&memcg->memory))) {
> > >                 int progress;
> > >
> > >                 if (signal_pending(current))
> > >                         return -EINTR;
> > > -               progress = try_to_free_mem_cgroup_pages(memcg, 1,
> > > -                                                       GFP_KERNEL, true);
> > > +               progress = try_to_free_mem_cgroup_pages(memcg, size,
> > > +                                                       GFP_KERNEL, true,
> > > +                                                       0);
> >
> > Have you tried this change without changing the reclaim priority?
> >
>
> I tried it again. It seems the improvement is mostly due to the change of
> nr_to_reclaim rather than the reclaim priority:
>
> -               progress = try_to_free_mem_cgroup_pages(memcg, 1,
> +               progress = try_to_free_mem_cgroup_pages(memcg, size,
>
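
[ For the record, nr_pages feeds straight into the reclaim target; a
paraphrased sketch of try_to_free_mem_cgroup_pages() in mm/vmscan.c
(~v5.8 era; abridged, not verbatim):

        struct scan_control sc = {
                /* nr_pages == 1 is still rounded up to SWAP_CLUSTER_MAX (32) */
                .nr_to_reclaim = max(nr_pages, SWAP_CLUSTER_MAX),
                .reclaim_idx = MAX_NR_ZONES - 1,
                .target_mem_cgroup = memcg,
                .priority = DEF_PRIORITY,
                .may_writepage = !laptop_mode,
                .may_unmap = 1,
                .may_swap = may_swap,
        };

Reclaim stops once sc.nr_reclaimed reaches sc.nr_to_reclaim, so with
nr_pages == 1 each call frees only ~32 pages and the force_empty loop
keeps re-entering reclaim, while passing size lets a single call drain
the whole counter. ]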
>
> > > Below are the numbers for a 16G memcg filled with clean pagecache.
> > > Without these changes,
> > > $ time echo 1 > /sys/fs/cgroup/memory/foo/memory.force_empty
> > > real    0m2.247s
> > > user    0m0.000s
> > > sys     0m1.722s
> > >
> > > With these changes,
> > > $ time echo 1 > /sys/fs/cgroup/memory/foo/memory.force_empty
> > > real    0m2.053s
> > > user    0m0.000s
> > > sys     0m1.529s
> > >
> > > But I'm not sure whether we should make this improvement, because
> > > force_empty is not a critical path.
> >
> > Well, an isolated change to force_empty would be more acceptable but it
> > is worth noting that a very large reclaim target might affect the
> > userspace triggering this path because it will potentially increase
> > latency to process any signals. I do not expect this to be a huge
> > problem in practice because even reclaim for a smaller target can take
> > quite long if the memory is not really reclaimable and it has to take
> > the full world scan. Moreover, most userspace will simply do
> > echo 1 > $MEMCG_PAGE/force_empty
> > and only care about killing that if it takes too long.
> >
>
> We may do it from a script that force-empties many memcgs at the same time.
> Of course we could measure how long each force_empty takes, but that
> would be complicated.
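
To illustrate what I mean by such a script, here is a rough, hypothetical
sketch (cgroup-v1 paths as used elsewhere in this thread; the group names
"foo" and "bar" are illustrative only):

#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static double now_sec(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
        const char *grps[] = { "foo", "bar" };  /* illustrative names */
        char path[256];

        for (unsigned int i = 0; i < sizeof(grps) / sizeof(grps[0]); i++) {
                snprintf(path, sizeof(path),
                         "/sys/fs/cgroup/memory/%s/memory.force_empty",
                         grps[i]);

                int fd = open(path, O_WRONLY);
                if (fd < 0) {
                        perror(path);
                        continue;
                }

                double t0 = now_sec();
                /* the write blocks until the kernel finishes reclaiming */
                if (write(fd, "1", 1) != 1)
                        perror("force_empty");
                printf("%s: %.3fs\n", grps[i], now_sec() - t0);
                close(fd);
        }
        return 0;
}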
>
> > > > > But then I noticed that force_empty will try to write dirty pages, which
> > > > > is not what we expect, because this behavior may be dangerous in a
> > > > > production environment.
> > > >
> > > > I do not understand your claim here. Direct reclaim doesn't write dirty
> > > > page cache pages directly.
> > >
> > > It will write dirty pages once the sc->priority drops to a very low number.
> > > if (sc->priority < DEF_PRIORITY - 2)
> > >     sc->may_writepage = 1;
> >
> > OK, I see what you mean now. Please have a look above that check:
> >                         /*
> >                          * Only kswapd can writeback filesystem pages
> >                          * to avoid risk of stack overflow. But avoid
> >                          * injecting inefficient single-page IO into
> >                          * flusher writeback as much as possible: only
> >                          * write pages when we've encountered many
> >                          * dirty pages, and when we've already scanned
> >                          * the rest of the LRU for clean pages and see
> >                          * the same dirty pages again (PageReclaim).
> >                          */
> >
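
[ For anyone following along, the check being discussed lives in
shrink_page_list(); the following is a paraphrased sketch of the ~v5.8
mm/vmscan.c logic (not verbatim source):

        if (PageDirty(page)) {
                /*
                 * Direct reclaim (which force_empty uses) is not
                 * kswapd, so this branch is taken for dirty file
                 * pages: they are marked PageReclaim and rotated
                 * for the flusher threads instead of being written
                 * back from here.
                 */
                if (page_is_file_lru(page) &&
                    (!current_is_kswapd() || !PageReclaim(page) ||
                     !test_bit(PGDAT_DIRTY, &pgdat->flags))) {
                        inc_node_page_state(page, NR_VMSCAN_IMMEDIATE);
                        SetPageReclaim(page);
                        goto activate_locked;
                }
        }

So even when may_writepage becomes 1, direct reclaim defers dirty file
pages to the flusher threads rather than issuing the IO itself. ]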
> > > >  And it is even less clear why that would be
> > > > dangerous if it did.
> > > >
> > >
> > > It will generate a lot of IO, which may block other workloads.
> > >
> > > > > What do you think about introducing a per-memcg drop_caches?
> > > >
> > > > I do not like the global drop_caches, and per-memcg is not very much
> > > > different. This all shouldn't really be necessary because we do have
> > > > means to reclaim memory in a memcg.
> > > > --
> > >
> > > We once hit an issue where many negative dentries had accumulated in some memcgs.
> >
> > Yes, negative dentries can build up, but memory reclaim should be
> > pretty effective at reclaiming them.
> >
> > > These negative dentries were introduced by a specific workload in
> > > these memcgs, and we want to drop them as soon as possible.
> > > But unfortunately there is no good way to drop them except
> > > force_empty or the global drop_caches.
> >
> > You can use memcg limits (e.g. memory high) to pro-actively reclaim
> > excess memory. Have you tried that?
> >
> > > force_empty will also drop the pagecache pages, which is not what we
> > > want.
> >
> > force_empty is intended to reclaim _all_ pages.
> >
> > > The global drop_caches doesn't work either because it will drop slabs in
> > > other memcgs.
> > > That is why I want to introduce a per-memcg drop_caches.
> >
> > Problems with negative dentries have already been discussed in the past.
> > I believe there has been no conclusion so far. Please try to dig into
> > the archives.
>
> I have read Waiman's proposal, but it seems there isn't a conclusion yet.
> If the kernel can't fix this issue perfectly, then giving the user a
> way to work around it would be a possible solution; drop_caches is
> that kind of workaround.
>
> [ adding Waiman to CC ]
>
>
> --

Forgot to reply to your suggestion about using the memcg limit. Adding my reply below:

> You can use memcg limits (e.g. memory high) to pro-actively reclaim
> excess memory. Have you tried that?

The memcg limit not only reclaims the slabs, but also reclaims the pagecache.
Furthermore, there is no per-memcg vm.vfs_cache_pressure either.
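
For completeness, the suggestion would look roughly like the sketch
below. This assumes cgroup v2, where memory.high exists; the group name
"foo" and the 1G value are illustrative. Writing a lower memory.high
makes the kernel reclaim the group down toward the new value, after
which the limit can be reset to "max":

#include <stdio.h>

/* write a string to a cgroup control file */
static int write_str(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (!f)
                return -1;
        if (fputs(val, f) < 0) {
                fclose(f);
                return -1;
        }
        return fclose(f);
}

int main(void)
{
        const char *high = "/sys/fs/cgroup/foo/memory.high";  /* illustrative */

        /* push the group down to 1G, reclaiming the excess */
        if (write_str(high, "1073741824"))
                perror("set memory.high");
        /* then lift the limit again */
        if (write_str(high, "max"))
                perror("reset memory.high");
        return 0;
}

(A real tool would read the group's current usage first and pick a
target; this only shows the shape of the operation.)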

-- 
Thanks
Yafang


Thread overview: 11+ messages
2020-07-28  7:40 Yafang Shao
2020-07-30 11:26 ` Michal Hocko
2020-07-31  1:50   ` Yafang Shao
2020-08-03 10:12     ` Michal Hocko
2020-08-03 13:20       ` Yafang Shao
2020-08-03 13:56         ` Michal Hocko
2020-08-03 14:18           ` Yafang Shao
2020-08-03 14:26             ` Yafang Shao [this message]
2020-08-03 14:37               ` Michal Hocko
2020-08-03 14:34             ` Michal Hocko
2020-08-03 15:26             ` Waiman Long
