linux-mm.kvack.org archive mirror
From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
To: mhocko@kernel.org
Cc: guro@fb.com, rientjes@google.com, hannes@cmpxchg.org,
	vdavydov.dev@gmail.com, tj@kernel.org, linux-mm@kvack.org,
	akpm@linux-foundation.org, torvalds@linux-foundation.org
Subject: Re: [PATCH] mm,oom: Don't call schedule_timeout_killable() with oom_lock held.
Date: Fri, 25 May 2018 19:57:32 +0900
Message-ID: <201805251957.EJJ09809.LFJHFFVOOSQOtM@I-love.SAKURA.ne.jp>
In-Reply-To: <20180525083118.GI11881@dhcp22.suse.cz>

Michal Hocko wrote:
> On Fri 25-05-18 10:17:42, Tetsuo Handa wrote:
> > Then, please show me (by writing a patch yourself) how to tell whether
> > we should sleep there. What I can come up with is shown below.
> > 
> > -@@ -4241,6 +4240,12 @@ bool gfp_pfmemalloc_allowed(gfp_t gfp_mask)
> > -       /* Retry as long as the OOM killer is making progress */
> > -       if (did_some_progress) {
> > -               no_progress_loops = 0;
> > -+              /*
> > -+               * This schedule_timeout_*() serves as a guaranteed sleep for
> > -+               * PF_WQ_WORKER threads when __zone_watermark_ok() == false.
> > -+               */
> > -+              if (!tsk_is_oom_victim(current))
> > -+                      schedule_timeout_uninterruptible(1);
> > -               goto retry;
> > -       }
> > +@@ -3927,6 +3926,14 @@ bool gfp_pfmemalloc_allowed(gfp_t gfp_mask)
> > +               (*no_progress_loops)++;
> > 
> > +       /*
> > ++       * We do a short sleep here if the OOM killer/reaper/victims are
> > ++       * holding oom_lock, in order to try to give them some CPU resources
> > ++       * for releasing memory.
> > ++       */
> > ++      if (mutex_is_locked(&oom_lock) && !tsk_is_oom_victim(current))
> > ++              schedule_timeout_uninterruptible(1);
> > ++
> > ++      /*
> > +        * Make sure we converge to OOM if we cannot make any progress
> > +        * several times in the row.
> > +        */
> > 
> > As far as I know, whether the domain which the current thread belongs
> > to is already OOM is not known as of should_reclaim_retry(). Therefore,
> > sleeping there can become a pointless delay if the domain which the
> > current thread belongs to differs from the domain which the owner of
> > oom_lock (which can be a random thread inside out_of_memory() or
> > exit_mmap()) belongs to.
> > 
> > But you insist on sleeping there; does that mean you don't care about
> > such a pointless delay?
> 
> What is wrong with the following? should_reclaim_retry should be a
> natural reschedule point. PF_WQ_WORKER is a special case which needs a
> stronger rescheduling policy. Doing that unconditionally seems more
> straightforward than depending on a zone being a good candidate for
> further reclaim.

Where is the schedule_timeout_uninterruptible(1) for !PF_KTHREAD threads?
My concern is that cond_resched() might be too short a sleep to give
enough CPU resources to the owner of the oom_lock.

/* include/linux/sched.h */
#ifndef CONFIG_PREEMPT
extern int _cond_resched(void);
#else
static inline int _cond_resched(void) { return 0; }
#endif

/* kernel/sched/core.c */
#ifndef CONFIG_PREEMPT
int __sched _cond_resched(void)
{
	if (should_resched(0)) {
		preempt_schedule_common();
		return 1;
	}
	rcu_all_qs();
	return 0;
}
EXPORT_SYMBOL(_cond_resched);
#endif

/* include/linux/sched.h */
#define cond_resched() ({			\
	___might_sleep(__FILE__, __LINE__, 0);	\
	_cond_resched();			\
})
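
That is, with CONFIG_PREEMPT=y _cond_resched() is an inline stub that
just returns 0, and without CONFIG_PREEMPT the caller yields only if
should_resched() reports a pending reschedule. As a minimal illustrative
sketch (backoff_guaranteed() and backoff_opportunistic() are hypothetical
names, not functions in the kernel tree), the two back-off styles being
compared behave like this:

/* Always blocks: the caller leaves the runqueue until the timeout
 * expires at the next timer tick (one jiffy), so even a lower-priority
 * or preempted oom_lock holder is guaranteed some CPU time in which to
 * release memory. */
static void backoff_guaranteed(void)
{
	schedule_timeout_uninterruptible(1);
}

/* May do nothing: a no-op with CONFIG_PREEMPT=y, and without it the
 * thread schedules only when should_resched() reports a pending
 * reschedule.  A CPU-bound allocating loop can pass through here
 * repeatedly without ever yielding the CPU to the oom_lock holder. */
static void backoff_opportunistic(void)
{
	cond_resched();
}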

How do you prove that cond_resched() is an appropriate replacement for
schedule_timeout_killable(1) in out_of_memory() and
schedule_timeout_uninterruptible(1) in __alloc_pages_may_oom()?
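
(For scale, assuming HZ=250: schedule_timeout_uninterruptible(1) takes
the caller off the CPU for up to one tick, i.e. up to 4 ms, whereas a
cond_resched() that finds no pending reschedule returns almost
immediately without sleeping at all.)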


Thread overview: 51+ messages
2018-05-12 14:18 Tetsuo Handa
2018-05-15  9:16 ` Michal Hocko
2018-05-18 10:14   ` Tetsuo Handa
2018-05-18 12:20     ` Michal Hocko
2018-05-20 15:56       ` Tetsuo Handa
2018-05-22  6:18         ` Michal Hocko
2018-05-23 10:24           ` Tetsuo Handa
2018-05-23 11:57             ` Michal Hocko
2018-05-23 13:45               ` Tetsuo Handa
2018-05-23 14:56                 ` Michal Hocko
2018-05-24 10:51                   ` Tetsuo Handa
2018-05-24 11:50                     ` Michal Hocko
2018-05-25  1:17                       ` Tetsuo Handa
2018-05-25  8:31                         ` Michal Hocko
2018-05-25 10:57                           ` Tetsuo Handa [this message]
2018-05-25 11:42                             ` Michal Hocko
2018-05-25 11:46                               ` Tetsuo Handa
2018-05-28 12:43                                 ` Michal Hocko
2018-05-28 20:57                                   ` Tetsuo Handa
2018-05-29  7:17                                     ` Michal Hocko
2018-05-29 23:07                                       ` Andrew Morton
2018-05-31 10:10                                         ` Tetsuo Handa
2018-05-31 10:44                                           ` Michal Hocko
2018-05-31 15:23                                             ` Tetsuo Handa
2018-05-31 18:47                                               ` Michal Hocko
2018-06-01  1:21                                                 ` Tetsuo Handa
2018-06-01  8:04                                                   ` Michal Hocko
2018-06-01 15:28                                         ` Michal Hocko
2018-06-01 21:11                                           ` Andrew Morton
2018-06-04  7:04                                             ` Michal Hocko
2018-06-04 10:41                                               ` Tetsuo Handa
2018-06-04 11:22                                                 ` Michal Hocko
2018-06-04 11:30                                                   ` Tetsuo Handa
2018-06-06  9:02                                                 ` David Rientjes
2018-06-06 13:37                                                   ` Tetsuo Handa
2018-06-06 18:44                                                     ` David Rientjes
2018-05-29  7:17             ` Michal Hocko
2018-05-29  8:16               ` Michal Hocko
2018-05-29 14:33                 ` Tetsuo Handa
2018-05-29 17:18                   ` Michal Hocko
2018-05-29 17:28                     ` Michal Hocko
2018-05-31 16:31                 ` [PATCH] mm, memcg, oom: fix pre-mature allocation failures kbuild test robot
  -- strict thread matches above, loose matches on Subject: below --
2018-03-22 10:51 [PATCH] mm,oom: Don't call schedule_timeout_killable() with oom_lock held Tetsuo Handa
2018-03-22 11:45 ` Michal Hocko
2018-03-22 13:16   ` Tetsuo Handa
2018-01-22 13:46 Tetsuo Handa
2018-01-23  8:38 ` Michal Hocko
2018-01-23 12:07   ` Tetsuo Handa
2018-01-23 12:42     ` Michal Hocko
2018-01-24 13:28       ` Tetsuo Handa
2018-02-13 11:58         ` Tetsuo Handa
