From: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
To: Michal Hocko <mhocko@kernel.org>
Cc: Shakeel Butt <shakeelb@google.com>, linux-mm@kvack.org
Subject: Re: [PATCH v3 3/3] oom: decouple mems_allowed from oom_unkillable_task
Date: Wed, 26 Jun 2019 20:46:02 +0900 [thread overview]
Message-ID: <3ec3304f-7d3f-cb08-5635-12c6b9c0905c@i-love.sakura.ne.jp> (raw)
In-Reply-To: <20190626104737.GQ17798@dhcp22.suse.cz>
On 2019/06/26 19:47, Michal Hocko wrote:
> On Wed 26-06-19 19:19:20, Tetsuo Handa wrote:
>> Is "mempolicy_nodemask_intersects(tsk) returning true when tsk already
>> passed mpol_put_task_policy(tsk) in do_exit()" what we want?
>>
>> If tsk is an already-exited thread group leader, is that thread group
>> needlessly selected by the OOM killer because
>> mempolicy_nodemask_intersects(tsk) returns true?
>
> I am sorry but I do not really see how this is related to this
> particular patch. Are you suggesting that has_intersects_mems_allowed is
> racy? More racy now?
I'm questioning the correctness of has_intersects_mems_allowed().

If mask != NULL, mempolicy_nodemask_intersects() is called on each thread in
the "start" thread group, and as soon as it returns true for some thread tsk,
has_intersects_mems_allowed(start) returns true and "start" is considered
an OOM victim candidate. But if one of the threads in that group has already
passed mpol_put_task_policy(tsk) in do_exit() (e.g. a dead thread group
leader), mempolicy_nodemask_intersects(tsk) returns true simply because
tsk->mempolicy == NULL.
I don't know the mempolicy internals well, but can a mempolicy be configured
on a per-thread basis? If the threads in the "start" thread group cannot have
different mempolicy->mode values, there is (mostly) no need to use
for_each_thread() in has_intersects_mems_allowed(). Instead, we could use
find_lock_task_mm(start)
(provided that an already-selected victim is exempted, e.g. via
tsk_is_oom_victim(), like

 	/* p may not have freeable memory in nodemask */
-	if (!is_memcg_oom(oc) && !has_intersects_mems_allowed(task, oc))
+	if (!tsk_is_oom_victim(task) && !is_memcg_oom(oc) && !is_sysrq_oom(oc) &&
+	    !has_intersects_mems_allowed(task, oc))
 		goto next;

) because thread groups for which find_lock_task_mm() returns NULL are never
selected as OOM victim candidates anyway...
2019-06-24 21:26 [PATCH v3 1/3] mm, oom: refactor dump_tasks for memcg OOMs Shakeel Butt
2019-06-24 21:26 ` [PATCH v3 2/3] mm, oom: remove redundant task_in_mem_cgroup() check Shakeel Butt
2019-06-26 6:38 ` Michal Hocko
2019-06-28 2:12 ` Shakeel Butt
2019-06-24 21:26 ` [PATCH v3 3/3] oom: decouple mems_allowed from oom_unkillable_task Shakeel Butt
2019-06-26 6:55 ` Michal Hocko
2019-06-26 10:19 ` Tetsuo Handa
2019-06-26 10:47 ` Michal Hocko
2019-06-26 11:46 ` Tetsuo Handa [this message]
2019-06-26 12:15 ` Michal Hocko
2019-06-28 2:17 ` Shakeel Butt
2019-06-26 9:12 ` Hillf Danton
2019-06-26 9:18 ` Michal Hocko
2019-06-26 19:47 ` Roman Gushchin
2019-06-26 14:04 Hillf Danton