From: Michal Hocko <mhocko@suse.com>
To: Jinjiang Tu <tujinjiang@huawei.com>
Cc: rientjes@google.com, shakeel.butt@linux.dev,
akpm@linux-foundation.org, david@redhat.com, ziy@nvidia.com,
matthew.brost@intel.com, joshua.hahnjy@gmail.com,
rakie.kim@sk.com, byungchul@sk.com, gourry@gourry.net,
ying.huang@linux.alibaba.com, apopple@nvidia.com,
linux-mm@kvack.org, wangkefeng.wang@huawei.com
Subject: Re: [PATCH] mm/oom_kill: kill current in OOM when binding to cpu-less nodes
Date: Fri, 5 Sep 2025 11:42:09 +0200
Message-ID: <aLqwcXHzsaTxN3dM@tiehlicka>
In-Reply-To: <69180098-9fcf-44c1-ac6b-dc049b56459e@huawei.com>
On Fri 05-09-25 17:25:44, Jinjiang Tu wrote:
>
> On 2025/9/5 17:10, Michal Hocko wrote:
> > On Fri 05-09-25 16:18:43, Jinjiang Tu wrote:
> > > On 2025/9/5 16:08, Michal Hocko wrote:
> > > > On Fri 05-09-25 09:56:03, Jinjiang Tu wrote:
> > > > > On 2025/9/4 22:25, Michal Hocko wrote:
> > > > > > On Thu 04-09-25 21:44:31, Jinjiang Tu wrote:
> > > > > > > out_of_memory() selects tasks without considering mempolicy. Assuming a
> > > > > > > cpu-less NUMA node, ordinary processes that don't set a mempolicy don't
> > > > > > > allocate memory from this cpu-less node unless the other NUMA nodes are
> > > > > > > below the low watermark. If a task binds to this cpu-less node and
> > > > > > > triggers an OOM, many tasks that don't occupy memory from this node may
> > > > > > > be killed wrongly.
> > > > > > I can see how a misconfigured task that binds _only_ to cpu-less nodes
> > > > > > should be killed but this is not what the patch does, right? Could you
> > > > > > tell us more about the specific situation?
> > > > > We have some cpu-less NUMA nodes whose memory is hotplugged in, and the zone
> > > > > is configured as ZONE_MOVABLE to guarantee that the used memory can be
> > > > > migrated when we want to offline the node.
> > > > >
> > > > > Generally tasks don't configure any mempolicy and use the default policy,
> > > > > i.e. allocate from the NUMA node the task is running on and fall back to
> > > > > other nodes when the local node is below the low watermark. As a result,
> > > > > these cpu-less nodes won't be allocated from until the nodes with CPUs are
> > > > > low on memory. However, these cpu-less nodes are configured as ZONE_MOVABLE
> > > > > and can't be used for kernel allocations, leading to OOM while a large
> > > > > amount of MOVABLE memory is still free.
> > > > Right, this is a fundamental constraint of movable zones. They cannot
> > > > satisfy non-movable allocations and you can get OOM for those requests
> > > > even if there is plenty of movable memory available. This is no
> > > > different from highmem systems and kernel allocations.
> > > >
> > > > > To avoid that, we bind some tasks to these cpu-less NUMA nodes to use their
> > > > > memory. When these tasks trigger an OOM, tasks that don't use the cpu-less
> > > > > nodes may be killed, since victims are selected by rss. Even worse, after
> > > > > one task is killed, the allocating task finds there is still no memory,
> > > > > triggers OOM again and kills another wrong task.
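> > > > >
> > > > > The victim is picked purely by size, roughly like this simplified view
> > > > > of oom_badness() (not the exact mm/oom_kill.c code):
> > > > >
> > > > > 	/* per-task score: the bigger the rss/swap/page-table footprint,
> > > > > 	 * the more likely to be killed; there is no per-node breakdown */
> > > > > 	unsigned long points = get_mm_rss(p->mm) +
> > > > > 			       get_mm_counter(p->mm, MM_SWAPENTS) +
> > > > > 			       mm_pgtables_bytes(p->mm) / PAGE_SIZE;
> > > > >
> > > > > 	/* a task with no pages on the movable node can still have the
> > > > > 	 * highest score and end up as the victim */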
> > > > Let's see whether I follow you here. So you are binding some tasks to movable
> > > > nodes only and if their allocation fails you want to kill that task
> > > > rather than invoking the mempolicy OOM killer as that could kill tasks
> > > > which are not constrained to movable nodes, right?
> > > Yes. It's difficult to kill tasks that use the movable nodes' memory, because
> > > we have no per-node rss information for each task. So killing the current task
> > > is the simplest way to avoid killing the wrong one.
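> > >
> > > The patch therefore boils down to a check like this (a simplified sketch
> > > of the idea, not the exact diff):
> > >
> > > 	/* sketch: is the allocating task bound to cpu-less nodes only? */
> > > 	static bool bound_to_cpuless_nodes(void)
> > > 	{
> > > 		struct mempolicy *pol = current->mempolicy;
> > > 		int nid;
> > >
> > > 		if (!pol || pol->mode != MPOL_BIND)
> > > 			return false;
> > >
> > > 		for_each_node_mask(nid, pol->nodes)
> > > 			if (node_state(nid, N_CPU))
> > > 				return false;	/* a node with CPUs is allowed */
> > >
> > > 		return true;	/* kill current instead of scanning all tasks */
> > > 	}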
> > There were attempts to make the oom killer cpuset aware. This would
> > allow constraining the oom killer to the cpuset for which we cannot
> > satisfy the allocation. I do not remember the details of why this never
> > reached a mergeable state. Have you considered something like that as an
> > option?
>
> Selecting only tasks that bind to one of these movable nodes seems better.
>
> Although the oom killer could only select according to the task mempolicy, not
> per-vma policies, it's better than blindly killing current.
Yes, I do not think we can ever support full mempolicy capabilities, but
recognizing that this is a cpuset allocation failure and selecting from the
cpuset's tasks makes a lot of sense.
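
Schematically, the eligibility check could reuse the existing
mempolicy_in_oom_domain() helper (an untested sketch, not a tested patch):

	/* sketch: only tasks that may allocate from the constrained
	 * (failing) nodemask are eligible oom victims */
	static bool oom_task_eligible(struct task_struct *tsk,
				      const nodemask_t *mask)
	{
		if (!mask)
			return true;	/* global oom: every task is eligible */

		return mempolicy_in_oom_domain(tsk, mask);
	}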
--
Michal Hocko
SUSE Labs