On 2025/9/5 17:42, Michal Hocko wrote:
> On Fri 05-09-25 17:25:44, Jinjiang Tu wrote:
>> On 2025/9/5 17:10, Michal Hocko wrote:
>>> On Fri 05-09-25 16:18:43, Jinjiang Tu wrote:
>>>> On 2025/9/5 16:08, Michal Hocko wrote:
>>>>> On Fri 05-09-25 09:56:03, Jinjiang Tu wrote:
>>>>>> On 2025/9/4 22:25, Michal Hocko wrote:
>>>>>>> On Thu 04-09-25 21:44:31, Jinjiang Tu wrote:
>>>>>>>> out_of_memory() selects tasks without considering mempolicy. Assume a
>>>>>>>> cpu-less NUMA node: ordinary processes that don't set a mempolicy won't
>>>>>>>> allocate memory from this cpu-less node unless the other NUMA nodes are
>>>>>>>> below the low watermark. If a task binds to this cpu-less node and
>>>>>>>> triggers OOM, many tasks that don't occupy memory from this node may be
>>>>>>>> killed wrongly.
>>>>>>> I can see how a misconfigured task that binds _only_ to memoryless nodes
>>>>>>> should be killed, but this is not what the patch does, right? Could you
>>>>>>> tell us more about the specific situation?
>>>>>> We have some cpu-less NUMA nodes whose memory is hotplugged in, and the
>>>>>> zone is configured as ZONE_MOVABLE to guarantee that the used memory can
>>>>>> be migrated when we want to offline the node.
>>>>>>
>>>>>> Generally, tasks don't configure any mempolicy and use the default
>>>>>> mempolicy, i.e. allocate from the NUMA node the task is running on, and
>>>>>> fall back to other NUMA nodes when the local node is below the low
>>>>>> watermark. As a result, these cpu-less nodes won't be allocated from
>>>>>> until the NUMA nodes with cpus are low on memory. However, since these
>>>>>> cpu-less nodes are configured as ZONE_MOVABLE, they can't be used for
>>>>>> kernel allocations, leading to OOM despite a large amount of free
>>>>>> MOVABLE memory.
>>>>> Right, this is a fundamental constraint of movable zones. They cannot
>>>>> satisfy non-movable allocations and you can get OOM for those requests
>>>>> even if there is plenty of movable memory available. This is no
>>>>> different from highmem systems and kernel allocations.
>>>>>
>>>>>> To avoid that, we bind some tasks to these cpu-less NUMA nodes so that
>>>>>> their memory gets used. When these tasks trigger OOM, tasks that don't
>>>>>> use memory from these cpu-less nodes may be killed based on rss. Even
>>>>>> worse, after one task is killed, the allocating task finds there is
>>>>>> still no memory, triggers OOM again and kills another wrong task.
>>>>> Let's see whether I follow you here. So you are binding some tasks to
>>>>> movable nodes only, and if their allocation fails you want to kill that
>>>>> task rather than invoking the mempolicy OOM killer, as that could kill
>>>>> tasks which are not constrained to the movable nodes, right?
>>>> Yes. It's difficult to kill tasks that use the movable nodes' memory,
>>>> because we have no per-node rss information for each task. So killing
>>>> the current task is the simplest way to avoid killing wrongly.
>>> There were attempts to make the oom killer cpuset aware. This would
>>> allow constraining the oom killer to the cpuset for which we cannot
>>> satisfy the allocation. I do not remember the details of why this never
>>> reached a mergeable state. Have you considered something like that as an
>>> option?
>> Only selecting tasks that bind to one of these movable nodes seems
>> better.
>>
>> Although the oom killer could only select based on the task mempolicy,
>> not the vma policy, it's still better than blindly killing current.
> Yes, I do not think we can ever support full mempolicy capabilities, but
> recognizing this is a cpuset allocation failure and selecting from the
> cpuset's tasks makes a lot of sense.
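(As an aside, the "binding" discussed throughout the thread is just an
MPOL_BIND task policy. A minimal userspace sketch of the setup, where the
node number and allocation size are made up for illustration; build with
gcc -lnuma:

/* Bind this task to an assumed cpu-less movable node and fault rss in there. */
#include <numaif.h>	/* set_mempolicy(), MPOL_BIND */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	unsigned long nodemask = 1UL << 2;	/* node 2: assumed movable-only */
	size_t sz = 1UL << 30;			/* 1 GiB, illustrative */
	char *p;

	/* From here on, this task's page faults must be satisfied from node 2;
	 * failing there is what enters the constrained OOM path. */
	if (set_mempolicy(MPOL_BIND, &nodemask, sizeof(nodemask) * 8)) {
		perror("set_mempolicy");
		return 1;
	}

	p = malloc(sz);
	if (!p)
		return 1;
	memset(p, 1, sz);	/* fault the pages in so rss lands on node 2 */
	pause();		/* hold the rss for observation */
	return 0;
}

The same binding can be done without code via numactl --membind=2 <cmd>.)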
In our use case, the movable nodes are in all cpusets, so that their memory
can be used by all tasks. Even if we moved tasks into cpusets that only
allow allocating from the movable nodes,
oom_cpuset_eligible()->cpuset_mems_allowed_intersects() would still return
true for every task. Maybe when oc->nodemask contains only movable
(cpu-less) nodes, we should select only tasks whose mempolicy intersects
with oc->nodemask. Something like the following:

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index eb83cff7db8c..e56b6de836a6 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2328,6 +2328,9 @@ bool mempolicy_in_oom_domain(struct task_struct *tsk,
 	if (!mask)
 		return ret;
 
+	if (!nodes_intersects(*mask, node_states[N_CPU]))
+		ret = false;
+
 	task_lock(tsk);
 	mempolicy = tsk->mempolicy;
 	if (mempolicy && mempolicy->mode == MPOL_BIND)
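For context, the caller in mm/oom_kill.c looks roughly like this (paraphrased
from current mainline), which is why the hunk above only takes effect on the
mempolicy-constrained path where oc->nodemask is non-NULL:

static bool oom_cpuset_eligible(struct task_struct *start,
				struct oom_control *oc)
{
	struct task_struct *tsk;
	bool ret = false;
	const nodemask_t *mask = oc->nodemask;

	rcu_read_lock();
	for_each_thread(start, tsk) {
		if (mask) {
			/* mempolicy-constrained oom: the helper patched above */
			ret = mempolicy_in_oom_domain(tsk, mask);
		} else {
			/* cpuset-constrained oom: intersects for every task
			 * when the movable nodes sit in every cpuset */
			ret = cpuset_mems_allowed_intersects(current, tsk);
		}
		if (ret)
			break;
	}
	rcu_read_unlock();

	return ret;
}

On the pure cpuset path (mask == NULL) the new check is never reached, which
matches the observation above that cpuset_mems_allowed_intersects() passes
for every task in our setup.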