I believe the patch should be accepted. While the race condition might be
rare in simple workloads, it can become significant in containerized systems
or on Android devices. The proposed solution is simple, low-risk, and
directly addresses the identified problem with minimal code changes.

On August 15, 2025 3:09:14 AM GMT+04:00, Andrew Morton wrote:
>On Thu, 14 Aug 2025 21:55:55 +0800 wrote:
>
>> When a process is OOM killed, if the OOM reaper and the thread running
>> exit_mmap() execute at the same time, both will traverse the vma maple
>> tree along the same path. They can easily end up unmapping the same vma
>> and competing for the pte spinlock. This adds unnecessary load and
>> increases the execution time of both the OOM reaper and the thread
>> running exit_mmap().
>
>Please tell me what I'm missing here.
>
>OOM kills are a rare event. And this race sounds like it will rarely
>occur even if an oom-killing is happening. And the delay will be
>relatively short.
>
>If I'm correct then we're addressing rare*rare*small, so why bother?
>
>> When a process exits, exit_mmap() traverses the vma maple tree from low
>> to high address. To reduce the chance of both walkers unmapping the
>> same vma at the same time, the OOM reaper should traverse the tree from
>> high to low address. This reduces lock contention when they would
>> otherwise unmap the same vma.
>
>Sharing some before-and-after runtime measurements would be useful. Or
>at least, detailed anecdotes.
>
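
To make the traversal-direction argument concrete, here is a toy userspace
sketch. It is not the kernel code: the entry array, the per-entry spinlocks,
the usleep() delay and the walker functions are all invented for
illustration, standing in for per-vma unmap work under the pte lock. The
point it demonstrates is only the contention pattern: when both walkers go
low-to-high, the trailing thread blocks on nearly every lock the leading
thread takes; when one walks high-to-low, they overlap on roughly one entry
near the point where they meet.

/*
 * Toy illustration (userspace, not kernel code) of two walkers taking
 * per-entry spinlocks over a shared range.
 *
 * Build: gcc -O2 -pthread contention-sketch.c -o contention-sketch
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NR_ENTRIES 1024

struct entry {
	pthread_spinlock_t lock;	/* stands in for the pte spinlock */
	int done;			/* already "unmapped"? */
};

static struct entry entries[NR_ENTRIES];

/* Forward walk: analogous to exit_mmap() going from low to high address. */
static void *walk_forward(void *arg)
{
	(void)arg;
	for (int i = 0; i < NR_ENTRIES; i++) {
		pthread_spin_lock(&entries[i].lock);
		if (!entries[i].done) {
			usleep(10);		/* simulate unmap work */
			entries[i].done = 1;
		}
		pthread_spin_unlock(&entries[i].lock);
	}
	return NULL;
}

/* Reverse walk: analogous to the reaper going from high to low address. */
static void *walk_reverse(void *arg)
{
	(void)arg;
	for (int i = NR_ENTRIES - 1; i >= 0; i--) {
		pthread_spin_lock(&entries[i].lock);
		if (!entries[i].done) {
			usleep(10);
			entries[i].done = 1;
		}
		pthread_spin_unlock(&entries[i].lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	for (int i = 0; i < NR_ENTRIES; i++)
		pthread_spin_init(&entries[i].lock, PTHREAD_PROCESS_PRIVATE);

	pthread_create(&a, NULL, walk_forward, NULL);
	/* Swap walk_reverse for walk_forward here to see the same-direction case. */
	pthread_create(&b, NULL, walk_reverse, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

With both threads running walk_forward, the second thread trails the first
and waits on almost every lock, so the two finish in roughly the time one
thread would need for the whole range. With opposite directions, each thread
does real work on about half the entries and they contend on about one, which
is the effect the patch is aiming for in the OOM reaper.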