Date: Tue, 7 Dec 2021 15:47:59 -0800
From: Andrew Morton
To: Joel Savitz
Cc: linux-kernel@vger.kernel.org, Waiman Long, linux-mm@kvack.org, Nico Pache, Peter Zijlstra, Michal Hocko
Subject: Re: [PATCH] mm/oom_kill: wake futex waiters before annihilating victim shared mutex
Message-Id: <20211207154759.3f3fe272349c77e0c4aca36f@linux-foundation.org>
In-Reply-To: <20211207214902.772614-1-jsavitz@redhat.com>
References: <20211207214902.772614-1-jsavitz@redhat.com>

(cc's added)

On Tue, 7 Dec 2021 16:49:02 -0500 Joel Savitz wrote:

> In the case that two or more processes share a futex located within
> a shared mmaped region, such as a process that shares a lock between
> itself and a number of child processes, we have observed that when
> a process holding the lock is oom killed, at least one waiter is never
> alerted to this new development and simply continues to wait.

Well dang.  Is there any way of killing off that waiting process, or do
we have a resource leak here?

> This is visible via pthreads by checking the __owner field of the
> pthread_mutex_t structure within a waiting process, perhaps with gdb.
> 
> We identify reproduction of this issue by checking a waiting process of
> a test program and viewing the contents of the pthread_mutex_t, taking note
> of the value in the owner field, and then checking dmesg to see if the
> owner has already been killed.
> 
> This issue can be tricky to reproduce, but with the modifications of
> this small patch, I have found it to be impossible to reproduce. There
> may be additional considerations that I have not taken into account in
> this patch and I welcome any comments and criticism.
> 
> Co-developed-by: Nico Pache
> Signed-off-by: Nico Pache
> Signed-off-by: Joel Savitz
> ---
>  mm/oom_kill.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 1ddabefcfb5a..fa58bd10a0df 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -44,6 +44,7 @@
>  #include
>  #include
>  #include
> +#include
> 
>  #include
>  #include "internal.h"
> @@ -890,6 +891,7 @@ static void __oom_kill_process(struct task_struct *victim, const char *message)
>  	 * in order to prevent the OOM victim from depleting the memory
>  	 * reserves from the user space under its control.
>  	 */
> +	futex_exit_release(victim);
>  	do_send_sig_info(SIGKILL, SEND_SIG_PRIV, victim, PIDTYPE_TGID);
>  	mark_oom_victim(victim);
>  	pr_err("%s: Killed process %d (%s) total-vm:%lukB, anon-rss:%lukB, file-rss:%lukB, shmem-rss:%lukB, UID:%u pgtables:%lukB oom_score_adj:%hd\n",
> @@ -930,6 +932,7 @@ static void __oom_kill_process(struct task_struct *victim, const char *message)
>  		 */
>  		if (unlikely(p->flags & PF_KTHREAD))
>  			continue;
> +		futex_exit_release(p);
>  		do_send_sig_info(SIGKILL, SEND_SIG_PRIV, p, PIDTYPE_TGID);
>  	}
>  	rcu_read_unlock();
> -- 
> 2.33.1