Subject: Re: [PATCH] binder: Don't use mmput() from shrinker function.
To: Michal Hocko
Cc: Greg Kroah-Hartman, Arve Hjonnevag, Todd Kjos, Martijn Coenen, Joel Fernandes,
 Christian Brauner, syzbot, acme@kernel.org, alexander.shishkin@linux.intel.com,
 jolsa@redhat.com, linux-kernel@vger.kernel.org, mark.rutland@arm.com,
 mingo@redhat.com, namhyung@kernel.org, peterz@infradead.org,
 syzkaller-bugs@googlegroups.com, "open list:ANDROID DRIVERS", linux-mm
References: <0000000000001fbbb605aa805c9b@google.com>
 <5ce3ee90-333e-638d-ac8c-cd6d7ab7aa3b@I-love.SAKURA.ne.jp>
 <20200716083506.GA20915@dhcp22.suse.cz>
From: Tetsuo Handa
Message-ID: <36db7016-98d6-2c6b-110b-b2481fd480ac@i-love.sakura.ne.jp>
Date: Thu, 16 Jul 2020 22:41:14 +0900
In-Reply-To: <20200716083506.GA20915@dhcp22.suse.cz>

On 2020/07/16 17:35, Michal Hocko wrote:
> On Thu 16-07-20 08:36:52, Tetsuo Handa wrote:
>> syzbot is reporting that mmput() from shrinker function has a risk of
>> deadlock [1]. Don't start synchronous teardown of mm when called from
>> shrinker function.
>
> Please add the actual lock dependency to the changelog.
>
> Anyway is this deadlock real? Maybe I have missed some details but the
> call graph points to these two paths.
>
> uprobe_mmap                                  do_shrink_slab
>   uprobes_mmap_hash #lock
>     install_breakpoint                         binder_shrink_scan
>       set_swbp                                   binder_alloc_free_page
>         uprobe_write_opcode                        __mmput
>           update_ref_ctr                             uprobe_clear_state
>             mutex_lock(&delayed_uprobe_lock)           mutex_lock(&delayed_uprobe_lock);
>             allocation -> reclaim
>

static int update_ref_ctr(struct uprobe *uprobe, struct mm_struct *mm, short d) {
  mutex_lock(&delayed_uprobe_lock);
  ret = delayed_uprobe_add(uprobe, mm1) {
    du = kzalloc(sizeof(*du), GFP_KERNEL) {
      do_shrink_slab() {
        binder_shrink_scan() {
          binder_alloc_free_page() {
            mmget_not_zero(mm2);
            mmput(mm2) {
              __mmput(mm2) {
                uprobe_clear_state(mm2) {
                  mutex_lock(&delayed_uprobe_lock);
                  delayed_uprobe_remove(NULL, mm2);
                  mutex_unlock(&delayed_uprobe_lock);
                }
              }
            }
          }
        }
      }
    }
  }
  mutex_unlock(&delayed_uprobe_lock);
}

> But in order for this to happen the shrinker would have to do the last
> put on the mm. But mm cannot go away from under uprobe_mmap so those two
> paths cannot race with each other.

and mm1 != mm2 is possible, isn't it?

>
> Unless I am missing something this is a false positive. I do not mind
> using mmput_async from the shrinker as a workaround but the changelog
> should be explicit about the fact.
>

binder_alloc_free_page() is already using mmput_async() 14 lines later. It took
18 months just to hit this race for the third time, because it is quite
difficult for the owner of mm2 to call mmput(mm2) in the window between
binder_alloc_free_page()'s mmget_not_zero(mm2) and its mmput(mm2).

The reason I added you is to see whether we can do

 void mmput(struct mm_struct *mm)
 {
 	might_sleep();

+	/* Calling mmput() from shrinker context can deadlock. */
+	WARN_ON(current->flags & PF_MEMALLOC);
 	if (atomic_dec_and_test(&mm->mm_users))
 		__mmput(mm);
 }

in order to catch this bug more easily.
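
For readers not familiar with the binder shrinker, here is a minimal sketch of
the pattern under discussion. It is illustrative only: the names
example_lru_page and example_shrink_one_page are made up and this is not the
actual drivers/android/binder_alloc.c code. What it shows is why a shrinker
callback that pins a foreign mm with mmget_not_zero() wants to drop that pin
with mmput_async() rather than mmput():

#include <linux/mm.h>
#include <linux/sched/mm.h>

/* Hypothetical per-page bookkeeping kept by a driver's page LRU. */
struct example_lru_page {
	struct vm_area_struct *vma;	/* userspace mapping backing the page */
};

/* Called from a shrinker's scan callback, i.e. from reclaim context. */
static void example_shrink_one_page(struct example_lru_page *p)
{
	struct mm_struct *mm = p->vma ? p->vma->vm_mm : NULL;

	if (mm && mmget_not_zero(mm)) {
		/* ... zap the PTE and free the page while holding the mm pin ... */

		/*
		 * If this pin happens to be the last reference, mmput() would
		 * run __mmput() -> uprobe_clear_state() ->
		 * mutex_lock(&delayed_uprobe_lock) right here, inside reclaim.
		 * mmput_async() defers __mmput() to a workqueue instead, so
		 * that lock is never taken from reclaim context.
		 */
		mmput_async(mm);
	}
}

With that pattern in place, the WARN_ON(current->flags & PF_MEMALLOC) proposed
above would make any remaining synchronous mmput() from reclaim context stand
out the first time it runs, instead of waiting for the race to reproduce.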