From: Jann Horn
To: Andrew Morton, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, "Eric W. Biederman", Michel Lespinasse, Mauro Carvalho Chehab, Sakari Ailus, Jeff Dike, Richard Weinberger, Anton Ivanov, linux-um@lists.infradead.org, Jason Gunthorpe, John Hubbard, Johannes Berg
Subject: [PATCH v3 1/2] mmap locking API: Order lock of nascent mm outside lock of live mm
Date: Thu, 15 Oct 2020 02:00:40 +0200
Message-Id: <20201015000041.1734214-2-jannh@google.com>
In-Reply-To: <20201015000041.1734214-1-jannh@google.com>
References: <20201015000041.1734214-1-jannh@google.com>

Until now, the mmap lock of the nascent mm was ordered inside the mmap lock
of the old mm (in dup_mmap() and in UML's activate_mm()).

A following patch will change the exec path to very broadly lock the
nascent mm, but fine-grained locking should still work at the same time for
the old mm. In particular, mmap locking calls are hidden behind the
copy_from_user() calls and such that are reached through functions like
copy_strings() - when a page fault occurs on a userspace memory access,
the mmap lock will be taken.

To do this in a way that lockdep is happy about, let's turn around the lock
ordering in both places that currently nest the locks. Since
SINGLE_DEPTH_NESTING is normally used for the inner nesting layer, make up
our own lock subclass MMAP_LOCK_SUBCLASS_NASCENT and use that instead.

The added locking calls in exec_mmap() are temporary; the following patch
will move the locking out of exec_mmap().

As Johannes Berg pointed out[1][2], moving the mmap locking of arch/um/'s
activate_mm() up into the execve code also fixes an issue that would've
caused a scheduling-in-atomic bug due to mmap_write_lock_nested() while
holding a spinlock if UM had support for voluntary preemption. (Even when
a semaphore is uncontended, locking it can still trigger rescheduling via
might_sleep().)

[1] https://lore.kernel.org/linux-mm/115d17aa221b73a479e26ffee52899ddb18b1f53.camel@sipsolutions.net/
[2] https://lore.kernel.org/linux-mm/7b7d6954b74e109e653539d880173fa9cb5c5ddf.camel@sipsolutions.net/

Signed-off-by: Jann Horn
---
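A minimal sketch of the resulting nesting, for illustration only (not part
of the patch: old_mm/new_mm are made-up locals, the helpers are the ones
touched in the diff below, and in practice the inner lock is usually taken
implicitly by the page fault handler rather than by hand):

    mmap_write_lock_nascent(new_mm);  /* MMAP_LOCK_SUBCLASS_NASCENT, outer */
    mmap_read_lock(old_mm);           /* MMAP_LOCK_SUBCLASS_NORMAL (0), inner */
    /* ... copy state out of old_mm into the not-yet-visible new_mm ... */
    mmap_read_unlock(old_mm);
    mmap_write_unlock(new_mm);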
 arch/um/include/asm/mmu_context.h |  3 +--
 fs/exec.c                         |  4 ++++
 include/linux/mmap_lock.h         | 23 +++++++++++++++++++++--
 kernel/fork.c                     |  7 ++-----
 4 files changed, 28 insertions(+), 9 deletions(-)

diff --git a/arch/um/include/asm/mmu_context.h b/arch/um/include/asm/mmu_context.h
index 17ddd4edf875..c13bc5150607 100644
--- a/arch/um/include/asm/mmu_context.h
+++ b/arch/um/include/asm/mmu_context.h
@@ -48,9 +48,8 @@ static inline void activate_mm(struct mm_struct *old, struct mm_struct *new)
 	 * when the new ->mm is used for the first time.
 	 */
 	__switch_mm(&new->context.id);
-	mmap_write_lock_nested(new, SINGLE_DEPTH_NESTING);
+	mmap_assert_write_locked(new);
 	uml_setup_stubs(new);
-	mmap_write_unlock(new);
 }
 
 static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
diff --git a/fs/exec.c b/fs/exec.c
index a91003e28eaa..229dbc7aa61a 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1114,6 +1114,8 @@ static int exec_mmap(struct mm_struct *mm)
 	if (ret)
 		return ret;
 
+	mmap_write_lock_nascent(mm);
+
 	if (old_mm) {
 		/*
 		 * Make sure that if there is a core dump in progress
@@ -1125,6 +1127,7 @@ static int exec_mmap(struct mm_struct *mm)
 		if (unlikely(old_mm->core_state)) {
 			mmap_read_unlock(old_mm);
 			mutex_unlock(&tsk->signal->exec_update_mutex);
+			mmap_write_unlock(mm);
 			return -EINTR;
 		}
 	}
@@ -1138,6 +1141,7 @@ static int exec_mmap(struct mm_struct *mm)
 	tsk->mm->vmacache_seqnum = 0;
 	vmacache_flush(tsk);
 	task_unlock(tsk);
+	mmap_write_unlock(mm);
 	if (old_mm) {
 		mmap_read_unlock(old_mm);
 		BUG_ON(active_mm != old_mm);
diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 0707671851a8..24de1fe99ee4 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -3,6 +3,18 @@
 
 #include
 
+/*
+ * Lock subclasses for the mmap_lock.
+ *
+ * MMAP_LOCK_SUBCLASS_NASCENT is for core kernel code that wants to lock an mm
+ * that is still being constructed and wants to be able to access the active mm
+ * normally at the same time. It nests outside MMAP_LOCK_SUBCLASS_NORMAL.
+ */
+enum {
+	MMAP_LOCK_SUBCLASS_NORMAL = 0,
+	MMAP_LOCK_SUBCLASS_NASCENT
+};
+
 #define MMAP_LOCK_INITIALIZER(name) \
 	.mmap_lock = __RWSEM_INITIALIZER((name).mmap_lock),
 
@@ -16,9 +28,16 @@ static inline void mmap_write_lock(struct mm_struct *mm)
 	down_write(&mm->mmap_lock);
 }
 
-static inline void mmap_write_lock_nested(struct mm_struct *mm, int subclass)
+/*
+ * Lock an mm_struct that is still being set up (during fork or exec).
+ * This nests outside the mmap locks of live mm_struct instances.
+ * No interruptible/killable versions exist because at the points where you're
+ * supposed to use this helper, the mm isn't visible to anything else, so we
+ * expect the mmap_lock to be uncontended.
+ */
+static inline void mmap_write_lock_nascent(struct mm_struct *mm)
 {
-	down_write_nested(&mm->mmap_lock, subclass);
+	down_write_nested(&mm->mmap_lock, MMAP_LOCK_SUBCLASS_NASCENT);
 }
 
 static inline int mmap_write_lock_killable(struct mm_struct *mm)
diff --git a/kernel/fork.c b/kernel/fork.c
index da8d360fb032..db67eb4ac7bd 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -474,6 +474,7 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 	unsigned long charge;
 	LIST_HEAD(uf);
 
+	mmap_write_lock_nascent(mm);
 	uprobe_start_dup_mmap();
 	if (mmap_write_lock_killable(oldmm)) {
 		retval = -EINTR;
@@ -481,10 +482,6 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 	}
 	flush_cache_dup_mm(oldmm);
 	uprobe_dup_mmap(oldmm, mm);
-	/*
-	 * Not linked in yet - no deadlock potential:
-	 */
-	mmap_write_lock_nested(mm, SINGLE_DEPTH_NESTING);
 
 	/* No ordering required: file already has been exposed. */
 	RCU_INIT_POINTER(mm->exe_file, get_mm_exe_file(oldmm));
@@ -600,12 +597,12 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 	/* a new mm has just been created */
 	retval = arch_dup_mmap(oldmm, mm);
 out:
-	mmap_write_unlock(mm);
 	flush_tlb_mm(oldmm);
 	mmap_write_unlock(oldmm);
 	dup_userfaultfd_complete(&uf);
 fail_uprobe_end:
 	uprobe_end_dup_mmap();
+	mmap_write_unlock(mm);
 	return retval;
 fail_nomem_anon_vma_fork:
 	mpol_put(vma_policy(tmp));
-- 
2.28.0.1011.ga647a8990f-goog