From: Suren Baghdasaryan <surenb@google.com>
Date: Wed, 18 Jan 2023 09:36:44 -0800
Subject: Re: [PATCH 12/41] mm: add per-VMA lock and helper functions to control it
To: Michal Hocko
Cc: Jann Horn, peterz@infradead.org, Ingo Molnar, Will Deacon,
 akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com,
 vbabka@suse.cz, hannes@cmpxchg.org, mgorman@techsingularity.net,
 dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com,
 ldufour@linux.ibm.com, laurent.dufour@fr.ibm.com, paulmck@kernel.org,
 luto@kernel.org, songliubraving@fb.com, peterx@redhat.com,
 david@redhat.com, dhowells@redhat.com, hughd@google.com,
 bigeasy@linutronix.de, kent.overstreet@linux.dev,
 punit.agrawal@bytedance.com, lstoakes@gmail.com, peterjung1337@gmail.com,
 rientjes@google.com, axelrasmussen@google.com, joelaf@google.com,
 minchan@google.com, shakeelb@google.com, tatashin@google.com,
 edumazet@google.com, gthelen@google.com, gurua@google.com,
 arjunroy@google.com, soheil@google.com, hughlynch@google.com,
 leewalsh@google.com, posk@google.com, linux-mm@kvack.org,
 linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
 x86@kernel.org, linux-kernel@vger.kernel.org, kernel-team@android.com
References: <20230109205336.3665937-1-surenb@google.com> <20230109205336.3665937-13-surenb@google.com>
On Wed, Jan 18, 2023 at 7:11 AM 'Michal Hocko' via kernel-team wrote:
>
> On Wed 18-01-23 14:23:32, Jann Horn wrote:
> > On Wed, Jan 18, 2023 at 1:28 PM Michal Hocko wrote:
> > > On Tue 17-01-23 19:02:55, Jann Horn wrote:
> > > > +locking maintainers
> > > >
> > > > On Mon, Jan 9, 2023 at 9:54 PM Suren Baghdasaryan wrote:
> > > > > Introduce a per-VMA rw_semaphore to be used during page fault handling
> > > > > instead of mmap_lock. Because there are cases when multiple VMAs need
> > > > > to be exclusively locked during VMA tree modifications, instead of the
> > > > > usual lock/unlock pattern we mark a VMA as locked by taking per-VMA lock
> > > > > exclusively and setting vma->lock_seq to the current mm->lock_seq. When
> > > > > mmap_write_lock holder is done with all modifications and drops mmap_lock,
> > > > > it will increment mm->lock_seq, effectively unlocking all VMAs marked as
> > > > > locked.
> > > > [...]
> > > > > +static inline void vma_read_unlock(struct vm_area_struct *vma)
> > > > > +{
> > > > > +       up_read(&vma->lock);
> > > > > +}
> > > >
> > > > One thing that might be gnarly here is that I think you might not be
> > > > allowed to use up_read() to fully release ownership of an object -
> > > > from what I remember, I think that up_read() (unlike something like
> > > > spin_unlock()) can access the lock object after it's already been
> > > > acquired by someone else.
> > >
> > > Yes, I think you are right. From a look into the code it seems that
> > > the UAF is quite unlikely as there is a ton of work to be done between
> > > vma_write_lock used to prepare vma for removal and actual removal.
> > > That doesn't make it less of a problem though.
> > >
> > > > So if you want to protect against concurrent
> > > > deletion, this might have to be something like:
> > > >
> > > > rcu_read_lock(); /* keeps vma alive */
> > > > up_read(&vma->lock);
> > > > rcu_read_unlock();
> > > >
> > > > But I'm not entirely sure about that, the locking folks might know better.
> > >
> > > I am not a locking expert but to me it looks like this should work
> > > because the final cleanup would have to happen after rcu_read_unlock.
> > >
> > > Thanks, I have completely missed this aspect of the locking when looking
> > > into the code.
> > >
> > > Btw. looking at this again I have fully realized how hard it is actually
> > > to see that vm_area_free is guaranteed to sync up with ongoing readers.
> > > vma manipulation functions like __vma_adjust make my head spin. Would it
> > > make more sense to have an rcu style synchronization point in
> > > vm_area_free directly before call_rcu? This would add an overhead of
> > > uncontended down_write of course.
> >
> > Something along those lines might be a good idea, but I think that
> > rather than synchronizing the removal, it should maybe be something
> > that splats (and bails out?) if it detects pending readers. If we get
> > to vm_area_free() on a VMA that has pending readers, we might already
> > be in a lot of trouble because the concurrent readers might have been
> > traversing page tables while we were tearing them down or fun stuff
> > like that.
> >
> > I think maybe Suren was already talking about something like that in
> > another part of this patch series but I don't remember...
>
> This http://lkml.kernel.org/r/20230109205336.3665937-27-surenb@google.com?
Yes, I spent a lot of time ensuring that __vma_adjust locks the right
VMAs and that VMAs are freed or isolated under VMA write lock
protection to exclude any readers. If the VM_BUG_ON_VMA in the patch
Michal mentioned gets hit, then it's a bug in my design and I'll have
to fix it.

But please, let's not add synchronize_rcu() in vm_area_free(). That
would slow down every path that frees a VMA, especially the exit path,
which might be freeing thousands of them. I had an SPF version with
synchronize_rcu() in vm_area_free() and phone vendors started yelling
at me the very next day. call_rcu() with CONFIG_RCU_NOCB_CPU (which
Android uses for power-saving purposes) is already bad enough to show
up in benchmarks, which is why I had to add call_rcu() batching in
https://lore.kernel.org/all/20230109205336.3665937-40-surenb@google.com.
Two rough sketches follow: one of the locking scheme under discussion
and one of the batching idea.
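
For anyone skimming the thread, here is a minimal sketch of the
locking scheme described in the quoted patch summary, plus the unlock
fix Jann suggested up-thread. This is an illustration only: the field
names (vma->lock, vma->lock_seq, mm->lock_seq) follow the summary text
above rather than the final patch, and error handling is omitted.

/* Writer side, called with mmap_lock held for writing. "Locking" a
 * VMA means taking its rw_semaphore once per writer cycle and
 * recording the current sequence number. */
static inline void vma_write_lock(struct vm_area_struct *vma)
{
	mmap_assert_write_locked(vma->vm_mm);

	/* Already marked locked in this writer cycle? Nothing to do. */
	if (vma->lock_seq == READ_ONCE(vma->vm_mm->lock_seq))
		return;

	down_write(&vma->lock);
	vma->lock_seq = READ_ONCE(vma->vm_mm->lock_seq);
	up_write(&vma->lock);
}

/* Reader side, used by the page fault path instead of mmap_lock. */
static inline bool vma_read_trylock(struct vm_area_struct *vma)
{
	if (!down_read_trylock(&vma->lock))
		return false;
	/* A writer marked this VMA locked in the current cycle; back off. */
	if (vma->lock_seq == READ_ONCE(vma->vm_mm->lock_seq)) {
		up_read(&vma->lock);
		return false;
	}
	return true;
}

/* Unlock with Jann's suggested guard: keep the VMA alive across
 * up_read(), since up_read() may touch the lock object after another
 * thread has already acquired it. */
static inline void vma_read_unlock(struct vm_area_struct *vma)
{
	rcu_read_lock();	/* keeps vma alive */
	up_read(&vma->lock);
	rcu_read_unlock();
}

/* When the mmap_write_lock holder is done, bumping mm->lock_seq
 * "unlocks" every VMA marked above in a single step. */
static inline void vma_write_unlock_mm(struct mm_struct *mm)
{
	mmap_assert_write_locked(mm);
	WRITE_ONCE(mm->lock_seq, mm->lock_seq + 1);
}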
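
And since the call_rcu() batching came up: the idea, very roughly, is
to amortize one grace period over many freed VMAs instead of paying
for one call_rcu() per VMA. The sketch below is only my illustration
of the idea; struct vma_batch, VMA_DROP_BATCH, and the
current_batch()/replace_current_batch() helpers are made-up names, not
what the actual patch uses.

#define VMA_DROP_BATCH	32

struct vma_batch {
	struct rcu_head rcu;
	int nr;
	struct vm_area_struct *vmas[VMA_DROP_BATCH];
};

/* Runs after one grace period and frees the whole batch. */
static void vma_batch_free_cb(struct rcu_head *head)
{
	struct vma_batch *b = container_of(head, struct vma_batch, rcu);
	int i;

	for (i = 0; i < b->nr; i++)
		kmem_cache_free(vm_area_cachep, b->vmas[i]);
	kfree(b);
}

void vm_area_free(struct vm_area_struct *vma)
{
	struct vma_batch *b = current_batch();	/* made-up helper */

	b->vmas[b->nr++] = vma;
	if (b->nr == VMA_DROP_BATCH) {
		/* One call_rcu() (and one grace period) covers the
		 * whole batch instead of each VMA individually. */
		call_rcu(&b->rcu, vma_batch_free_cb);
		replace_current_batch();	/* made-up helper */
	}
}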