From: Suren Baghdasaryan <surenb@google.com>
Date: Tue, 17 Jan 2023 14:36:47 -0800
Subject: Re: [PATCH 12/41] mm: add per-VMA lock and helper functions to control it
To: Jann Horn
Cc: peterz@infradead.org, Ingo Molnar, Will Deacon, akpm@linux-foundation.org,
    michel@lespinasse.org, jglisse@google.com, mhocko@suse.com, vbabka@suse.cz,
    hannes@cmpxchg.org, mgorman@techsingularity.net, dave@stgolabs.net,
    willy@infradead.org, liam.howlett@oracle.com, ldufour@linux.ibm.com,
    laurent.dufour@fr.ibm.com, paulmck@kernel.org, luto@kernel.org,
    songliubraving@fb.com, peterx@redhat.com, david@redhat.com,
    dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de,
    kent.overstreet@linux.dev, punit.agrawal@bytedance.com, lstoakes@gmail.com,
    peterjung1337@gmail.com, rientjes@google.com, axelrasmussen@google.com,
    joelaf@google.com, minchan@google.com, shakeelb@google.com,
    tatashin@google.com, edumazet@google.com, gthelen@google.com,
    gurua@google.com, arjunroy@google.com, soheil@google.com,
    hughlynch@google.com, leewalsh@google.com, posk@google.com,
    linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
    linuxppc-dev@lists.ozlabs.org, x86@kernel.org,
    linux-kernel@vger.kernel.org, kernel-team@android.com
References: <20230109205336.3665937-1-surenb@google.com> <20230109205336.3665937-13-surenb@google.com>
Content-Type: text/plain; charset="UTF-8"

On Tue, Jan 17, 2023 at 1:46 PM Jann Horn wrote:
>
> On Tue, Jan 17, 2023 at 10:28 PM Suren Baghdasaryan wrote:
> > On Tue, Jan 17, 2023 at 10:03 AM Jann Horn wrote:
> > >
> > > +locking maintainers
> >
> > Thanks! I'll CC the locking maintainers in the next posting.
> >
> > >
> > > On Mon, Jan 9, 2023 at 9:54 PM Suren Baghdasaryan wrote:
> > > > Introduce a per-VMA rw_semaphore to be used during page fault handling
> > > > instead of mmap_lock. Because there are cases when multiple VMAs need
> > > > to be exclusively locked during VMA tree modifications, instead of the
> > > > usual lock/unlock pattern we mark a VMA as locked by taking per-VMA lock
> > > > exclusively and setting vma->lock_seq to the current mm->lock_seq. When
> > > > mmap_write_lock holder is done with all modifications and drops mmap_lock,
> > > > it will increment mm->lock_seq, effectively unlocking all VMAs marked as
> > > > locked.
> > > [...]
> > > > +static inline void vma_read_unlock(struct vm_area_struct *vma)
> > > > +{
> > > > +	up_read(&vma->lock);
> > > > +}
> > >
> > > One thing that might be gnarly here is that I think you might not be
> > > allowed to use up_read() to fully release ownership of an object -
> > > from what I remember, I think that up_read() (unlike something like
> > > spin_unlock()) can access the lock object after it's already been
> > > acquired by someone else. So if you want to protect against concurrent
> > > deletion, this might have to be something like:
> > >
> > > rcu_read_lock(); /* keeps vma alive */
> > > up_read(&vma->lock);
> > > rcu_read_unlock();
> >
> > But for deleting VMA one would need to write-lock the vma->lock first,
> > which I assume can't happen until this up_read() is complete. Is that
> > assumption wrong?
>
> __up_read() does:
>
>	rwsem_clear_reader_owned(sem);
>	tmp = atomic_long_add_return_release(-RWSEM_READER_BIAS, &sem->count);
>	DEBUG_RWSEMS_WARN_ON(tmp < 0, sem);
>	if (unlikely((tmp & (RWSEM_LOCK_MASK|RWSEM_FLAG_WAITERS)) ==
>		     RWSEM_FLAG_WAITERS)) {
>		clear_nonspinnable(sem);
>		rwsem_wake(sem);
>	}
>
> The atomic_long_add_return_release() is the point where we are doing
> the main lock-releasing.
>
> So if a reader dropped the read-lock while someone else was waiting on
> the lock (RWSEM_FLAG_WAITERS) and no other readers were holding the
> lock together with it, the reader also does clear_nonspinnable() and
> rwsem_wake() afterwards.
> But in rwsem_down_write_slowpath(), after we've set
> RWSEM_FLAG_WAITERS, we can return successfully immediately once
> rwsem_try_write_lock() sees that there are no active readers or
> writers anymore (if RWSEM_LOCK_MASK is unset and the cmpxchg
> succeeds). We're not necessarily waiting for the "nonspinnable" bit or
> the wake.
>
> So yeah, I think down_write() can return successfully before up_read()
> is done with its memory accesses.
>
> (Spinlocks are different - the kernel relies on being able to drop
> references via spin_unlock() in some places.)

Thanks for bringing this up. I can add rcu_read_{lock,unlock}() as you
suggested, and that would fix the issue because we free VMAs from
call_rcu(). However, this feels to me like a flaw in the rw_semaphore
design: this locking pattern is unsafe and might lead to a use-after-free.
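
For context, a condensed sketch of the marking scheme the quoted
changelog describes (the field names vma->lock, vma->lock_seq and
mm->lock_seq come from the quoted text; the helper name and the
early-return check are my assumptions, not the patch's actual code):

	static inline void vma_write_lock(struct vm_area_struct *vma)
	{
		/* Marking a VMA requires mmap_lock held for writing. */
		mmap_assert_write_locked(vma->vm_mm);

		/* Already marked during this mmap_write_lock cycle. */
		if (vma->lock_seq == vma->vm_mm->lock_seq)
			return;

		/*
		 * Write-acquire the per-VMA lock once and record the
		 * current sequence number; incrementing mm->lock_seq on
		 * mmap_write_unlock() then "unlocks" all marked VMAs at
		 * once, without touching each vma->lock again.
		 */
		down_write(&vma->lock);
		vma->lock_seq = vma->vm_mm->lock_seq;
		up_write(&vma->lock);
	}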
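
To spell out the interleaving that makes the plain up_read() pattern
unsafe (a sketch based on the __up_read() and
rwsem_down_write_slowpath() behavior quoted above):

	/*
	 * Reader (up_read)                  Writer (down_write, then free)
	 * ----------------                  ------------------------------
	 *                                   sets RWSEM_FLAG_WAITERS,
	 *                                   waits in the slowpath
	 * atomic_long_add_return_release()
	 *   drops RWSEM_READER_BIAS from
	 *   sem->count; lock is now free
	 *                                   rwsem_try_write_lock() cmpxchg
	 *                                   succeeds, down_write() returns
	 *                                   ...unlinks and frees the vma...
	 * sees RWSEM_FLAG_WAITERS set:
	 *   clear_nonspinnable(sem);
	 *   rwsem_wake(sem);
	 *   both touch the freed vma->lock: use-after-free
	 */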
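
Concretely, the fix you suggest would make the helper look something
like this (a sketch, assuming VMAs are freed via call_rcu() as in this
series):

	static inline void vma_read_unlock(struct vm_area_struct *vma)
	{
		/*
		 * Keep the vma alive across up_read(): a concurrent
		 * down_write() can return, and the writer can then free
		 * the vma, before up_read() has finished touching
		 * vma->lock. Since VMAs are freed via call_rcu(), an RCU
		 * read-side critical section delays the actual free until
		 * up_read() is done with the rw_semaphore.
		 */
		rcu_read_lock();
		up_read(&vma->lock);
		rcu_read_unlock();
	}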