From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 16 Dec 2024 22:37:53 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: Suren Baghdasaryan
Cc: akpm@linux-foundation.org, willy@infradead.org, liam.howlett@oracle.com,
	lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
	hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
	mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
	oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org,
	brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com,
	hughd@google.com, lokeshgidra@google.com, minchan@google.com,
	jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com,
	pasha.tatashin@soleen.com, klarasmodin@gmail.com, corbet@lwn.net,
	linux-doc@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, kernel-team@android.com
Subject: Re: [PATCH v6 10/16] mm: replace vm_lock and detached flag with a reference count
Message-ID: <20241216213753.GD9803@noisy.programming.kicks-ass.net>
References: <20241216192419.2970941-1-surenb@google.com>
	<20241216192419.2970941-11-surenb@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20241216192419.2970941-11-surenb@google.com>
Sender: owner-linux-mm@kvack.org
Precedence: bulk

On Mon, Dec 16, 2024 at 11:24:13AM -0800, Suren Baghdasaryan wrote:

> +static inline void vma_refcount_put(struct vm_area_struct *vma)
> +{
> +	int refcnt;
> +
> +	if (!__refcount_dec_and_test(&vma->vm_refcnt, &refcnt)) {
> +		rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
> +
> +		if (refcnt & VMA_STATE_LOCKED)
> +			rcuwait_wake_up(&vma->vm_mm->vma_writer_wait);
> +	}
> +}
> +
>  /*
>   * Try to read-lock a vma. The function is allowed to occasionally yield false
>   * locked result to avoid performance overhead, in which case we fall back to
> @@ -710,6 +728,8 @@ static inline void vma_lock_init(struct vm_area_struct *vma)
>   */
>  static inline bool vma_start_read(struct vm_area_struct *vma)
>  {
> +	int oldcnt;
> +
>  	/*
>  	 * Check before locking. A race might cause false locked result.
>  	 * We can use READ_ONCE() for the mm_lock_seq here, and don't need
> @@ -720,13 +740,20 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
>  	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
>  		return false;
>
> +
> +	rwsem_acquire_read(&vma->vmlock_dep_map, 0, 0, _RET_IP_);
> +	/* Limit at VMA_STATE_LOCKED - 2 to leave one count for a writer */
> +	if (unlikely(!__refcount_inc_not_zero_limited(&vma->vm_refcnt, &oldcnt,
> +						      VMA_STATE_LOCKED - 2))) {
> +		rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
>  		return false;
> +	}
> +	lock_acquired(&vma->vmlock_dep_map, _RET_IP_);
>
>  	/*
> +	 * Overflow of vm_lock_seq/mm_lock_seq might produce false locked result.
>  	 * False unlocked result is impossible because we modify and check
> +	 * vma->vm_lock_seq under vma->vm_refcnt protection and mm->mm_lock_seq
>  	 * modification invalidates all existing locks.
>  	 *
>  	 * We must use ACQUIRE semantics for the mm_lock_seq so that if we are
> @@ -734,10 +761,12 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
>  	 * after it has been unlocked.
>  	 * This pairs with RELEASE semantics in vma_end_write_all().
>  	 */
> +	if (oldcnt & VMA_STATE_LOCKED ||
> +	    unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
> +		vma_refcount_put(vma);

Suppose we have a detach racing with a concurrent RCU lookup, like:

	vma = mas_lookup();

				vma_start_write();
				mas_detach();
	vma_start_read()
	rwsem_acquire_read()
	inc // success
				vma_mark_detach();
				dec_and_test // assumes 1->0
					     // is actually 2->1

	if (vm_lock_seq == vma->vm_mm->mm_lock_seq) // true
	  vma_refcount_put()
	    dec_and_test() // 1->0
	    *NO* rwsem_release()

>  		return false;
>  	}
> +
>  	return true;
>  }