From: Peter Zijlstra
Date: Tue, 17 Dec 2024 11:30:35 +0100
To: Suren Baghdasaryan
Cc: akpm@linux-foundation.org, willy@infradead.org, liam.howlett@oracle.com,
	lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
	hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
	mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
	oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org,
	brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com,
	hughd@google.com, lokeshgidra@google.com, minchan@google.com,
	jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com,
	pasha.tatashin@soleen.com, klarasmodin@gmail.com, corbet@lwn.net,
	linux-doc@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, kernel-team@android.com
Subject: Re: [PATCH v6 10/16] mm: replace vm_lock and detached flag with a reference count
Message-ID: <20241217103035.GD11133@noisy.programming.kicks-ass.net>
References: <20241216192419.2970941-1-surenb@google.com>
	<20241216192419.2970941-11-surenb@google.com>
	<20241216213753.GD9803@noisy.programming.kicks-ass.net>
On Mon, Dec 16, 2024 at 01:44:45PM -0800, Suren Baghdasaryan wrote:
> On Mon, Dec 16, 2024 at 1:38 PM Peter Zijlstra wrote:
> >
> > On Mon, Dec 16, 2024 at 11:24:13AM -0800, Suren Baghdasaryan wrote:
> > > +static inline void vma_refcount_put(struct vm_area_struct *vma)
> > > +{
> > > +	int refcnt;
> > > +
> > > +	if (!__refcount_dec_and_test(&vma->vm_refcnt, &refcnt)) {
> > > +		rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
> > > +
> > > +		if (refcnt & VMA_STATE_LOCKED)
> > > +			rcuwait_wake_up(&vma->vm_mm->vma_writer_wait);
> > > +	}
> > > +}
> > > +
> > >  /*
> > >   * Try to read-lock a vma. The function is allowed to occasionally yield false
> > >   * locked result to avoid performance overhead, in which case we fall back to
> > > @@ -710,6 +728,8 @@ static inline void vma_lock_init(struct vm_area_struct *vma)
> > >   */
> > >  static inline bool vma_start_read(struct vm_area_struct *vma)
> > >  {
> > > +	int oldcnt;
> > > +
> > >  	/*
> > >  	 * Check before locking. A race might cause false locked result.
> > >  	 * We can use READ_ONCE() for the mm_lock_seq here, and don't need
> > > @@ -720,13 +740,20 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
> > >  	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
> > >  		return false;
> > >
> > > +
> > > +	rwsem_acquire_read(&vma->vmlock_dep_map, 0, 0, _RET_IP_);
> > > +	/* Limit at VMA_STATE_LOCKED - 2 to leave one count for a writer */
> > > +	if (unlikely(!__refcount_inc_not_zero_limited(&vma->vm_refcnt, &oldcnt,
> > > +						      VMA_STATE_LOCKED - 2))) {
> > > +		rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
> > >  		return false;
> > > +	}
> > > +	lock_acquired(&vma->vmlock_dep_map, _RET_IP_);
> > >
> > >  	/*
> > > +	 * Overflow of vm_lock_seq/mm_lock_seq might produce false locked result.
> > >  	 * False unlocked result is impossible because we modify and check
> > > +	 * vma->vm_lock_seq under vma->vm_refcnt protection and mm->mm_lock_seq
> > >  	 * modification invalidates all existing locks.
> > >  	 *
> > >  	 * We must use ACQUIRE semantics for the mm_lock_seq so that if we are
> > > @@ -734,10 +761,12 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
> > >  	 * after it has been unlocked.
> > >  	 * This pairs with RELEASE semantics in vma_end_write_all().
> > >  	 */
> > > +	if (oldcnt & VMA_STATE_LOCKED ||
> > > +	    unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
> > > +		vma_refcount_put(vma);
> >
> > Suppose we have a detach race with a concurrent RCU lookup like:
> >
> > 	vma = mas_lookup();
> >
> > 	vma_start_write();
> > 	mas_detach();
> > 				vma_start_read()
> > 				rwsem_acquire_read()
> > 				inc // success
> > 	vma_mark_detach();
> > 	dec_and_test // assumes 1->0
> > 		     // is actually 2->1
> >
> > 				if (vm_lock_seq == vma->vm_mm->mm_lock_seq) // true
> > 				vma_refcount_put
> > 				  dec_and_test() // 1->0
> > 				  *NO* rwsem_release()
>
> Yes, this is possible. I think that's not a problem until we start
> reusing the vmas, and I deal with this race later in this patchset.
> I think what you described here is the same race I mention in the
> description of this patch:
> https://lore.kernel.org/all/20241216192419.2970941-14-surenb@google.com/
> I introduce vma_ensure_detached() in that patch to handle this case
> and ensure that vmas are detached before they are returned to the
> slab cache for reuse. Does that make sense?

So I just replied there, and no, I don't think it makes sense. Just put
the kmem_cache_free() in vma_refcount_put(), to be done on 0.

Anyway, my point was more about the weird entanglement of lockdep and
the refcount. Just pull the lockdep annotation out of _put() and put it
explicitly in the vma_start_read() error paths and vma_end_read().

Additionally, having vma_end_write() would allow you to put a lockdep
annotation in vma_{start,end}_write() -- which was, I think, the original
reason I proposed it a while back; that, and having improved clarity when
reading the code, since explicitly marking the end of a section is
helpful.