Date: Fri, 10 Jan 2025 18:01:05 +0100
From: Peter Zijlstra
To: Suren Baghdasaryan
Cc: akpm@linux-foundation.org, willy@infradead.org, liam.howlett@oracle.com,
	lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
	hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
	mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
	oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org,
	brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com,
	hughd@google.com, lokeshgidra@google.com, minchan@google.com,
	jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com,
	pasha.tatashin@soleen.com, klarasmodin@gmail.com,
	richard.weiyang@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@android.com
Subject: Re: [PATCH v8 00/16] move per-vma lock into vm_area_struct
Message-ID: <20250110170105.GE4213@noisy.programming.kicks-ass.net>
References: <20250109023025.2242447-1-surenb@google.com>
	<20250109115142.GC2981@noisy.programming.kicks-ass.net>
On Thu, Jan 09, 2025 at 07:48:32AM -0800, Suren Baghdasaryan wrote:
> On Thu, Jan 9, 2025 at 3:51 AM Peter Zijlstra wrote:
> >
> > On Wed, Jan 08, 2025 at 06:30:09PM -0800, Suren Baghdasaryan wrote:
> > > Back when per-vma locks were introduced, vm_lock was moved out of
> > > vm_area_struct in [1] because of the performance regression caused by
> > > false cacheline sharing.
> > > Recent investigation [2] revealed that the regression is limited to a
> > > rather old Broadwell microarchitecture, and even there it can be
> > > mitigated by disabling adjacent-cacheline prefetching, see [3].
> > >
> > > Splitting a single logical structure into multiple ones leads to more
> > > complicated management, extra pointer dereferences and overall less
> > > maintainable code. When that split-away part is a lock, it complicates
> > > things even further. With no performance benefits, there are no reasons
> > > for this split. Merging the vm_lock back into vm_area_struct also allows
> > > vm_area_struct to use SLAB_TYPESAFE_BY_RCU later in this patchset.
> > >
> > > This patchset:
> > > 1. moves vm_lock back into vm_area_struct, aligning it at the cacheline
> > >    boundary and changing the cache to be cacheline-aligned to minimize
> > >    cacheline sharing;
> > > 2. changes vm_area_struct initialization to mark a new vma as detached
> > >    until it is inserted into the vma tree;
> > > 3. replaces vm_lock and the vma->detached flag with a reference counter;
> > > 4. changes the vm_area_struct cache to SLAB_TYPESAFE_BY_RCU to allow for
> > >    their reuse and to minimize call_rcu() calls.
> >
> > Does not clean up that reattach nonsense :-(
>
> Oh, no. I think it does. That's why in [1] I introduce
> vma_iter_store_attached() to be used on already attached vmas and to
> avoid marking them attached again. Also I added assertions in
> vma_mark_attached()/vma_mark_detached() to avoid re-attaching or
> re-detaching. Unless I misunderstood your comment?

Hmm, I'll go read the thing again, maybe I missed it.