From: Suren Baghdasaryan
Date: Thu, 9 Jan 2025 16:16:59 -0800
Subject: Re: [PATCH v8 00/16] move per-vma lock into vm_area_struct
To: Vlastimil Babka
Cc: akpm@linux-foundation.org, peterz@infradead.org, willy@infradead.org,
 liam.howlett@oracle.com, lorenzo.stoakes@oracle.com, mhocko@suse.com,
 hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
 mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
 oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org,
 dhowells@redhat.com, hdanton@sina.com, hughd@google.com,
 lokeshgidra@google.com, minchan@google.com, jannh@google.com,
 shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com,
 klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net,
 linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 kernel-team@android.com
References: <20250109023025.2242447-1-surenb@google.com>
 <52ecd3fa-5978-4f4b-b969-c42b00a5b885@suse.cz>

On Thu, Jan 9, 2025 at 7:59 AM Suren Baghdasaryan wrote:
>
> On Thu, Jan 9, 2025 at 5:40 AM Vlastimil Babka wrote:
> >
> > Btw the subject became rather incomplete given all the series does :)
>
> Missed this one. What do you think is worth mentioning here? It's
> true, the patchset does many small things, but I wanted to outline the
> main conceptual changes. Please LMK if you think there are more
> changes big enough to be mentioned here.

I just realized that your comment was only about the subject of this
cover letter. Maybe something like this:

per-vma lock and vm_area_struct cache optimizations

Would that be better?

>
> >
> > On 1/9/25 3:30 AM, Suren Baghdasaryan wrote:
> > > Back when per-vma locks were introduced, vm_lock was moved out of
> > > vm_area_struct in [1] because of the performance regression caused by
> > > false cacheline sharing. Recent investigation [2] revealed that the
> > > regression is limited to a rather old Broadwell microarchitecture and
> > > even there it can be mitigated by disabling adjacent cacheline
> > > prefetching, see [3].
> > > Splitting a single logical structure into multiple ones leads to more
> > > complicated management, extra pointer dereferences and overall less
> > > maintainable code. When that split-away part is a lock, it complicates
> > > things even further. With no performance benefits, there are no reasons
> > > for this split. Merging the vm_lock back into vm_area_struct also allows
> > > vm_area_struct to use SLAB_TYPESAFE_BY_RCU later in this patchset.
> > > This patchset:
> > > 1. moves vm_lock back into vm_area_struct, aligning it at the cacheline
> > >    boundary and changing the cache to be cacheline-aligned to minimize
> > >    cacheline sharing;
> > > 2. changes vm_area_struct initialization to mark new vma as detached until
> > >    it is inserted into vma tree;
> > > 3. replaces vm_lock and vma->detached flag with a reference counter;
> > > 4. changes vm_area_struct cache to SLAB_TYPESAFE_BY_RCU to allow for their
> > >    reuse and to minimize call_rcu() calls.
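As a rough illustration of point 3 above: the following is a user-space
sketch with made-up names (vma_stub, vma_stub_attach, vma_stub_try_pin,
vma_stub_unpin), not the code from the series. It only shows how a single
counter can encode both "detached" (zero) and the reader pin, with readers
taking the pin via an inc-not-zero operation.

#include <stdatomic.h>
#include <stdbool.h>

struct vma_stub {
        atomic_int vm_refcnt;           /* 0 == detached, >= 1 == attached */
};

static void vma_stub_attach(struct vma_stub *vma)
{
        atomic_store(&vma->vm_refcnt, 1);       /* baseline "attached" reference */
}

static bool vma_stub_try_pin(struct vma_stub *vma)
{
        int old = atomic_load(&vma->vm_refcnt);

        /* inc-not-zero: a detached (or about-to-be-freed) vma cannot be pinned */
        while (old > 0) {
                if (atomic_compare_exchange_weak(&vma->vm_refcnt, &old, old + 1))
                        return true;
        }
        return false;
}

static void vma_stub_unpin(struct vma_stub *vma)
{
        atomic_fetch_sub(&vma->vm_refcnt, 1);
}

Folding the detached flag into the counter is what later lets an object be
looked up under RCU and validated simply by whether the pin succeeds.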
> > >
> > > Pagefault microbenchmarks show performance improvement:
> > > Hmean     faults/cpu-1    507926.5547 (   0.00%)   506519.3692 *  -0.28%*
> > > Hmean     faults/cpu-4    479119.7051 (   0.00%)   481333.6802 *   0.46%*
> > > Hmean     faults/cpu-7    452880.2961 (   0.00%)   455845.6211 *   0.65%*
> > > Hmean     faults/cpu-12   347639.1021 (   0.00%)   352004.2254 *   1.26%*
> > > Hmean     faults/cpu-21   200061.2238 (   0.00%)   229597.0317 *  14.76%*
> > > Hmean     faults/cpu-30   145251.2001 (   0.00%)   164202.5067 *  13.05%*
> > > Hmean     faults/cpu-48   106848.4434 (   0.00%)   120641.5504 *  12.91%*
> > > Hmean     faults/cpu-56    92472.3835 (   0.00%)   103464.7916 *  11.89%*
> > > Hmean     faults/sec-1    507566.1468 (   0.00%)   506139.0811 *  -0.28%*
> > > Hmean     faults/sec-4   1880478.2402 (   0.00%)  1886795.6329 *   0.34%*
> > > Hmean     faults/sec-7   3106394.3438 (   0.00%)  3140550.7485 *   1.10%*
> > > Hmean     faults/sec-12  4061358.4795 (   0.00%)  4112477.0206 *   1.26%*
> > > Hmean     faults/sec-21  3988619.1169 (   0.00%)  4577747.1436 *  14.77%*
> > > Hmean     faults/sec-30  3909839.5449 (   0.00%)  4311052.2787 *  10.26%*
> > > Hmean     faults/sec-48  4761108.4691 (   0.00%)  5283790.5026 *  10.98%*
> > > Hmean     faults/sec-56  4885561.4590 (   0.00%)  5415839.4045 *  10.85%*
> >
> > Given how patch 2 discusses memory growth due to moving the lock, should
> > also patch 11 discuss how the replacement with refcount reduces the
> > memory footprint? And/or the cover letter could summarize the impact of
> > the whole series in that aspect? Perhaps the refcount doesn't reduce
> > anything as it's smaller but sits alone in the cacheline? Could it be
> > grouped with some non-hot fields instead as a followup, so we could get
> > to <=192 (non-debug) size without impacting performance?
> >
> > > Changes since v7 [4]:
> > > - Removed additional parameter for vma_iter_store() and introduced
> > >   vma_iter_store_attached() instead, per Vlastimil Babka and
> > >   Liam R. Howlett
> > > - Fixed coding style nits, per Vlastimil Babka
> > > - Added Reviewed-bys and Acked-bys, per Vlastimil Babka
> > > - Added Reviewed-bys and Acked-bys, per Liam R. Howlett
> > > - Added Acked-by, per Davidlohr Bueso
> > > - Removed unnecessary patch changing nommu.c
> > > - Folded a fixup patch [5] into the patch it was fixing
> > > - Changed calculation in __refcount_add_not_zero_limited() to avoid
> > >   overflow, to change the limit to be inclusive and to use INT_MAX to
> > >   indicate no limits, per Vlastimil Babka and Matthew Wilcox
> > > - Folded a fixup patch [6] into the patch it was fixing
> > > - Added vm_refcnt rules summary in the changelog, per Liam R. Howlett
> > > - Changed writers to not increment vm_refcnt and adjusted VMA_REF_LIMIT
> > >   to not reserve one count for a writer, per Liam R. Howlett
> > > - Changed vma_refcount_put() to wake up writers only when the last reader
> > >   is leaving, per Liam R. Howlett
> > > - Fixed rwsem_acquire_read() parameters when read-locking a vma to match
> > >   the way down_read_trylock() does lockdep, per Vlastimil Babka
> > > - Folded vma_lockdep_init() into vma_lock_init() for simplicity
> > > - Brought back vma_copy() to keep vm_refcnt at 0 during reuse,
> > >   per Vlastimil Babka
> > >
> > > What I did not include in this patchset:
> > > - Liam's suggestion to change dump_vma() output since it's unclear to me
> > >   what it should look like. The patch is for debug only and not critical for
> > >   the rest of the series; we can change the output later or even drop it if
> > >   necessary.
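On the __refcount_add_not_zero_limited() item in the changelog above, a rough
sketch of the general shape of a limit-bounded inc-not-zero, again in made-up
user-space form rather than the actual refcount.h change: the point is that
comparing the old value against an inclusive limit avoids computing old + 1
before the check, so the check itself cannot overflow, and INT_MAX then
naturally means "no limit".

#include <stdatomic.h>
#include <stdbool.h>
#include <limits.h>

static bool inc_not_zero_limited(atomic_int *ref, int limit)
{
        int old = atomic_load(ref);

        do {
                if (old == 0)
                        return false;   /* object already dead/detached */
                if (old >= limit)
                        return false;   /* inclusive limit already reached */
        } while (!atomic_compare_exchange_weak(ref, &old, old + 1));

        return true;
}

static bool inc_not_zero(atomic_int *ref)
{
        return inc_not_zero_limited(ref, INT_MAX);      /* INT_MAX == no limit */
}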
> > >
> > > [1] https://lore.kernel.org/all/20230227173632.3292573-34-surenb@google.com/
> > > [2] https://lore.kernel.org/all/ZsQyI%2F087V34JoIt@xsang-OptiPlex-9020/
> > > [3] https://lore.kernel.org/all/CAJuCfpEisU8Lfe96AYJDZ+OM4NoPmnw9bP53cT_kbfP_pR+-2g@mail.gmail.com/
> > > [4] https://lore.kernel.org/all/20241226170710.1159679-1-surenb@google.com/
> > > [5] https://lore.kernel.org/all/20250107030415.721474-1-surenb@google.com/
> > > [6] https://lore.kernel.org/all/20241226200335.1250078-1-surenb@google.com/
> > >
> > > Patchset applies over mm-unstable after reverting v7
> > > (current SHA range: 588f0086398e - fb2270654630)
> > >
> > > Suren Baghdasaryan (16):
> > >   mm: introduce vma_start_read_locked{_nested} helpers
> > >   mm: move per-vma lock into vm_area_struct
> > >   mm: mark vma as detached until it's added into vma tree
> > >   mm: introduce vma_iter_store_attached() to use with attached vmas
> > >   mm: mark vmas detached upon exit
> > >   types: move struct rcuwait into types.h
> > >   mm: allow vma_start_read_locked/vma_start_read_locked_nested to fail
> > >   mm: move mmap_init_lock() out of the header file
> > >   mm: uninline the main body of vma_start_write()
> > >   refcount: introduce __refcount_{add|inc}_not_zero_limited
> > >   mm: replace vm_lock and detached flag with a reference count
> > >   mm/debug: print vm_refcnt state when dumping the vma
> > >   mm: remove extra vma_numab_state_init() call
> > >   mm: prepare lock_vma_under_rcu() for vma reuse possibility
> > >   mm: make vma cache SLAB_TYPESAFE_BY_RCU
> > >   docs/mm: document latest changes to vm_lock
> > >
> > >  Documentation/mm/process_addrs.rst |  44 +++++----
> > >  include/linux/mm.h                 | 152 ++++++++++++++++++++++-------
> > >  include/linux/mm_types.h           |  36 ++++---
> > >  include/linux/mmap_lock.h          |   6 --
> > >  include/linux/rcuwait.h            |  13 +--
> > >  include/linux/refcount.h           |  20 +++-
> > >  include/linux/slab.h               |   6 --
> > >  include/linux/types.h              |  12 +++
> > >  kernel/fork.c                      | 128 +++++++++++-------------
> > >  mm/debug.c                         |  12 +++
> > >  mm/init-mm.c                       |   1 +
> > >  mm/memory.c                        |  94 +++++++++++++++---
> > >  mm/mmap.c                          |   3 +-
> > >  mm/userfaultfd.c                   |  32 +++---
> > >  mm/vma.c                           |  23 ++---
> > >  mm/vma.h                           |  15 ++-
> > >  tools/testing/vma/linux/atomic.h   |   5 +
> > >  tools/testing/vma/vma_internal.h   |  93 ++++++++----------
> > >  18 files changed, 435 insertions(+), 260 deletions(-)