From: Suren Baghdasaryan <surenb@google.com>
Date: Mon, 11 Nov 2024 19:27:56 -0800
Subject: Re: [PATCH 0/4] move per-vma lock into vm_area_struct
To: "Liam R. Howlett", akpm@linux-foundation.org, willy@infradead.org,
 lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
 hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
 mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
 oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org,
 brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com,
 hughd@google.com, minchan@google.com, jannh@google.com,
 shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@android.com
References: <20241111205506.3404479-1-surenb@google.com>

On Mon, Nov 11, 2024 at 6:48 PM 'Liam R. Howlett' via kernel-team wrote:
>
> * Suren Baghdasaryan [241111 16:41]:
> > On Mon, Nov 11, 2024 at 12:55 PM Suren Baghdasaryan wrote:
> > >
> > > Back when per-vma locks were introduced, vm_lock was moved out of
> > > vm_area_struct in [1] because of the performance regression caused by
> > > false cacheline sharing. Recent investigation [2] revealed that the
> > > regression is limited to a rather old Broadwell microarchitecture and
> > > even there it can be mitigated by disabling adjacent cacheline
> > > prefetching, see [3].
> > > This patchset moves vm_lock back into vm_area_struct, aligning it at
> > > the cacheline boundary and changing the cache to be cacheline-aligned
> > > as well. This causes VMA memory consumption to grow from
> > > 160 (vm_area_struct) + 40 (vm_lock) bytes to 256 bytes:
> > >
> > > slabinfo before (columns: objsize, objperslab, pagesperslab):
> > >  vma_lock        ...  40 102 1 : ...
> > >  vm_area_struct  ... 160  51 2 : ...
> > >
> > > slabinfo after moving vm_lock:
> > >  vm_area_struct  ... 256  32 2 : ...
> > >
> > > Aggregate VMA memory consumption per 1000 VMAs grows from 50 to 64
> > > pages, which is 5.5MB per 100000 VMAs.
> > > To minimize memory overhead, the vm_lock implementation is changed
> > > from using an rw_semaphore (40 bytes) to an atomic (8 bytes), and
> > > several vm_area_struct members are moved into the last cacheline,
> > > resulting in a less fragmented structure:
>
> Wait a second, this is taking 40B down to 8B, but the alignment of the
> vma will surely absorb that 32B difference? The struct sum is 153B
> according to what you have below so we won't go over 192B. What am I
> missing?

Take a look at the last patch in the series, "[PATCH 4/4] mm: move
lesser used vma_area_struct members into the last cacheline". I move
some struct members from the earlier cachelines into the last
cacheline, where the vm_lock is staying.
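Since the 40B -> 8B reduction Liam is asking about comes from patch 3
("mm: replace rw_semaphore with atomic_t in vma_lock"), a minimal sketch
of the general technique may help: pack the whole reader/writer state
into a single atomic_t. This is illustrative only, not the actual patch;
the names and the writer-bit encoding are made up for the example.

    #include <linux/atomic.h>

    /* hypothetical encoding: low bits count readers, high bit marks a writer */
    #define VMA_LOCK_WR_BIT	0x80000000

    struct vma_lock_sketch {
    	atomic_t state;	/* 0 = unlocked, N = N readers, WR_BIT = write-locked */
    };

    /* speculative read lock: fail (caller falls back to mmap_lock) if
     * a writer currently holds the lock */
    static inline bool vma_sketch_read_trylock(struct vma_lock_sketch *lock)
    {
    	int old = atomic_read(&lock->state);

    	while (!(old & VMA_LOCK_WR_BIT)) {
    		/* on failure atomic_try_cmpxchg() refreshes 'old', so retry */
    		if (atomic_try_cmpxchg(&lock->state, &old, old + 1))
    			return true;
    	}
    	return false;
    }

    static inline void vma_sketch_read_unlock(struct vma_lock_sketch *lock)
    {
    	atomic_dec(&lock->state);
    }

The real series additionally has to let writers wait for readers to
drain, which the sketch omits; the point is only that the entire lock
state can fit in a handful of bytes instead of rw_semaphore's 40.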
> > >
> > > struct vm_area_struct {
> > >         union {
> > >                 struct {
> > >                         long unsigned int vm_start;      /*   0   8 */
> > >                         long unsigned int vm_end;        /*   8   8 */
> > >                 };                                       /*   0  16 */
> > >                 struct callback_head vm_rcu;             /*   0  16 */
> > >         } __attribute__((__aligned__(8)));               /*   0  16 */
> > >         struct mm_struct * vm_mm;                        /*  16   8 */
> > >         pgprot_t vm_page_prot;                           /*  24   8 */
> > >         union {
> > >                 const vm_flags_t vm_flags;               /*  32   8 */
> > >                 vm_flags_t __vm_flags;                   /*  32   8 */
> > >         };                                               /*  32   8 */
> > >         bool detached;                                   /*  40   1 */
> > >
> > >         /* XXX 3 bytes hole, try to pack */
> > >
> > >         unsigned int vm_lock_seq;                        /*  44   4 */
> > >         struct list_head anon_vma_chain;                 /*  48  16 */
> > >         /* --- cacheline 1 boundary (64 bytes) --- */
> > >         struct anon_vma * anon_vma;                      /*  64   8 */
> > >         const struct vm_operations_struct * vm_ops;      /*  72   8 */
> > >         long unsigned int vm_pgoff;                      /*  80   8 */
> > >         struct file * vm_file;                           /*  88   8 */
> > >         void * vm_private_data;                          /*  96   8 */
> > >         atomic_long_t swap_readahead_info;               /* 104   8 */
> > >         struct mempolicy * vm_policy;                    /* 112   8 */
> > >
> > >         /* XXX 8 bytes hole, try to pack */
> > >
> > >         /* --- cacheline 2 boundary (128 bytes) --- */
> > >         struct vma_lock vm_lock __attribute__((__aligned__(64))); /* 128 4 */
> > >
> > >         /* XXX 4 bytes hole, try to pack */
> > >
> > >         struct {
> > >                 struct rb_node rb __attribute__((__aligned__(8))); /* 136 24 */
> > >                 long unsigned int rb_subtree_last;       /* 160   8 */
> > >         } __attribute__((__aligned__(8))) shared;        /* 136  32 */
> > >         struct vm_userfaultfd_ctx vm_userfaultfd_ctx;    /* 168   0 */
> > >
> > >         /* size: 192, cachelines: 3, members: 17 */
> > >         /* sum members: 153, holes: 3, sum holes: 15 */
> > >         /* padding: 24 */
> > >         /* forced alignments: 3, forced holes: 2, sum forced holes: 12 */
> > > } __attribute__((__aligned__(64)));
> > >
> > > Memory consumption per 1000 VMAs becomes 48 pages, saving 2 pages
> > > compared to the 50 pages in the baseline:
> > >
> > > slabinfo after vm_area_struct changes:
> > >  vm_area_struct  ... 192  42 2 : ...
> > >
> > > Performance measurements using the pft test on x86 do not show a
> > > considerable difference; on a Pixel 6 running Android the series
> > > results in a 3-5% improvement in faults per second.
> > >
> > > [1] https://lore.kernel.org/all/20230227173632.3292573-34-surenb@google.com/
> > > [2] https://lore.kernel.org/all/ZsQyI%2F087V34JoIt@xsang-OptiPlex-9020/
> > > [3] https://lore.kernel.org/all/CAJuCfpEisU8Lfe96AYJDZ+OM4NoPmnw9bP53cT_kbfP_pR+-2g@mail.gmail.com/
> >
> > And of course I forgot to update Lorenzo's new locking documentation :/
> > Will add that in the next version.
> >
> > >
> > > Suren Baghdasaryan (4):
> > >   mm: introduce vma_start_read_locked{_nested} helpers
> > >   mm: move per-vma lock into vm_area_struct
> > >   mm: replace rw_semaphore with atomic_t in vma_lock
> > >   mm: move lesser used vma_area_struct members into the last cacheline
> > >
> > >  include/linux/mm.h        | 163 ++++++++++++++++++++++++++++++++++---
> > >  include/linux/mm_types.h  |  59 +++++++++-----
> > >  include/linux/mmap_lock.h |   3 +
> > >  kernel/fork.c             |  50 ++----------
> > >  mm/init-mm.c              |   2 +
> > >  mm/userfaultfd.c          |  14 ++--
> > >  6 files changed, 205 insertions(+), 86 deletions(-)
> > >
> > >
> > > base-commit: 931086f2a88086319afb57cd3925607e8cda0a9f
> > > --
> > > 2.47.0.277.g8800431eea-goog
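A note on verifying the numbers above. The layout dump is pahole output
(pahole -C vm_area_struct vmlinux regenerates it), and the page counts
follow directly from the quoted slab geometry: before, 1000 VMAs cost
about 1000/51 slabs * 2 pages (vm_area_struct) plus 1000/102 slabs *
1 page (vma_lock), roughly 50 pages; with the final 192-byte layout,
about 1000/42 slabs * 2 pages, roughly 48 pages. The "size: 192,
cachelines: 3" result depends on two things working together, sketched
below with hypothetical names (this is not the series itself): the
member-level alignment that pins vm_lock to a cacheline boundary, and a
slab cache aligned so that objects themselves stay cacheline-aligned.

    #include <linux/atomic.h>
    #include <linux/cache.h>
    #include <linux/errno.h>
    #include <linux/init.h>
    #include <linux/slab.h>

    struct vma_sketch {
    	unsigned long vm_start;
    	unsigned long vm_end;
    	/* ... hot, mostly-read members filling cachelines 0-1 ... */

    	/* forces a new cacheline (offset 128 in the dump above) */
    	atomic_t vm_lock ____cacheline_aligned_in_smp;
    	/* ... lesser used members sharing the lock's cacheline ... */
    } ____cacheline_aligned_in_smp;

    static struct kmem_cache *vma_sketch_cachep;

    static int __init vma_sketch_cache_init(void)
    {
    	/*
    	 * Without matching object alignment, slab packing could place
    	 * objects at odd offsets and defeat the member annotation.
    	 */
    	vma_sketch_cachep = kmem_cache_create("vma_sketch",
    					      sizeof(struct vma_sketch),
    					      __alignof__(struct vma_sketch),
    					      SLAB_HWCACHE_ALIGN, NULL);
    	return vma_sketch_cachep ? 0 : -ENOMEM;
    }

Drop either half of that pairing and the false-sharing mitigation the
cover letter describes is lost, even though pahole would still report
the same member offsets within the struct.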