From: Barry Song <21cnbao@gmail.com>
Date: Wed, 28 May 2025 17:01:32 +0800
Subject: Re: [PATCH RFC] mm: use per_vma lock for MADV_DONTNEED
To: Lokesh Gidra
Cc: Lorenzo Stoakes, akpm@linux-foundation.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Barry Song, "Liam R. Howlett",
	David Hildenbrand, Vlastimil Babka, Jann Horn, Suren Baghdasaryan,
	Tangquan Zheng
References: <20250527044145.13153-1-21cnbao@gmail.com> <93385672-927f-4de5-a158-fc3fc0424be0@lucifer.local>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
On Wed, May 28, 2025 at 4:40 AM Lokesh Gidra wrote:
>
> Thanks a lot, Barry. This was overdue.
>
> On Tue, May 27, 2025 at 2:20 AM Lorenzo Stoakes wrote:
> >
> > Overall - thanks for this, and I'm not sure why we didn't think of doing
> > this sooner :P this seems like a super valid thing to try to use the vma
> > lock with.
> >
> > I see you've cc'd Suren who has the most expertise in this and can
> > hopefully audit this and ensure all is good, but from the process address
> > doc (see below), I think we're good to just have the VMA stabilised for a
> > zap.
> >
> > On Tue, May 27, 2025 at 04:41:45PM +1200, Barry Song wrote:
> > > From: Barry Song
> > >
> > > Certain madvise operations, especially MADV_DONTNEED, occur far more
> > > frequently than other madvise options, particularly in native and Java
> > > heaps for dynamic memory management.
>
> Actually, it would be great if you could also optimize the MADV_FREE
> case. There are quite a few performance-critical situations where
> MADV_FREE is performed, and it is done within a single VMA.

Yes, that was definitely on the list. However, MADV_FREE wasn't included
in this patch because madvise_free_single_vma() calls walk_page_range(),
which requires the mmap_lock in order to locate the VMA using
find_vma(). That feels redundant and awkward, since we already have the
VMA.

We should be able to use walk_page_range_vma() instead, given that the
VMA is already known. But even walk_page_range_vma() asserts that the
mmap_lock is held, so the issue remains:

int walk_page_range_vma(struct vm_area_struct *vma, unsigned long start,
			unsigned long end, const struct mm_walk_ops *ops,
			void *private)
{
	...
	process_mm_walk_lock(walk.mm, ops->walk_lock);
	...
	return __walk_page_range(start, end, &walk);
}

MADV_DONTNEED doesn't use walk_page_range() at all, so I'd prefer to
settle MADV_DONTNEED first as the initial step.
We need more time to investigate the behavior and requirements of
walk_page_range().

> >
> > Ack yeah, I have gathered that this is the case previously.
> >
> > >
> > > Currently, the mmap_lock is always held during these operations, even when
> > > unnecessary. This causes lock contention and can lead to severe priority
> > > inversion, where low-priority threads, such as Android's HeapTaskDaemon,
> > > hold the lock and block higher-priority threads.
> >
> > That's very nasty... we definitely want to eliminate as much mmap_lock
> > contention as possible.
> >
> > >
> > > This patch enables the use of per-VMA locks when the advised range lies
> > > entirely within a single VMA, avoiding the need for full VMA traversal. In
> > > practice, userspace heaps rarely issue MADV_DONTNEED across multiple VMAs.
> >
> > Yeah this single-VMA requirement is obviously absolutely key.
> >
> > As per my docs [0] actually, for zapping a single VMA, 'The VMA need only be
> > kept stable for this operation.' (I had to look this up to remind myself :P)
> >
> > [0]: https://kernel.org/doc/html/latest/mm/process_addrs.html
> >
> > So we actually... should be good here, locking-wise.
> >
> > >
> > > Tangquan's testing shows that over 99.5% of memory reclaimed by Android
> > > benefits from this per-VMA lock optimization. After extended runtime,
> > > 217,735 madvise calls from HeapTaskDaemon used the per-VMA path, while
> > > only 1,231 fell back to the mmap_lock.
>
> I am quite confident that this pattern is not limited to
> HeapTaskDaemon. For instance, I'm sure the Scudo allocator must also
> be calling MADV_DONTNEED within single VMAs.

That's right. We've also optimized the native heaps.

> >
> > Thanks, this sounds really promising!
> >
> > I take it then that you have, as a result, heavily tested this change?
> >
> > >
> > > To simplify handling, the implementation falls back to the standard
> > > mmap_lock if userfaultfd is enabled on the VMA, avoiding the complexity of
> > > userfaultfd_remove().
> >
> > Oh GOD do I hate how we implement uffd. Have I ever mentioned that? Well,
> > let me mention it again...
> >
> > >
> > > Cc: "Liam R. Howlett"
> > > Cc: Lorenzo Stoakes
> > > Cc: David Hildenbrand
> > > Cc: Vlastimil Babka
> > > Cc: Jann Horn
> > > Cc: Suren Baghdasaryan
> > > Cc: Lokesh Gidra
> > > Cc: Tangquan Zheng
> > > Signed-off-by: Barry Song
> > > ---
> > >  mm/madvise.c | 34 ++++++++++++++++++++++++++++++++++
> > >  1 file changed, 34 insertions(+)
> > >
> > > diff --git a/mm/madvise.c b/mm/madvise.c
> > > index 8433ac9b27e0..da016a1d0434 100644
> > > --- a/mm/madvise.c
> > > +++ b/mm/madvise.c
> > > @@ -1817,6 +1817,39 @@ int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int beh
> > >
> > >  	if (madvise_should_skip(start, len_in, behavior, &error))
> > >  		return error;
> > > +
> > > +	/*
> > > +	 * MADV_DONTNEED is commonly used with userspace heaps and most often
> > > +	 * affects a single VMA. In these cases, we can use per-VMA locks to
> > > +	 * reduce contention on the mmap_lock.
> > > +	 */
> > > +	if (behavior == MADV_DONTNEED || behavior == MADV_DONTNEED_LOCKED) {
> >
> > So firstly, doing this here means process_madvise() doesn't get this benefit,
> > and we're inconsistent between the two, which we really want to avoid.
> >
> > But secondly - we definitely need to find a better way to do this :) this
> > basically follows the 'ignore the existing approach and throw in an if
> > (special case) { ... }' pattern that I feel we really need to do all we can
> > to avoid in the kernel.
> >
> > This way lies uffd, hugetlb, and thus horrors beyond imagining.
> >
> > I can see why you did this, as this is kind of special-cased a bit, and we
> > already do this kind of thing all over the place, but let's try to avoid
> > this here.
> >
> > So I suggest:
> >
> > - Remove any code for this from do_madvise() and thus make it available to
> >   process_madvise() also.
> >
> > - Try to avoid the special casing here as much as humanly possible :)
> >
> > - Update madvise_lock()/unlock() to get passed a pointer to struct
> >   madvise_behavior, to which we can add a boolean or, even better I think,
> >   an enum indicating which lock type was taken (this can simplify
> >   madvise_unlock() also).
> >
> > - Update madvise_lock() to do all of the checks below; we already
> >   effectively do a switch (behavior), so it's not so crazy to do this. And
> >   you can also do the fallthrough logic there.
> >
> > - Obviously madvise_unlock() can be updated to do vma_end_read().
> >
> > > +		struct vm_area_struct *prev, *vma;
> > > +		unsigned long untagged_start, end;
> > > +
> > > +		untagged_start = untagged_addr(start);
> > > +		end = untagged_start + len_in;
> > > +		vma = lock_vma_under_rcu(mm, untagged_start);
> > > +		if (!vma)
> > > +			goto lock;
> > > +		if (end > vma->vm_end || userfaultfd_armed(vma)) {
> > > +			vma_end_read(vma);
> > > +			goto lock;
> > > +		}
> > > +		if (unlikely(!can_modify_vma_madv(vma, behavior))) {
> > > +			error = -EPERM;
> > > +			vma_end_read(vma);
> > > +			goto out;
> > > +		}
> > > +		madvise_init_tlb(&madv_behavior, mm);
> > > +		error = madvise_dontneed_free(vma, &prev, untagged_start,
> > > +					      end, &madv_behavior);
> > > +		madvise_finish_tlb(&madv_behavior);
> > > +		vma_end_read(vma);
> > > +		goto out;
> > > +	}
> > > +
> > > +lock:
> > >  	error = madvise_lock(mm, behavior);
> > >  	if (error)
> > >  		return error;
> > > @@ -1825,6 +1858,7 @@ int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int beh
> > >  	madvise_finish_tlb(&madv_behavior);
> > >  	madvise_unlock(mm, behavior);
> > >
> > > +out:
> > >  	return error;
> > >  }
> > >
> > > --
> > > 2.39.3 (Apple Git-146)
> > >
> >
> > Cheers, Lorenzo

Thanks
Barry