From: Barry Song <21cnbao@gmail.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Barry Song, Lorenzo Stoakes, "Liam R. Howlett", David Hildenbrand, Vlastimil Babka, Jann Horn, Suren Baghdasaryan, Lokesh Gidra, Tangquan Zheng, Qi Zheng
Howlett" , David Hildenbrand , Vlastimil Babka , Jann Horn , Suren Baghdasaryan , Lokesh Gidra , Tangquan Zheng , Qi Zheng Subject: [PATCH v4] mm: use per_vma lock for MADV_DONTNEED Date: Sun, 8 Jun 2025 10:01:50 +1200 Message-Id: <20250607220150.2980-1-21cnbao@gmail.com> X-Mailer: git-send-email 2.39.3 (Apple Git-146) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-Rspamd-Server: rspam08 X-Rspamd-Queue-Id: D79561C0012 X-Stat-Signature: hcdxr1t797p4taueugoaa5fafof94ssi X-Rspam-User: X-HE-Tag: 1749333727-240845 X-HE-Meta: U2FsdGVkX19ZFGENbNM29llM3IqC0yWIYMQBx1+WH/SOzG7dyF3fxlyy0IOldGSYhYUtkMVNDfbWHDOeOuiaIeCIDAkQfb37+P1JkXT+IEz7r2qA/IdOjHH1g5FXhVhQMXvvG2e5oefKt4fzt6DtJylX/XkA1PGLDqbpspWKn/QyAbIJr+VUlL4lqOuzhLd8gF6s+HC3p+y5hqkrhTBCqL+qpPHcFTy4NqtMgNz+WBLfhqNIYciO3ntBEChDNwmwXHg/zAi5mbxFF1W1FIFNRjOUYMV6RNP/S+UOZN59NfTAiWnSNj/G0FR+qIjxRHz7NrkSGXVCTCpGJhCBNfn+tnYui0FqVGx749BCIOsD8rzz0PLPl8D8YBEYreEeKZwD/mlR3+KTCUtxgi8SV5Y9PAcEJRRekNA4UmHMsZd0Dv78dm8jPvqsCTLfOG9KCBObj2WNd6laAh5o6wJv5pRaO4MGETKJeYzb7v8sXIbuk3ZYnCbbBn2o1NMkzdLv7R3M+Jy8aL225dBH8Ttyd3SWrle4vDFGJIGebxuCwjT5xhc8WKnjno9jV+TlPlj80yHyi0p1uG28yECcPr3VT1IusMjaA8+0RlmZwciTAoeOVBKuJOyOaPCUhQgJKuRsMMZqLP7W+BE8jZcvWEr0LqEKGMtWZ18db73KHh+bXMxvX5wNkQB5w93+EdXsWS5yjx9Vrt2ri8Ie4DVxReaZwR9TTFG4IrI7d5Mk5RK05OFsITNNnIL2YVvZAxOX2KZzjTPJjMm9zCUX0SZa89wXTDf3cHe2uhozmm9H27sNB3qNDdR/pPi2Lt6qm4JAoOxngh9rkC/aLRjQO0rcec9mSqPSnN2H6Yy3Xuh49GpLZB5cRJtAeqoI3CZ6Nyb5NKn3pR47wVaLJBo+ScSlmit8Ng2rf7hXpvld19SOUsVGyf0uwHdHekYiaBc8CETPEsAG6ca7MEyMTxCCEG3lA1pKhD0 HNO+gr8j nhuuDbsl3n+5vWfOTF4q52AqlVgAdztrgXk5FYGwepduNHDSuBTlFlt+RJ09bZg6oAyJg+2Vx0CTDDRyEzPEtxja738N1kWk73O4zfHUpSz2Am30CR910JmAFMnK77HvK5P4xKIrrlCEfV4vukOthQiomoVagRzvonP9RsvBt/JvvVx02Mkk1XmKWyjOdzcPxNBgenD7ujpCZBzhuY3pKS4Nu+83wGGo5KczamMb9gIPgXR5WBSsqWXS91fzVN511ZSHJZajj7B0u2l6YDO5zFK4rzIfAzHdpWKu7f4q192/t5Eyw7zWGK3tNzM2AjQWKcBZi5znh78V3h1fxEysPNYZ4sfwOQ8OcOYzHCbDRXl3Sw4mZuJvSAs6cN/GPWjblzupm1NFn7LrfR8wa4ZaCi0qrVOELimkrOH++Ek4/ahuKpquv67C3Vh08obeAABSHHqywfGV3+qggDii/9Cmg5n25Ub2FACSWjahsyVo8CyKa5vA1PQjU5tmqK7Yiz23YA+J9cQGZEqTZCg1n3HBxTOjSrHoIBQ6jxxUnyN0iwyOSCvj3rZu9gQVN8S1vp179i5nMwi3It3h9SIdYYejN1rKj2V5f5nyisTbA X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Barry Song Certain madvise operations, especially MADV_DONTNEED, occur far more frequently than other madvise options, particularly in native and Java heaps for dynamic memory management. Currently, the mmap_lock is always held during these operations, even when unnecessary. This causes lock contention and can lead to severe priority inversion, where low-priority threads—such as Android's HeapTaskDaemon— hold the lock and block higher-priority threads. This patch enables the use of per-VMA locks when the advised range lies entirely within a single VMA, avoiding the need for full VMA traversal. In practice, userspace heaps rarely issue MADV_DONTNEED across multiple VMAs. Tangquan’s testing shows that over 99.5% of memory reclaimed by Android benefits from this per-VMA lock optimization. After extended runtime, 217,735 madvise calls from HeapTaskDaemon used the per-VMA path, while only 1,231 fell back to mmap_lock. To simplify handling, the implementation falls back to the standard mmap_lock if userfaultfd is enabled on the VMA, avoiding the complexity of userfaultfd_remove(). 
Many thanks to Lorenzo's work[1] on:
"Refactor the madvise() code to retain state about the locking mode
utilised for traversing VMAs. Then use this mechanism to permit VMA
locking to be done later in the madvise() logic and also to allow
altering of the locking mode to permit falling back to an mmap read
lock if required."

One important point, as pointed out by Jann[2], is that
untagged_addr_remote() requires holding mmap_lock. This is because
address tagging on x86 and RISC-V is quite complex. Until
untagged_addr_remote() becomes atomic, which seems unlikely in the
near future, we cannot support per-VMA locks for remote processes. So
for now, only local processes are supported.

Link: https://lore.kernel.org/all/0b96ce61-a52c-4036-b5b6-5c50783db51f@lucifer.local/ [1]
Link: https://lore.kernel.org/all/CAG48ez11zi-1jicHUZtLhyoNPGGVB+ROeAJCUw48bsjk4bbEkA@mail.gmail.com/ [2]
Reviewed-by: Lorenzo Stoakes
Cc: "Liam R. Howlett"
Cc: David Hildenbrand
Cc: Vlastimil Babka
Cc: Jann Horn
Cc: Suren Baghdasaryan
Cc: Lokesh Gidra
Cc: Tangquan Zheng
Cc: Qi Zheng
Signed-off-by: Barry Song
---
-v4:
 * collect Lorenzo's RB;
 * use visit() for per-vma path

 mm/madvise.c | 195 ++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 147 insertions(+), 48 deletions(-)
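As a reviewer's aid (illustration only, not part of the patch): a range
spanning more than one VMA, or a uffd-armed VMA, still takes the
mmap_lock path. For example, splitting a mapping with mprotect() forces
the fallback:

	#include <sys/mman.h>

	int main(void)
	{
		size_t len = 64 * 4096;
		char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED)
			return 1;
		/* Different protections split the mapping into two VMAs. */
		mprotect(p, len / 2, PROT_READ);
		/*
		 * The range now spans two VMAs (end > vma->vm_end in
		 * try_vma_read_lock()), so the kernel falls back to
		 * mmap_read_lock().
		 */
		return madvise(p, len, MADV_DONTNEED);
	}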
diff --git a/mm/madvise.c b/mm/madvise.c
index 56d9ca2557b9..8382614b71d1 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -48,38 +48,19 @@ struct madvise_walk_private {
 	bool pageout;
 };
 
+enum madvise_lock_mode {
+	MADVISE_NO_LOCK,
+	MADVISE_MMAP_READ_LOCK,
+	MADVISE_MMAP_WRITE_LOCK,
+	MADVISE_VMA_READ_LOCK,
+};
+
 struct madvise_behavior {
 	int behavior;
 	struct mmu_gather *tlb;
+	enum madvise_lock_mode lock_mode;
 };
 
-/*
- * Any behaviour which results in changes to the vma->vm_flags needs to
- * take mmap_lock for writing. Others, which simply traverse vmas, need
- * to only take it for reading.
- */
-static int madvise_need_mmap_write(int behavior)
-{
-	switch (behavior) {
-	case MADV_REMOVE:
-	case MADV_WILLNEED:
-	case MADV_DONTNEED:
-	case MADV_DONTNEED_LOCKED:
-	case MADV_COLD:
-	case MADV_PAGEOUT:
-	case MADV_FREE:
-	case MADV_POPULATE_READ:
-	case MADV_POPULATE_WRITE:
-	case MADV_COLLAPSE:
-	case MADV_GUARD_INSTALL:
-	case MADV_GUARD_REMOVE:
-		return 0;
-	default:
-		/* be safe, default to 1. list exceptions explicitly */
-		return 1;
-	}
-}
-
 #ifdef CONFIG_ANON_VMA_NAME
 struct anon_vma_name *anon_vma_name_alloc(const char *name)
 {
@@ -1486,6 +1467,44 @@ static bool process_madvise_remote_valid(int behavior)
 	}
 }
 
+/*
+ * Try to acquire a VMA read lock if possible.
+ *
+ * We only support this lock over a single VMA, which the input range must
+ * span either partially or fully.
+ *
+ * This function always returns with an appropriate lock held. If a VMA read
+ * lock could be acquired, we return the locked VMA.
+ *
+ * If a VMA read lock could not be acquired, we return NULL and expect caller to
+ * fallback to mmap lock behaviour.
+ */
+static struct vm_area_struct *try_vma_read_lock(struct mm_struct *mm,
+		struct madvise_behavior *madv_behavior,
+		unsigned long start, unsigned long end)
+{
+	struct vm_area_struct *vma;
+
+	vma = lock_vma_under_rcu(mm, start);
+	if (!vma)
+		goto take_mmap_read_lock;
+	/*
+	 * Must span only a single VMA; uffd and remote processes are
+	 * unsupported.
+	 */
+	if (end > vma->vm_end || current->mm != mm ||
+	    userfaultfd_armed(vma)) {
+		vma_end_read(vma);
+		goto take_mmap_read_lock;
+	}
+	return vma;
+
+take_mmap_read_lock:
+	mmap_read_lock(mm);
+	madv_behavior->lock_mode = MADVISE_MMAP_READ_LOCK;
+	return NULL;
+}
+
 /*
  * Walk the vmas in range [start,end), and call the visit function on each one.
  * The visit function will get start and end parameters that cover the overlap
@@ -1496,7 +1515,8 @@ static bool process_madvise_remote_valid(int behavior)
  */
 static
 int madvise_walk_vmas(struct mm_struct *mm, unsigned long start,
-		unsigned long end, void *arg,
+		unsigned long end, struct madvise_behavior *madv_behavior,
+		void *arg,
 		int (*visit)(struct vm_area_struct *vma,
 			     struct vm_area_struct **prev, unsigned long start,
 			     unsigned long end, void *arg))
@@ -1505,6 +1525,20 @@ int madvise_walk_vmas(struct mm_struct *mm, unsigned long start,
 	struct vm_area_struct *prev;
 	unsigned long tmp;
 	int unmapped_error = 0;
+	int error;
+
+	/*
+	 * If VMA read lock is supported, apply madvise to a single VMA
+	 * tentatively, avoiding walking VMAs.
+	 */
+	if (madv_behavior && madv_behavior->lock_mode == MADVISE_VMA_READ_LOCK) {
+		vma = try_vma_read_lock(mm, madv_behavior, start, end);
+		if (vma) {
+			error = visit(vma, &prev, start, end, arg);
+			vma_end_read(vma);
+			return error;
+		}
+	}
 
 	/*
 	 * If the interval [start,end) covers some unmapped address
@@ -1516,8 +1550,6 @@ int madvise_walk_vmas(struct mm_struct *mm, unsigned long start,
 	prev = vma;
 
 	for (;;) {
-		int error;
-
 		/* Still start < end. */
 		if (!vma)
 			return -ENOMEM;
@@ -1598,34 +1630,86 @@ int madvise_set_anon_name(struct mm_struct *mm, unsigned long start,
 	if (end == start)
 		return 0;
 
-	return madvise_walk_vmas(mm, start, end, anon_name,
+	return madvise_walk_vmas(mm, start, end, NULL, anon_name,
 				 madvise_vma_anon_name);
 }
 #endif /* CONFIG_ANON_VMA_NAME */
-static int madvise_lock(struct mm_struct *mm, int behavior)
+
+/*
+ * Any behaviour which results in changes to the vma->vm_flags needs to
+ * take mmap_lock for writing. Others, which simply traverse vmas, need
+ * to only take it for reading.
+ */
+static enum madvise_lock_mode get_lock_mode(struct madvise_behavior *madv_behavior)
 {
+	int behavior = madv_behavior->behavior;
+
 	if (is_memory_failure(behavior))
-		return 0;
+		return MADVISE_NO_LOCK;
 
-	if (madvise_need_mmap_write(behavior)) {
+	switch (behavior) {
+	case MADV_REMOVE:
+	case MADV_WILLNEED:
+	case MADV_COLD:
+	case MADV_PAGEOUT:
+	case MADV_FREE:
+	case MADV_POPULATE_READ:
+	case MADV_POPULATE_WRITE:
+	case MADV_COLLAPSE:
+	case MADV_GUARD_INSTALL:
+	case MADV_GUARD_REMOVE:
+		return MADVISE_MMAP_READ_LOCK;
+	case MADV_DONTNEED:
+	case MADV_DONTNEED_LOCKED:
+		return MADVISE_VMA_READ_LOCK;
+	default:
+		return MADVISE_MMAP_WRITE_LOCK;
+	}
+}
+
+static int madvise_lock(struct mm_struct *mm,
+		struct madvise_behavior *madv_behavior)
+{
+	enum madvise_lock_mode lock_mode = get_lock_mode(madv_behavior);
+
+	switch (lock_mode) {
+	case MADVISE_NO_LOCK:
+		break;
+	case MADVISE_MMAP_WRITE_LOCK:
 		if (mmap_write_lock_killable(mm))
 			return -EINTR;
-	} else {
+		break;
+	case MADVISE_MMAP_READ_LOCK:
 		mmap_read_lock(mm);
+		break;
+	case MADVISE_VMA_READ_LOCK:
+		/* We will acquire the lock per-VMA in madvise_walk_vmas(). */
+		break;
 	}
+
+	madv_behavior->lock_mode = lock_mode;
 	return 0;
 }
 
-static void madvise_unlock(struct mm_struct *mm, int behavior)
+static void madvise_unlock(struct mm_struct *mm,
+		struct madvise_behavior *madv_behavior)
 {
-	if (is_memory_failure(behavior))
+	switch (madv_behavior->lock_mode) {
+	case MADVISE_NO_LOCK:
 		return;
-
-	if (madvise_need_mmap_write(behavior))
+	case MADVISE_MMAP_WRITE_LOCK:
 		mmap_write_unlock(mm);
-	else
+		break;
+	case MADVISE_MMAP_READ_LOCK:
 		mmap_read_unlock(mm);
+		break;
+	case MADVISE_VMA_READ_LOCK:
+		/* We will drop the lock per-VMA in madvise_walk_vmas(). */
+		break;
+	}
+
+	madv_behavior->lock_mode = MADVISE_NO_LOCK;
 }
 
 static bool madvise_batch_tlb_flush(int behavior)
@@ -1710,6 +1794,21 @@ static bool is_madvise_populate(int behavior)
 	}
 }
 
+/*
+ * untagged_addr_remote() assumes mmap_lock is already held. On
+ * architectures like x86 and RISC-V, tagging is tricky because each
+ * mm may have a different tagging mask. However, we might only hold
+ * the per-VMA lock (currently only local processes are supported),
+ * so untagged_addr is used to avoid the mmap_lock assertion for
+ * local processes.
+ */
+static inline unsigned long get_untagged_addr(struct mm_struct *mm,
+		unsigned long start)
+{
+	return current->mm == mm ? untagged_addr(start) :
+			untagged_addr_remote(mm, start);
+}
+
 static int madvise_do_behavior(struct mm_struct *mm,
 		unsigned long start, size_t len_in,
 		struct madvise_behavior *madv_behavior)
@@ -1721,7 +1820,7 @@ static int madvise_do_behavior(struct mm_struct *mm,
 
 	if (is_memory_failure(behavior))
 		return madvise_inject_error(behavior, start, start + len_in);
-	start = untagged_addr_remote(mm, start);
+	start = get_untagged_addr(mm, start);
 	end = start + PAGE_ALIGN(len_in);
 
 	blk_start_plug(&plug);
@@ -1729,7 +1828,7 @@ static int madvise_do_behavior(struct mm_struct *mm,
 	if (is_madvise_populate(behavior))
 		error = madvise_populate(mm, start, end, behavior);
 	else
 		error = madvise_walk_vmas(mm, start, end, madv_behavior,
-				madvise_vma_behavior);
+				madv_behavior, madvise_vma_behavior);
 	blk_finish_plug(&plug);
 	return error;
 }
@@ -1817,13 +1916,13 @@ int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int beh
 
 	if (madvise_should_skip(start, len_in, behavior, &error))
 		return error;
-	error = madvise_lock(mm, behavior);
+	error = madvise_lock(mm, &madv_behavior);
 	if (error)
 		return error;
 	madvise_init_tlb(&madv_behavior, mm);
 	error = madvise_do_behavior(mm, start, len_in, &madv_behavior);
 	madvise_finish_tlb(&madv_behavior);
-	madvise_unlock(mm, behavior);
+	madvise_unlock(mm, &madv_behavior);
 
 	return error;
 }
@@ -1847,7 +1946,7 @@ static ssize_t vector_madvise(struct mm_struct *mm, struct iov_iter *iter,
 
 	total_len = iov_iter_count(iter);
 
-	ret = madvise_lock(mm, behavior);
+	ret = madvise_lock(mm, &madv_behavior);
 	if (ret)
 		return ret;
 	madvise_init_tlb(&madv_behavior, mm);
@@ -1880,8 +1979,8 @@ static ssize_t vector_madvise(struct mm_struct *mm, struct iov_iter *iter,
 
 			/* Drop and reacquire lock to unwind race. */
 			madvise_finish_tlb(&madv_behavior);
-			madvise_unlock(mm, behavior);
-			ret = madvise_lock(mm, behavior);
+			madvise_unlock(mm, &madv_behavior);
+			ret = madvise_lock(mm, &madv_behavior);
 			if (ret)
 				goto out;
 			madvise_init_tlb(&madv_behavior, mm);
@@ -1892,7 +1991,7 @@ static ssize_t vector_madvise(struct mm_struct *mm, struct iov_iter *iter,
 		iov_iter_advance(iter, iter_iov_len(iter));
 	}
 	madvise_finish_tlb(&madv_behavior);
-	madvise_unlock(mm, behavior);
+	madvise_unlock(mm, &madv_behavior);
 
 out:
 	ret = (total_len - iov_iter_count(iter)) ? : ret;
-- 
2.39.3 (Apple Git-146)