From: Suren Baghdasaryan <surenb@google.com>
Date: Tue, 3 Jun 2025 13:17:10 -0700
Subject: Re: [PATCH RFC v2] mm: use per_vma lock for MADV_DONTNEED
To: Lorenzo Stoakes
Cc: Barry Song <21cnbao@gmail.com>, akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Barry Song, "Liam R. Howlett", David Hildenbrand, Vlastimil Babka, Jann Horn, Lokesh Gidra, Tangquan Zheng
In-Reply-To: <0b96ce61-a52c-4036-b5b6-5c50783db51f@lucifer.local>
References: <20250530104439.64841-1-21cnbao@gmail.com> <0b96ce61-a52c-4036-b5b6-5c50783db51f@lucifer.local>

On Tue, Jun 3, 2025 at 11:43 AM Lorenzo Stoakes wrote:
>
> Hi Barry,
>
> As promised, I enclose a patch to give a sense of how I think we might
> thread state through this operation.
>
> There's a todo on the untagged stuff so you can figure that out. This is
> based on the v1 so it might not encompass everything you addressed in the
> v2.
>
> Passing in madv_behavior to madvise_walk_vmas() twice kinda sucks, I
> _despise_ the void *arg function ptr stuff there added just for the anon
> vma name stuff (ughhh) so might be the only sensible way of threading
> state.
>
> I don't need any attribution, so please use this patch as you see
> fit/adapt/delete/do whatever with it, just an easier way for me to show the
> idea!
>
> I did some very basic testing and it seems to work, but nothing deeper.
>
> Cheers, Lorenzo
>
> ----8<----
> From ff4ba0115cb31a0630b6f8c02c68f11b3fb71f7a Mon Sep 17 00:00:00 2001
> From: Lorenzo Stoakes
> Date: Tue, 3 Jun 2025 18:22:55 +0100
> Subject: [PATCH] mm/madvise: support VMA read locks for MADV_DONTNEED[_LOCKED]
>
> Refactor the madvise() code to retain state about the locking mode utilised
> for traversing VMAs.
>
> Then use this mechanism to permit VMA locking to be done later in the
> madvise() logic and also to allow altering of the locking mode to permit
> falling back to an mmap read lock if required.
>
> Signed-off-by: Lorenzo Stoakes
> ---
>  mm/madvise.c | 174 +++++++++++++++++++++++++++++++++++++--------------
>  1 file changed, 127 insertions(+), 47 deletions(-)
>
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 5f7a66a1617e..a3a6d73d0bd5 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -48,38 +48,19 @@ struct madvise_walk_private {
>  	bool pageout;
>  };
>
> +enum madvise_lock_mode {
> +	MADVISE_NO_LOCK,
> +	MADVISE_MMAP_READ_LOCK,
> +	MADVISE_MMAP_WRITE_LOCK,
> +	MADVISE_VMA_READ_LOCK,
> +};
> +
>  struct madvise_behavior {
>  	int behavior;
>  	struct mmu_gather *tlb;
> +	enum madvise_lock_mode lock_mode;
>  };
>
> -/*
> - * Any behaviour which results in changes to the vma->vm_flags needs to
> - * take mmap_lock for writing. Others, which simply traverse vmas, need
> - * to only take it for reading.
> - */
> -static int madvise_need_mmap_write(int behavior)
> -{
> -	switch (behavior) {
> -	case MADV_REMOVE:
> -	case MADV_WILLNEED:
> -	case MADV_DONTNEED:
> -	case MADV_DONTNEED_LOCKED:
> -	case MADV_COLD:
> -	case MADV_PAGEOUT:
> -	case MADV_FREE:
> -	case MADV_POPULATE_READ:
> -	case MADV_POPULATE_WRITE:
> -	case MADV_COLLAPSE:
> -	case MADV_GUARD_INSTALL:
> -	case MADV_GUARD_REMOVE:
> -		return 0;
> -	default:
> -		/* be safe, default to 1. list exceptions explicitly */
> -		return 1;
> -	}
> -}
> -
>  #ifdef CONFIG_ANON_VMA_NAME
>  struct anon_vma_name *anon_vma_name_alloc(const char *name)
>  {
> @@ -1486,6 +1467,43 @@ static bool process_madvise_remote_valid(int behavior)
>  	}
>  }
>
> +/*
> + * Try to acquire a VMA read lock if possible.
> + *
> + * We only support this lock over a single VMA, which the input range must
> + * span either partially or fully.
> + *
> + * This function always returns with an appropriate lock held. If a VMA read
> + * lock could be acquired, we return the locked VMA.
> + *
> + * If a VMA read lock could not be acquired, we return NULL and expect caller to

Worth mentioning that the function itself will fall back to taking
mmap_read_lock in such a case.

> + * fallback to mmap lock behaviour.
> + */
> +static struct vm_area_struct *try_vma_read_lock(struct mm_struct *mm,
> +		struct madvise_behavior *madv_behavior,
> +		unsigned long start, unsigned long end)
> +{
> +	struct vm_area_struct *vma;
> +
> +	if (!madv_behavior || madv_behavior->lock_mode != MADVISE_VMA_READ_LOCK)

nit: I think it would be better to do this check before calling
try_vma_read_lock(). IMHO it does not make sense to call
try_vma_read_lock() when lock_mode != MADVISE_VMA_READ_LOCK. It also
makes reading this function easier. The first time I looked at it and
saw "return NULL" in one place that takes mmap_read_lock() and another
place which returns the same NULL but does not take mmap_lock really
confused me.
> +		return NULL;
> +
> +	vma = lock_vma_under_rcu(mm, start);
> +	if (!vma)
> +		goto take_mmap_read_lock;
> +	/* We must span only a single VMA, uffd unsupported. */
> +	if (end > vma->vm_end || userfaultfd_armed(vma)) {

vma->vm_end is not inclusive, so the above condition I think should be
(end >= vma->vm_end || ...)

> +		vma_end_read(vma);
> +		goto take_mmap_read_lock;
> +	}
> +	return vma;
> +
> +take_mmap_read_lock:
> +	mmap_read_lock(mm);
> +	madv_behavior->lock_mode = MADVISE_MMAP_READ_LOCK;
> +	return NULL;
> +}
> +
>  /*
>   * Walk the vmas in range [start,end), and call the visit function on each one.
>   * The visit function will get start and end parameters that cover the overlap
> @@ -1496,7 +1514,8 @@ static bool process_madvise_remote_valid(int behavior)
>   */
>  static
>  int madvise_walk_vmas(struct mm_struct *mm, unsigned long start,
> -		unsigned long end, void *arg,
> +		unsigned long end, struct madvise_behavior *madv_behavior,
> +		void *arg,
>  		int (*visit)(struct vm_area_struct *vma,
>  				struct vm_area_struct **prev, unsigned long start,
>  				unsigned long end, void *arg))
> @@ -1505,6 +1524,15 @@ int madvise_walk_vmas(struct mm_struct *mm, unsigned long start,
>  	struct vm_area_struct *prev;
>  	unsigned long tmp;
>  	int unmapped_error = 0;
> +	int error;
> +
> +	/* If VMA read lock supported, we apply advice to a single VMA only. */
> +	vma = try_vma_read_lock(mm, madv_behavior, start, end);
> +	if (vma) {
> +		error = visit(vma, &prev, start, end, arg);
> +		vma_end_read(vma);
> +		return error;
> +	}
>
>  	/*
>  	 * If the interval [start,end) covers some unmapped address
> @@ -1516,8 +1544,6 @@ int madvise_walk_vmas(struct mm_struct *mm, unsigned long start,
>  		prev = vma;
>
>  	for (;;) {
> -		int error;
> -
>  		/* Still start < end. */
>  		if (!vma)
>  			return -ENOMEM;
> @@ -1598,34 +1624,86 @@ int madvise_set_anon_name(struct mm_struct *mm, unsigned long start,
>  	if (end == start)
>  		return 0;
>
> -	return madvise_walk_vmas(mm, start, end, anon_name,
> +	return madvise_walk_vmas(mm, start, end, anon_name, NULL,
>  				 madvise_vma_anon_name);
>  }
>  #endif /* CONFIG_ANON_VMA_NAME */
>
> -static int madvise_lock(struct mm_struct *mm, int behavior)
> +
> +/*
> + * Any behaviour which results in changes to the vma->vm_flags needs to
> + * take mmap_lock for writing. Others, which simply traverse vmas, need
> + * to only take it for reading.
> + */
> +static enum madvise_lock_mode get_lock_mode(struct madvise_behavior *madv_behavior)
>  {
> +	int behavior = madv_behavior->behavior;
> +
>  	if (is_memory_failure(behavior))
> -		return 0;
> +		return MADVISE_NO_LOCK;
>
> -	if (madvise_need_mmap_write(behavior)) {
> +	switch (behavior) {
> +	case MADV_REMOVE:
> +	case MADV_WILLNEED:
> +	case MADV_COLD:
> +	case MADV_PAGEOUT:
> +	case MADV_FREE:
> +	case MADV_POPULATE_READ:
> +	case MADV_POPULATE_WRITE:
> +	case MADV_COLLAPSE:
> +	case MADV_GUARD_INSTALL:
> +	case MADV_GUARD_REMOVE:
> +		return MADVISE_MMAP_READ_LOCK;
> +	case MADV_DONTNEED:
> +	case MADV_DONTNEED_LOCKED:
> +		return MADVISE_VMA_READ_LOCK;
> +	default:
> +		return MADVISE_MMAP_WRITE_LOCK;
> +	}
> +}
> +
> +static int madvise_lock(struct mm_struct *mm,
> +		struct madvise_behavior *madv_behavior)
> +{
> +	enum madvise_lock_mode lock_mode = get_lock_mode(madv_behavior);
> +
> +	switch (lock_mode) {
> +	case MADVISE_NO_LOCK:
> +		break;
> +	case MADVISE_MMAP_WRITE_LOCK:
>  		if (mmap_write_lock_killable(mm))
>  			return -EINTR;
> -	} else {
> +		break;
> +	case MADVISE_MMAP_READ_LOCK:
>  		mmap_read_lock(mm);
> +		break;
> +	case MADVISE_VMA_READ_LOCK:
> +		/* We will acquire the lock per-VMA in madvise_walk_vmas(). */
> +		break;
>  	}
> +
> +	madv_behavior->lock_mode = lock_mode;
>  	return 0;
>  }
>
> -static void madvise_unlock(struct mm_struct *mm, int behavior)
> +static void madvise_unlock(struct mm_struct *mm,
> +		struct madvise_behavior *madv_behavior)
>  {
> -	if (is_memory_failure(behavior))
> +	switch (madv_behavior->lock_mode) {
> +	case MADVISE_NO_LOCK:
>  		return;
> -
> -	if (madvise_need_mmap_write(behavior))
> +	case MADVISE_MMAP_WRITE_LOCK:
>  		mmap_write_unlock(mm);
> -	else
> +		break;
> +	case MADVISE_MMAP_READ_LOCK:
>  		mmap_read_unlock(mm);
> +		break;
> +	case MADVISE_VMA_READ_LOCK:
> +		/* We will drop the lock per-VMA in madvise_walk_vmas(). */
> +		break;
> +	}
> +
> +	madv_behavior->lock_mode = MADVISE_NO_LOCK;
>  }
>
>  static bool madvise_batch_tlb_flush(int behavior)
> @@ -1721,6 +1799,8 @@ static int madvise_do_behavior(struct mm_struct *mm,
>
>  	if (is_memory_failure(behavior))
>  		return madvise_inject_error(behavior, start, start + len_in);
> +
> +	// TODO: handle untagged stuff here...
>  	start = untagged_addr(start); //untagged_addr_remote(mm, start);
>  	end = start + PAGE_ALIGN(len_in);
>
> @@ -1729,7 +1809,7 @@ static int madvise_do_behavior(struct mm_struct *mm,
>  		error = madvise_populate(mm, start, end, behavior);
>  	else
>  		error = madvise_walk_vmas(mm, start, end, madv_behavior,
> -					  madvise_vma_behavior);
> +					  madv_behavior, madvise_vma_behavior);
>  	blk_finish_plug(&plug);
>  	return error;
>  }
> @@ -1817,13 +1897,13 @@ int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int beh
>
>  	if (madvise_should_skip(start, len_in, behavior, &error))
>  		return error;
> -	error = madvise_lock(mm, behavior);
> +	error = madvise_lock(mm, &madv_behavior);
>  	if (error)
>  		return error;
>  	madvise_init_tlb(&madv_behavior, mm);
>  	error = madvise_do_behavior(mm, start, len_in, &madv_behavior);
>  	madvise_finish_tlb(&madv_behavior);
> -	madvise_unlock(mm, behavior);
> +	madvise_unlock(mm, &madv_behavior);
>
>  	return error;
>  }
> @@ -1847,7 +1927,7 @@ static ssize_t vector_madvise(struct mm_struct *mm, struct iov_iter *iter,
>
>  	total_len = iov_iter_count(iter);
>
> -	ret = madvise_lock(mm, behavior);
> +	ret = madvise_lock(mm, &madv_behavior);
>  	if (ret)
>  		return ret;
>  	madvise_init_tlb(&madv_behavior, mm);
> @@ -1880,8 +1960,8 @@ static ssize_t vector_madvise(struct mm_struct *mm, struct iov_iter *iter,
>
>  		/* Drop and reacquire lock to unwind race. */
>  		madvise_finish_tlb(&madv_behavior);
> -		madvise_unlock(mm, behavior);
> -		ret = madvise_lock(mm, behavior);
> +		madvise_unlock(mm, &madv_behavior);
> +		ret = madvise_lock(mm, &madv_behavior);
>  		if (ret)
>  			goto out;
>  		madvise_init_tlb(&madv_behavior, mm);
> @@ -1892,7 +1972,7 @@ static ssize_t vector_madvise(struct mm_struct *mm, struct iov_iter *iter,
>  		iov_iter_advance(iter, iter_iov_len(iter));
>  	}
>  	madvise_finish_tlb(&madv_behavior);
> -	madvise_unlock(mm, behavior);
> +	madvise_unlock(mm, &madv_behavior);
>
> out:
>  	ret = (total_len - iov_iter_count(iter)) ? : ret;
> --
> 2.49.0