From mboxrd@z Thu Jan  1 00:00:00 1970
From: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Date: Sun, 27 Oct 2024 17:57:34 -0700
Subject: Re: [PATCH 1/2] mm: convert mm_lock_seq to a proper seqcount
To: Suren Baghdasaryan
Cc: akpm@linux-foundation.org, peterz@infradead.org, andrii@kernel.org,
	jannh@google.com, Liam.Howlett@oracle.com, lorenzo.stoakes@oracle.com,
	vbabka@suse.cz, mhocko@kernel.org, shakeel.butt@linux.dev,
	hannes@cmpxchg.org, david@redhat.com, willy@infradead.org,
	brauner@kernel.org, oleg@redhat.com, arnd@arndb.de,
	richard.weiyang@gmail.com, zhangpeng.00@bytedance.com,
	linmiaohe@huawei.com, viro@zeniv.linux.org.uk, hca@linux.ibm.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
In-Reply-To: <20241024205231.1944747-1-surenb@google.com>
References: <20241024205231.1944747-1-surenb@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
On Thu, Oct 24, 2024 at 1:52 PM Suren Baghdasaryan wrote:
>
> Convert mm_lock_seq to be seqcount_t and change all mmap_write_lock
> variants to increment it, in line with the usual seqcount usage pattern.
> This lets us check whether the mmap_lock is write-locked by checking
> the mm_lock_seq.sequence counter (odd=locked, even=unlocked). This will
> be used when implementing mmap_lock speculation functions.
> As a result, vm_lock_seq is also changed to be unsigned to match the
> type of mm_lock_seq.sequence.
>
> Suggested-by: Peter Zijlstra
> Signed-off-by: Suren Baghdasaryan
> ---
> Applies over mm-unstable.
>
> This conversion was discussed at [1] and these patches will likely be
> incorporated into the next version of Andrii's patchset.

I got a notification that this patch set was applied to mm-unstable by
Andrew. But I was wondering if Andrew and Peter would agree to move the
patches into tip's perf/core branch, given this is a dependency of my
pending uprobe series ([0]) and, as far as I'm aware, there is no
urgent need for this API in the mm tree(s).

If we route this through mm-unstable, we either need a stable tag from
the mm tree for Peter to merge into perf/core (a bit of a hassle for
both Andrew and Peter, I'm sure), or I'd have to wait 5-6 weeks until
after the next merge window closes, which would be a huge bummer given
I've been at this for a while already with basically done patches, and
would prefer to get my changes in sooner.

So I'd very much prefer to just route these changes through perf/core,
if mm folks don't oppose it. In fact, I'll go ahead and send my patches
with Suren's patches included, on the assumption that we can reroute
all of this. Thanks for understanding!

P.S. And yeah, Suren's patches apply cleanly to perf/core just as well;
I checked.

(If the odd/even seqcount scheme used here is new to you, there is a
small user-space sketch of the pattern at the bottom of this mail.)

[0] https://lore.kernel.org/linux-trace-kernel/20241010205644.3831427-1-andrii@kernel.org/

> The issue of seqcount_t.sequence being an unsigned int rather than an
> unsigned long will be addressed separately in collaboration with Jann
> Horn.
>
> [1] https://lore.kernel.org/all/20241010205644.3831427-2-andrii@kernel.org/
>
>  include/linux/mm.h               | 12 +++----
>  include/linux/mm_types.h         |  7 ++--
>  include/linux/mmap_lock.h        | 58 +++++++++++++++++++++-----------
>  kernel/fork.c                    |  5 +--
>  mm/init-mm.c                     |  2 +-
>  tools/testing/vma/vma.c          |  4 +--
>  tools/testing/vma/vma_internal.h |  4 +--
>  7 files changed, 56 insertions(+), 36 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 4ef8cf1043f1..77644118b200 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -698,7 +698,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
>           * we don't rely on for anything - the mm_lock_seq read against which we
>           * need ordering is below.
>           */
> -        if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq))
> +        if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
>                  return false;
>
>          if (unlikely(down_read_trylock(&vma->vm_lock->lock) == 0))
> @@ -715,7 +715,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
>           * after it has been unlocked.
>           * This pairs with RELEASE semantics in vma_end_write_all().
>           */
> -        if (unlikely(vma->vm_lock_seq == smp_load_acquire(&vma->vm_mm->mm_lock_seq))) {
> +        if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
>                  up_read(&vma->vm_lock->lock);
>                  return false;
>          }
> @@ -730,7 +730,7 @@ static inline void vma_end_read(struct vm_area_struct *vma)
>  }
>
>  /* WARNING! Can only be used if mmap_lock is expected to be write-locked */
> -static bool __is_vma_write_locked(struct vm_area_struct *vma, int *mm_lock_seq)
> +static bool __is_vma_write_locked(struct vm_area_struct *vma, unsigned int *mm_lock_seq)
>  {
>          mmap_assert_write_locked(vma->vm_mm);
>
> @@ -738,7 +738,7 @@ static bool __is_vma_write_locked(struct vm_area_struct *vma, int *mm_lock_seq)
>           * current task is holding mmap_write_lock, both vma->vm_lock_seq and
>           * mm->mm_lock_seq can't be concurrently modified.
>           */
> -        *mm_lock_seq = vma->vm_mm->mm_lock_seq;
> +        *mm_lock_seq = vma->vm_mm->mm_lock_seq.sequence;
>          return (vma->vm_lock_seq == *mm_lock_seq);
>  }
>
> @@ -749,7 +749,7 @@ static bool __is_vma_write_locked(struct vm_area_struct *vma, int *mm_lock_seq)
>   */
>  static inline void vma_start_write(struct vm_area_struct *vma)
>  {
> -        int mm_lock_seq;
> +        unsigned int mm_lock_seq;
>
>          if (__is_vma_write_locked(vma, &mm_lock_seq))
>                  return;
> @@ -767,7 +767,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
>
>  static inline void vma_assert_write_locked(struct vm_area_struct *vma)
>  {
> -        int mm_lock_seq;
> +        unsigned int mm_lock_seq;
>
>          VM_BUG_ON_VMA(!__is_vma_write_locked(vma, &mm_lock_seq), vma);
>  }
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index ff8627acbaa7..80fef38d9d64 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -715,7 +715,7 @@ struct vm_area_struct {
>           * counter reuse can only lead to occasional unnecessary use of the
>           * slowpath.
>           */
> -        int vm_lock_seq;
> +        unsigned int vm_lock_seq;
>          /* Unstable RCU readers are allowed to read this. */
>          struct vma_lock *vm_lock;
>  #endif
> @@ -887,6 +887,9 @@ struct mm_struct {
>                   * Roughly speaking, incrementing the sequence number is
>                   * equivalent to releasing locks on VMAs; reading the sequence
>                   * number can be part of taking a read lock on a VMA.
> +                 * Incremented every time mmap_lock is write-locked/unlocked.
> +                 * Initialized to 0, therefore odd values indicate mmap_lock
> +                 * is write-locked and even values that it's released.
>                   *
>                   * Can be modified under write mmap_lock using RELEASE
>                   * semantics.
> @@ -895,7 +898,7 @@ struct mm_struct {
>                   * Can be read with ACQUIRE semantics if not holding write
>                   * mmap_lock.
>                   */
> -                int mm_lock_seq;
> +                seqcount_t mm_lock_seq;
>  #endif
>
>
> diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
> index de9dc20b01ba..6b3272686860 100644
> --- a/include/linux/mmap_lock.h
> +++ b/include/linux/mmap_lock.h
> @@ -71,39 +71,38 @@ static inline void mmap_assert_write_locked(const struct mm_struct *mm)
>  }
>
>  #ifdef CONFIG_PER_VMA_LOCK
> -/*
> - * Drop all currently-held per-VMA locks.
> - * This is called from the mmap_lock implementation directly before releasing
> - * a write-locked mmap_lock (or downgrading it to read-locked).
> - * This should normally NOT be called manually from other places.
> - * If you want to call this manually anyway, keep in mind that this will release
> - * *all* VMA write locks, including ones from further up the stack.
> - */
> -static inline void vma_end_write_all(struct mm_struct *mm)
> +static inline void mm_lock_seqcount_init(struct mm_struct *mm)
>  {
> -        mmap_assert_write_locked(mm);
> -        /*
> -         * Nobody can concurrently modify mm->mm_lock_seq due to exclusive
> -         * mmap_lock being held.
> -         * We need RELEASE semantics here to ensure that preceding stores into
> -         * the VMA take effect before we unlock it with this store.
> -         * Pairs with ACQUIRE semantics in vma_start_read().
> -         */
> -        smp_store_release(&mm->mm_lock_seq, mm->mm_lock_seq + 1);
> +        seqcount_init(&mm->mm_lock_seq);
> +}
> +
> +static inline void mm_lock_seqcount_begin(struct mm_struct *mm)
> +{
> +        do_raw_write_seqcount_begin(&mm->mm_lock_seq);
> +}
> +
> +static inline void mm_lock_seqcount_end(struct mm_struct *mm)
> +{
> +        do_raw_write_seqcount_end(&mm->mm_lock_seq);
>  }
> +
>  #else
> -static inline void vma_end_write_all(struct mm_struct *mm) {}
> +static inline void mm_lock_seqcount_init(struct mm_struct *mm) {}
> +static inline void mm_lock_seqcount_begin(struct mm_struct *mm) {}
> +static inline void mm_lock_seqcount_end(struct mm_struct *mm) {}
>  #endif
>
>  static inline void mmap_init_lock(struct mm_struct *mm)
>  {
>          init_rwsem(&mm->mmap_lock);
> +        mm_lock_seqcount_init(mm);
>  }
>
>  static inline void mmap_write_lock(struct mm_struct *mm)
>  {
>          __mmap_lock_trace_start_locking(mm, true);
>          down_write(&mm->mmap_lock);
> +        mm_lock_seqcount_begin(mm);
>          __mmap_lock_trace_acquire_returned(mm, true, true);
>  }
>
> @@ -111,6 +110,7 @@ static inline void mmap_write_lock_nested(struct mm_struct *mm, int subclass)
>  {
>          __mmap_lock_trace_start_locking(mm, true);
>          down_write_nested(&mm->mmap_lock, subclass);
> +        mm_lock_seqcount_begin(mm);
>          __mmap_lock_trace_acquire_returned(mm, true, true);
>  }
>
> @@ -120,10 +120,30 @@ static inline int mmap_write_lock_killable(struct mm_struct *mm)
>
>          __mmap_lock_trace_start_locking(mm, true);
>          ret = down_write_killable(&mm->mmap_lock);
> +        if (!ret)
> +                mm_lock_seqcount_begin(mm);
>          __mmap_lock_trace_acquire_returned(mm, true, ret == 0);
>          return ret;
>  }
>
> +/*
> + * Drop all currently-held per-VMA locks.
> + * This is called from the mmap_lock implementation directly before releasing
> + * a write-locked mmap_lock (or downgrading it to read-locked).
> + * This should normally NOT be called manually from other places.
> + * If you want to call this manually anyway, keep in mind that this will release
> + * *all* VMA write locks, including ones from further up the stack.
> + */
> +static inline void vma_end_write_all(struct mm_struct *mm)
> +{
> +        mmap_assert_write_locked(mm);
> +        /*
> +         * Nobody can concurrently modify mm->mm_lock_seq due to exclusive
> +         * mmap_lock being held.
> +         */
> +        mm_lock_seqcount_end(mm);
> +}
> +
>  static inline void mmap_write_unlock(struct mm_struct *mm)
>  {
>          __mmap_lock_trace_released(mm, true);
> diff --git a/kernel/fork.c b/kernel/fork.c
> index fd528fb5e305..0cae6fc651f0 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -447,7 +447,7 @@ static bool vma_lock_alloc(struct vm_area_struct *vma)
>                  return false;
>
>          init_rwsem(&vma->vm_lock->lock);
> -        vma->vm_lock_seq = -1;
> +        vma->vm_lock_seq = UINT_MAX;
>
>          return true;
>  }
> @@ -1260,9 +1260,6 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
>          seqcount_init(&mm->write_protect_seq);
>          mmap_init_lock(mm);
>          INIT_LIST_HEAD(&mm->mmlist);
> -#ifdef CONFIG_PER_VMA_LOCK
> -        mm->mm_lock_seq = 0;
> -#endif
>          mm_pgtables_bytes_init(mm);
>          mm->map_count = 0;
>          mm->locked_vm = 0;
> diff --git a/mm/init-mm.c b/mm/init-mm.c
> index 24c809379274..6af3ad675930 100644
> --- a/mm/init-mm.c
> +++ b/mm/init-mm.c
> @@ -40,7 +40,7 @@ struct mm_struct init_mm = {
>          .arg_lock       = __SPIN_LOCK_UNLOCKED(init_mm.arg_lock),
>          .mmlist         = LIST_HEAD_INIT(init_mm.mmlist),
>  #ifdef CONFIG_PER_VMA_LOCK
> -        .mm_lock_seq    = 0,
> +        .mm_lock_seq    = SEQCNT_ZERO(init_mm.mm_lock_seq),
>  #endif
>          .user_ns        = &init_user_ns,
>          .cpu_bitmap     = CPU_BITS_NONE,
> diff --git a/tools/testing/vma/vma.c b/tools/testing/vma/vma.c
> index 8fab5e13c7c3..9bcf1736bf18 100644
> --- a/tools/testing/vma/vma.c
> +++ b/tools/testing/vma/vma.c
> @@ -89,7 +89,7 @@ static struct vm_area_struct *alloc_and_link_vma(struct mm_struct *mm,
>           * begun. Linking to the tree will have caused this to be incremented,
>           * which means we will get a false positive otherwise.
>           */
> -        vma->vm_lock_seq = -1;
> +        vma->vm_lock_seq = UINT_MAX;
>
>          return vma;
>  }
> @@ -214,7 +214,7 @@ static bool vma_write_started(struct vm_area_struct *vma)
>          int seq = vma->vm_lock_seq;
>
>          /* We reset after each check. */
> -        vma->vm_lock_seq = -1;
> +        vma->vm_lock_seq = UINT_MAX;
>
>          /* The vma_start_write() stub simply increments this value. */
>          return seq > -1;
> diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
> index e76ff579e1fd..1d9fc97b8e80 100644
> --- a/tools/testing/vma/vma_internal.h
> +++ b/tools/testing/vma/vma_internal.h
> @@ -241,7 +241,7 @@ struct vm_area_struct {
>           * counter reuse can only lead to occasional unnecessary use of the
>           * slowpath.
>           */
> -        int vm_lock_seq;
> +        unsigned int vm_lock_seq;
>          struct vma_lock *vm_lock;
>  #endif
>
> @@ -416,7 +416,7 @@ static inline bool vma_lock_alloc(struct vm_area_struct *vma)
>                  return false;
>
>          init_rwsem(&vma->vm_lock->lock);
> -        vma->vm_lock_seq = -1;
> +        vma->vm_lock_seq = UINT_MAX;
>
>          return true;
>  }
>
> base-commit: 9c111059234a949a4d3442a413ade19cc65ab927
> --
> 2.47.0.163.g1226f6d8fa-goog
>
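
If the odd/even seqcount scheme is new to anyone following along, here
is a tiny user-space model of the check the commit message describes.
It is only a sketch: the names, the stdatomic-based ordering, and the
demo driver are mine, not part of the patch (in the kernel, the
seqcount API takes care of the real memory-ordering details).

        /* seq_demo.c: build with `cc -std=c11 seq_demo.c` */
        #include <stdatomic.h>
        #include <stdbool.h>
        #include <stdio.h>

        static atomic_uint lock_seq;    /* models mm->mm_lock_seq.sequence */
        static atomic_int guarded;      /* models state guarded by mmap_lock */

        /* Writer side: models mmap_write_lock()/vma_end_write_all(). */
        static void write_lock(void)   { atomic_fetch_add(&lock_seq, 1); /* odd */ }
        static void write_unlock(void) { atomic_fetch_add(&lock_seq, 1); /* even */ }

        /* Reader side: a speculative read that fails instead of blocking. */
        static bool read_speculative(int *out)
        {
                unsigned int seq = atomic_load_explicit(&lock_seq,
                                                        memory_order_acquire);

                if (seq & 1)    /* odd: a writer holds the lock, bail out */
                        return false;
                *out = atomic_load_explicit(&guarded, memory_order_relaxed);
                /* Fail if a writer came in while we were reading. */
                atomic_thread_fence(memory_order_acquire);
                return atomic_load_explicit(&lock_seq, memory_order_relaxed) == seq;
        }

        int main(void)
        {
                int v = 0;

                write_lock();
                atomic_store_explicit(&guarded, 42, memory_order_relaxed);
                printf("locked:   %s\n", read_speculative(&v) ? "ok" : "refused");
                write_unlock();
                printf("unlocked: %s (v=%d)\n",
                       read_speculative(&v) ? "ok" : "refused", v);
                return 0;
        }

This mirrors what the new mm_lock_seqcount_begin()/mm_lock_seqcount_end()
helpers provide: the writer flips the counter to odd while mmap_lock is
write-locked, so a reader can detect both "currently write-locked" and
"a write happened while I was reading" without ever taking the lock.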