From: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Date: Sun, 27 Oct 2024 18:24:38 -0700
Subject: Re: [PATCH 1/2] mm: convert mm_lock_seq to a proper seqcount
To: Suren Baghdasaryan <surenb@google.com>
Cc: akpm@linux-foundation.org, peterz@infradead.org, andrii@kernel.org,
	jannh@google.com, Liam.Howlett@oracle.com, lorenzo.stoakes@oracle.com,
	vbabka@suse.cz, mhocko@kernel.org, shakeel.butt@linux.dev,
	hannes@cmpxchg.org, david@redhat.com, willy@infradead.org,
	brauner@kernel.org, oleg@redhat.com, arnd@arndb.de,
	richard.weiyang@gmail.com, zhangpeng.00@bytedance.com,
	linmiaohe@huawei.com, viro@zeniv.linux.org.uk, hca@linux.ibm.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20241024205231.1944747-1-surenb@google.com>

On Sun, Oct 27, 2024 at 5:57 PM Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
>
> On Thu, Oct 24, 2024 at 1:52 PM Suren Baghdasaryan <surenb@google.com> wrote:
> >
> > Convert mm_lock_seq to be seqcount_t and change all mmap_write_lock
> > variants to increment it, in line with the usual seqcount usage pattern.
> > This lets us check whether the mmap_lock is write-locked by checking the
> > mm_lock_seq.sequence counter (odd=locked, even=unlocked). This will be
> > used when implementing mmap_lock speculation functions.
> > As a result, vm_lock_seq is also changed to be unsigned to match the
> > type of mm_lock_seq.sequence.
> >
> > Suggested-by: Peter Zijlstra <peterz@infradead.org>
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > ---
> > Applies over mm-unstable
> >
> > This conversion was discussed at [1] and these patches will likely be
> > incorporated into the next version of Andrii's patchset.
>
> I got a notification that this patch set was applied to mm-unstable by
> Andrew. But I was wondering if Andrew and Peter would agree to move
> the patches into tip's perf/core branch, given this is a dependency of
> my pending uprobe series ([0]) and, as far as I'm aware, there is no
> urgent need for this API in mm tree(s). If we route this through
> mm-unstable, we either need a stable tag from the mm tree for Peter to
> merge into perf/core (a bit of a hassle for both Andrew and Peter, I'm
> sure), or I'd have to wait 5-6 weeks until after the next merge window
> closes, which would be a huge bummer, given I've been at this for a
> while already with basically done patches, and would prefer to get my
> changes in sooner.
>
> So, I'd very much prefer to just route these changes through
> perf/core, if mm folks don't oppose this. In fact, I'll go ahead and
> send my patches with Suren's patches included, on the assumption that
> we can reroute all this. Thanks for understanding!
>
> P.S. And yeah, Suren's patches apply cleanly to perf/core just as
> well, I checked.

Sent all of that as v4, see [2]

  [2] https://lore.kernel.org/linux-trace-kernel/20241028010818.2487581-1-andrii@kernel.org/

>
> [0] https://lore.kernel.org/linux-trace-kernel/20241010205644.3831427-1-andrii@kernel.org/
>
> > The issue of the seqcount_t.sequence being an unsigned rather than an
> > unsigned long will be addressed separately in collaboration with Jann Horn.
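To make the odd=locked/even=unlocked convention above concrete: a
speculative reader built on top of this counter would look roughly like
the sketch below. Illustrative only -- the mmap_lock_speculate_* names
are made up here and are not the API this series adds; only
raw_read_seqcount() and read_seqcount_retry() are the stock seqcount
primitives (the same ones the patch itself uses).

static inline bool mmap_lock_speculate_begin(struct mm_struct *mm,
					     unsigned int *seq)
{
	/* Snapshot the sequence count, odd bit included. */
	*seq = raw_read_seqcount(&mm->mm_lock_seq);
	/* Odd means mmap_lock is currently write-locked: don't speculate. */
	return (*seq & 1) == 0;
}

static inline bool mmap_lock_speculate_retry(struct mm_struct *mm,
					     unsigned int seq)
{
	/* True if a writer came in since the snapshot: redo the work. */
	return read_seqcount_retry(&mm->mm_lock_seq, seq);
}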
> >
> > [1] https://lore.kernel.org/all/20241010205644.3831427-2-andrii@kernel.org/
> >
> >  include/linux/mm.h               | 12 +++---
> >  include/linux/mm_types.h         |  7 ++--
> >  include/linux/mmap_lock.h        | 58 +++++++++++++++++++++-----------
> >  kernel/fork.c                    |  5 +--
> >  mm/init-mm.c                     |  2 +-
> >  tools/testing/vma/vma.c          |  4 +--
> >  tools/testing/vma/vma_internal.h |  4 +--
> >  7 files changed, 56 insertions(+), 36 deletions(-)
> >
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index 4ef8cf1043f1..77644118b200 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -698,7 +698,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
> >          * we don't rely on for anything - the mm_lock_seq read against which we
> >          * need ordering is below.
> >          */
> > -       if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq))
> > +       if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
> >                 return false;
> >
> >         if (unlikely(down_read_trylock(&vma->vm_lock->lock) == 0))
> > @@ -715,7 +715,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
> >          * after it has been unlocked.
> >          * This pairs with RELEASE semantics in vma_end_write_all().
> >          */
> > -       if (unlikely(vma->vm_lock_seq == smp_load_acquire(&vma->vm_mm->mm_lock_seq))) {
> > +       if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
> >                 up_read(&vma->vm_lock->lock);
> >                 return false;
> >         }
> > @@ -730,7 +730,7 @@ static inline void vma_end_read(struct vm_area_struct *vma)
> >  }
> >
> >  /* WARNING! Can only be used if mmap_lock is expected to be write-locked */
> > -static bool __is_vma_write_locked(struct vm_area_struct *vma, int *mm_lock_seq)
> > +static bool __is_vma_write_locked(struct vm_area_struct *vma, unsigned int *mm_lock_seq)
> >  {
> >         mmap_assert_write_locked(vma->vm_mm);
> >
> > @@ -738,7 +738,7 @@ static bool __is_vma_write_locked(struct vm_area_struct *vma, int *mm_lock_seq)
> >          * current task is holding mmap_write_lock, both vma->vm_lock_seq and
> >          * mm->mm_lock_seq can't be concurrently modified.
> >          */
> > -       *mm_lock_seq = vma->vm_mm->mm_lock_seq;
> > +       *mm_lock_seq = vma->vm_mm->mm_lock_seq.sequence;
> >         return (vma->vm_lock_seq == *mm_lock_seq);
> >  }
> >
> > @@ -749,7 +749,7 @@ static bool __is_vma_write_locked(struct vm_area_struct *vma, int *mm_lock_seq)
> >   */
> >  static inline void vma_start_write(struct vm_area_struct *vma)
> >  {
> > -       int mm_lock_seq;
> > +       unsigned int mm_lock_seq;
> >
> >         if (__is_vma_write_locked(vma, &mm_lock_seq))
> >                 return;
> > @@ -767,7 +767,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
> >
> >  static inline void vma_assert_write_locked(struct vm_area_struct *vma)
> >  {
> > -       int mm_lock_seq;
> > +       unsigned int mm_lock_seq;
> >
> >         VM_BUG_ON_VMA(!__is_vma_write_locked(vma, &mm_lock_seq), vma);
> >  }
> > diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> > index ff8627acbaa7..80fef38d9d64 100644
> > --- a/include/linux/mm_types.h
> > +++ b/include/linux/mm_types.h
> > @@ -715,7 +715,7 @@ struct vm_area_struct {
> >          * counter reuse can only lead to occasional unnecessary use of the
> >          * slowpath.
> >          */
> > -       int vm_lock_seq;
> > +       unsigned int vm_lock_seq;
> >         /* Unstable RCU readers are allowed to read this.
> >          */
> >         struct vma_lock *vm_lock;
> >  #endif
> > @@ -887,6 +887,9 @@ struct mm_struct {
> >                  * Roughly speaking, incrementing the sequence number is
> >                  * equivalent to releasing locks on VMAs; reading the sequence
> >                  * number can be part of taking a read lock on a VMA.
> > +                * Incremented every time mmap_lock is write-locked/unlocked.
> > +                * Initialized to 0, therefore odd values indicate mmap_lock
> > +                * is write-locked and even values that it's released.
> >                  *
> >                  * Can be modified under write mmap_lock using RELEASE
> >                  * semantics.
> > @@ -895,7 +898,7 @@ struct mm_struct {
> >                  * Can be read with ACQUIRE semantics if not holding write
> >                  * mmap_lock.
> >                  */
> > -               int mm_lock_seq;
> > +               seqcount_t mm_lock_seq;
> >  #endif
> >
> >
> > diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
> > index de9dc20b01ba..6b3272686860 100644
> > --- a/include/linux/mmap_lock.h
> > +++ b/include/linux/mmap_lock.h
> > @@ -71,39 +71,38 @@ static inline void mmap_assert_write_locked(const struct mm_struct *mm)
> >  }
> >
> >  #ifdef CONFIG_PER_VMA_LOCK
> > -/*
> > - * Drop all currently-held per-VMA locks.
> > - * This is called from the mmap_lock implementation directly before releasing
> > - * a write-locked mmap_lock (or downgrading it to read-locked).
> > - * This should normally NOT be called manually from other places.
> > - * If you want to call this manually anyway, keep in mind that this will release
> > - * *all* VMA write locks, including ones from further up the stack.
> > - */
> > -static inline void vma_end_write_all(struct mm_struct *mm)
> > +static inline void mm_lock_seqcount_init(struct mm_struct *mm)
> >  {
> > -       mmap_assert_write_locked(mm);
> > -       /*
> > -        * Nobody can concurrently modify mm->mm_lock_seq due to exclusive
> > -        * mmap_lock being held.
> > -        * We need RELEASE semantics here to ensure that preceding stores into
> > -        * the VMA take effect before we unlock it with this store.
> > -        * Pairs with ACQUIRE semantics in vma_start_read().
> > -        */
> > -       smp_store_release(&mm->mm_lock_seq, mm->mm_lock_seq + 1);
> > +       seqcount_init(&mm->mm_lock_seq);
> > +}
> > +
> > +static inline void mm_lock_seqcount_begin(struct mm_struct *mm)
> > +{
> > +       do_raw_write_seqcount_begin(&mm->mm_lock_seq);
> > +}
> > +
> > +static inline void mm_lock_seqcount_end(struct mm_struct *mm)
> > +{
> > +       do_raw_write_seqcount_end(&mm->mm_lock_seq);
> >  }
> > +
> >  #else
> > -static inline void vma_end_write_all(struct mm_struct *mm) {}
> > +static inline void mm_lock_seqcount_init(struct mm_struct *mm) {}
> > +static inline void mm_lock_seqcount_begin(struct mm_struct *mm) {}
> > +static inline void mm_lock_seqcount_end(struct mm_struct *mm) {}
> >  #endif
> >
> >  static inline void mmap_init_lock(struct mm_struct *mm)
> >  {
> >         init_rwsem(&mm->mmap_lock);
> > +       mm_lock_seqcount_init(mm);
> >  }
> >
> >  static inline void mmap_write_lock(struct mm_struct *mm)
> >  {
> >         __mmap_lock_trace_start_locking(mm, true);
> >         down_write(&mm->mmap_lock);
> > +       mm_lock_seqcount_begin(mm);
> >         __mmap_lock_trace_acquire_returned(mm, true, true);
> >  }
> >
> > @@ -111,6 +110,7 @@ static inline void mmap_write_lock_nested(struct mm_struct *mm, int subclass)
> >  {
> >         __mmap_lock_trace_start_locking(mm, true);
> >         down_write_nested(&mm->mmap_lock, subclass);
> > +       mm_lock_seqcount_begin(mm);
> >         __mmap_lock_trace_acquire_returned(mm, true, true);
> >  }
> >
> > @@ -120,10 +120,30 @@ static inline int mmap_write_lock_killable(struct mm_struct *mm)
> >
> >         __mmap_lock_trace_start_locking(mm, true);
> >         ret = down_write_killable(&mm->mmap_lock);
> > +       if (!ret)
> > +               mm_lock_seqcount_begin(mm);
> >         __mmap_lock_trace_acquire_returned(mm, true, ret == 0);
> >         return ret;
> >  }
> >
> > +/*
> > + * Drop all currently-held per-VMA locks.
> > + * This is called from the mmap_lock implementation directly before releasing
> > + * a write-locked mmap_lock (or downgrading it to read-locked).
> > + * This should normally NOT be called manually from other places.
> > + * If you want to call this manually anyway, keep in mind that this will release
> > + * *all* VMA write locks, including ones from further up the stack.
> > + */
> > +static inline void vma_end_write_all(struct mm_struct *mm)
> > +{
> > +       mmap_assert_write_locked(mm);
> > +       /*
> > +        * Nobody can concurrently modify mm->mm_lock_seq due to exclusive
> > +        * mmap_lock being held.
> > +        */
> > +       mm_lock_seqcount_end(mm);
> > +}
> > +
> >  static inline void mmap_write_unlock(struct mm_struct *mm)
> >  {
> >         __mmap_lock_trace_released(mm, true);
> > diff --git a/kernel/fork.c b/kernel/fork.c
> > index fd528fb5e305..0cae6fc651f0 100644
> > --- a/kernel/fork.c
> > +++ b/kernel/fork.c
> > @@ -447,7 +447,7 @@ static bool vma_lock_alloc(struct vm_area_struct *vma)
> >                 return false;
> >
> >         init_rwsem(&vma->vm_lock->lock);
> > -       vma->vm_lock_seq = -1;
> > +       vma->vm_lock_seq = UINT_MAX;
> >
> >         return true;
> >  }
> > @@ -1260,9 +1260,6 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
> >         seqcount_init(&mm->write_protect_seq);
> >         mmap_init_lock(mm);
> >         INIT_LIST_HEAD(&mm->mmlist);
> > -#ifdef CONFIG_PER_VMA_LOCK
> > -       mm->mm_lock_seq = 0;
> > -#endif
> >         mm_pgtables_bytes_init(mm);
> >         mm->map_count = 0;
> >         mm->locked_vm = 0;
> > diff --git a/mm/init-mm.c b/mm/init-mm.c
> > index 24c809379274..6af3ad675930 100644
> > --- a/mm/init-mm.c
> > +++ b/mm/init-mm.c
> > @@ -40,7 +40,7 @@ struct mm_struct init_mm = {
> >         .arg_lock       = __SPIN_LOCK_UNLOCKED(init_mm.arg_lock),
> >         .mmlist         = LIST_HEAD_INIT(init_mm.mmlist),
> >  #ifdef CONFIG_PER_VMA_LOCK
> > -       .mm_lock_seq    = 0,
> > +       .mm_lock_seq    = SEQCNT_ZERO(init_mm.mm_lock_seq),
> >  #endif
> >         .user_ns        = &init_user_ns,
> >         .cpu_bitmap     = CPU_BITS_NONE,
> > diff --git a/tools/testing/vma/vma.c b/tools/testing/vma/vma.c
> > index 8fab5e13c7c3..9bcf1736bf18 100644
> > --- a/tools/testing/vma/vma.c
> > +++ b/tools/testing/vma/vma.c
> > @@ -89,7 +89,7 @@ static struct vm_area_struct *alloc_and_link_vma(struct mm_struct *mm,
> >          * begun. Linking to the tree will have caused this to be incremented,
> >          * which means we will get a false positive otherwise.
> >          */
> > -       vma->vm_lock_seq = -1;
> > +       vma->vm_lock_seq = UINT_MAX;
> >
> >         return vma;
> >  }
> > @@ -214,7 +214,7 @@ static bool vma_write_started(struct vm_area_struct *vma)
> >         int seq = vma->vm_lock_seq;
> >
> >         /* We reset after each check. */
> > -       vma->vm_lock_seq = -1;
> > +       vma->vm_lock_seq = UINT_MAX;
> >
> >         /* The vma_start_write() stub simply increments this value. */
> >         return seq > -1;
> > diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
> > index e76ff579e1fd..1d9fc97b8e80 100644
> > --- a/tools/testing/vma/vma_internal.h
> > +++ b/tools/testing/vma/vma_internal.h
> > @@ -241,7 +241,7 @@ struct vm_area_struct {
> >          * counter reuse can only lead to occasional unnecessary use of the
> >          * slowpath.
> >          */
> > -       int vm_lock_seq;
> > +       unsigned int vm_lock_seq;
> >         struct vma_lock *vm_lock;
> >  #endif
> >
> > @@ -416,7 +416,7 @@ static inline bool vma_lock_alloc(struct vm_area_struct *vma)
> >                 return false;
> >
> >         init_rwsem(&vma->vm_lock->lock);
> > -       vma->vm_lock_seq = -1;
> > +       vma->vm_lock_seq = UINT_MAX;
> >
> >         return true;
> >  }
> >
> > base-commit: 9c111059234a949a4d3442a413ade19cc65ab927
> > --
> > 2.47.0.163.g1226f6d8fa-goog
> >
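P.P.S. For anyone reviewing from this email alone, the write-side
effect of the patch condenses to roughly the following. This is a
paraphrase of the hunks above, with the tracepoints, asserts, and the
_nested/_killable variants dropped, so don't read it as literal kernel
code:

static inline void mmap_write_lock(struct mm_struct *mm)
{
	down_write(&mm->mmap_lock);
	/* Sequence becomes odd: readers now observe "write-locked". */
	do_raw_write_seqcount_begin(&mm->mm_lock_seq);
}

static inline void mmap_write_unlock(struct mm_struct *mm)
{
	/*
	 * vma_end_write_all(): sequence becomes even again. The write
	 * barrier inside do_raw_write_seqcount_end() orders preceding
	 * VMA stores before the counter update, standing in for the
	 * RELEASE semantics the old smp_store_release() provided.
	 */
	vma_end_write_all(mm);
	up_write(&mm->mmap_lock);
}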