References: <20240808185949.1094891-1-mjguzik@gmail.com>
From: Mateusz Guzik
Date: Thu, 8 Aug 2024 22:03:51 +0200
Subject: Re: [RFC PATCH] vm: align vma allocation and move the lock back into the struct
To: Suren Baghdasaryan
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Liam.Howlett@oracle.com, vbabka@suse.cz, lstoakes@gmail.com, pedro.falcato@gmail.com

On Thu, Aug 8, 2024 at 9:39 PM Suren Baghdasaryan wrote:
>
> On Thu, Aug 8, 2024 at 7:00 PM Mateusz Guzik wrote:
> >
> > ACHTUNG: this is more of a request for benchmarking than a patch
> > proposal at this stage
> >
> > I was pointed at your patch which moved the vma lock to a separate
> > allocation [1]. The commit message does not say anything about making
> > sure the object itself is allocated with proper alignment and I found
> > that the cache creation lacks the HWCACHE_ALIGN flag, which may or may
> > not be the problem.
> >
> > I verified with a simple one-liner that on a stock kernel the vmas keep
> > roaming around with a 16-byte alignment:
> > # bpftrace -e 'kretprobe:vm_area_alloc { @[retval & 0x3f] = count(); }'
> > @[16]: 39
> > @[0]: 46
> > @[32]: 53
> > @[48]: 56
> >
> > Note the stock vma lock cache also lacks the alignment flag. While I
> > have not verified experimentally, if they are also roaming it would mean
> > that 2 unrelated vmas can false-share locks. If the patch below is a
> > bust, the flag should probably be added to that one.
> >
> > The patch has slapped-around vma lock cache removal + HWCACHE_ALIGN for
> > the vma cache. I left a pointer in place to not change relative offsets
> > between the current fields. It does compile without CONFIG_PER_VMA_LOCK.
> >
> > Vlastimil says you tested a case where the struct got bloated to 256
> > bytes, but the lock remained separate. It is unclear to me if this
> > happened with allocations made with the HWCACHE_ALIGN flag though.
> >
> > There is 0 urgency on my end, but it would be nice if you could try
> > this out with your test rig.
>
> Hi Mateusz,
> Sure, I'll give it a spin, but I'm not optimistic. Your code looks
> almost identical to my latest attempt where I tried placing vm_lock
> into different cachelines, including a separate one, and using
> HWCACHE_ALIGN. And yet all my attempts showed a regression.
> Just FYI, the test I'm using is the pft-threads test from the mmtests
> suite. I'll post results this evening.
> Thanks,
> Suren.

Ok, well maybe you did not leave the pointer in place? :)

It is plausible the problem is the on- vs off-cpu behavior of rwsems --
there is a corner case where they neglect to spin. It is plausible perf
goes down simply because there is less on-cpu time.

Thus, when you bench, can you make sure to time(1) it?

For example with zsh I got:
./run-mmtests.sh --no-monitor --config configs/config-workload-pft-threads
39.35s user 445.45s system 390% cpu 124.04s (2:04.04) total

I verified with offcputime-bpfcc -K that indeed there is a bunch of pft
going off cpu from down_read/down_write even at the modest scale this
was running at in my case.
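In case it helps, something along these lines is what I have in mind (a
rough sketch only -- the 30 second window and the grep patterns are just
an example, adjust to whatever your run looks like):

# sample off-cpu time with kernel-only stacks for 30s while pft is running
offcputime-bpfcc -K 30 > offcpu.txt
# then see how much of the blocked time lands in the rwsem slowpaths
grep -c rwsem_down_read_slowpath offcpu.txt
grep -c rwsem_down_write_slowpath offcpu.txt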
> >
> > 1: https://lore.kernel.org/all/20230227173632.3292573-34-surenb@google.com/T/#u
> >
> > ---
> >  include/linux/mm.h       | 18 +++++++-------
> >  include/linux/mm_types.h | 10 ++++-----
> >  kernel/fork.c            | 47 ++++------------------------------
> >  mm/userfaultfd.c         |  6 ++---
> >  4 files changed, 19 insertions(+), 62 deletions(-)
> >
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index 43b40334e9b2..6d8b668d3deb 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -687,7 +687,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
> >         if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq))
> >                 return false;
> >
> > -       if (unlikely(down_read_trylock(&vma->vm_lock->lock) == 0))
> > +       if (unlikely(down_read_trylock(&vma->vm_lock) == 0))
> >                 return false;
> >
> >         /*
> > @@ -702,7 +702,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
> >          * This pairs with RELEASE semantics in vma_end_write_all().
> >          */
> >         if (unlikely(vma->vm_lock_seq == smp_load_acquire(&vma->vm_mm->mm_lock_seq))) {
> > -               up_read(&vma->vm_lock->lock);
> > +               up_read(&vma->vm_lock);
> >                 return false;
> >         }
> >         return true;
> > @@ -711,7 +711,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
> >  static inline void vma_end_read(struct vm_area_struct *vma)
> >  {
> >         rcu_read_lock(); /* keeps vma alive till the end of up_read */
> > -       up_read(&vma->vm_lock->lock);
> > +       up_read(&vma->vm_lock);
> >         rcu_read_unlock();
> >  }
> >
> > @@ -740,7 +740,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
> >         if (__is_vma_write_locked(vma, &mm_lock_seq))
> >                 return;
> >
> > -       down_write(&vma->vm_lock->lock);
> > +       down_write(&vma->vm_lock);
> >         /*
> >          * We should use WRITE_ONCE() here because we can have concurrent reads
> >          * from the early lockless pessimistic check in vma_start_read().
> > @@ -748,7 +748,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
> >          * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
> >          */
> >         WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
> > -       up_write(&vma->vm_lock->lock);
> > +       up_write(&vma->vm_lock);
> >  }
> >
> >  static inline void vma_assert_write_locked(struct vm_area_struct *vma)
> > @@ -760,7 +760,7 @@ static inline void vma_assert_write_locked(struct vm_area_struct *vma)
> >
> >  static inline void vma_assert_locked(struct vm_area_struct *vma)
> >  {
> > -       if (!rwsem_is_locked(&vma->vm_lock->lock))
> > +       if (!rwsem_is_locked(&vma->vm_lock))
> >                 vma_assert_write_locked(vma);
> >  }
> >
> > @@ -827,10 +827,6 @@ static inline void assert_fault_locked(struct vm_fault *vmf)
> >
> >  extern const struct vm_operations_struct vma_dummy_vm_ops;
> >
> > -/*
> > - * WARNING: vma_init does not initialize vma->vm_lock.
> > - * Use vm_area_alloc()/vm_area_free() if vma needs locking.
> > - */
> >  static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
> >  {
> >         memset(vma, 0, sizeof(*vma));
> > @@ -839,6 +835,8 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
> >         INIT_LIST_HEAD(&vma->anon_vma_chain);
> >         vma_mark_detached(vma, false);
> >         vma_numab_state_init(vma);
> > +       init_rwsem(&vma->vm_lock);
> > +       vma->vm_lock_seq = -1;
> >  }
> >
> >  /* Use when VMA is not part of the VMA tree and needs no locking */
> > diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> > index 003619fab20e..caffdb4eeb94 100644
> > --- a/include/linux/mm_types.h
> > +++ b/include/linux/mm_types.h
> > @@ -615,10 +615,6 @@ static inline struct anon_vma_name *anon_vma_name_alloc(const char *name)
> >  }
> >  #endif
> >
> > -struct vma_lock {
> > -       struct rw_semaphore lock;
> > -};
> > -
> >  struct vma_numab_state {
> >         /*
> >          * Initialised as time in 'jiffies' after which VMA
> > @@ -716,8 +712,7 @@ struct vm_area_struct {
> >          * slowpath.
> >          */
> >         int vm_lock_seq;
> > -       /* Unstable RCU readers are allowed to read this. */
> > -       struct vma_lock *vm_lock;
> > +       void *vm_dummy;
> >  #endif
> >
> >         /*
> > @@ -770,6 +765,9 @@ struct vm_area_struct {
> >         struct vma_numab_state *numab_state;    /* NUMA Balancing state */
> >  #endif
> >         struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
> > +#ifdef CONFIG_PER_VMA_LOCK
> > +       struct rw_semaphore vm_lock ____cacheline_aligned_in_smp;
> > +#endif
> >  } __randomize_layout;
> >
> >  #ifdef CONFIG_NUMA
> > diff --git a/kernel/fork.c b/kernel/fork.c
> > index 92bfe56c9fed..eab04a24d5f1 100644
> > --- a/kernel/fork.c
> > +++ b/kernel/fork.c
> > @@ -436,35 +436,6 @@ static struct kmem_cache *vm_area_cachep;
> >  /* SLAB cache for mm_struct structures (tsk->mm) */
> >  static struct kmem_cache *mm_cachep;
> >
> > -#ifdef CONFIG_PER_VMA_LOCK
> > -
> > -/* SLAB cache for vm_area_struct.lock */
> > -static struct kmem_cache *vma_lock_cachep;
> > -
> > -static bool vma_lock_alloc(struct vm_area_struct *vma)
> > -{
> > -       vma->vm_lock = kmem_cache_alloc(vma_lock_cachep, GFP_KERNEL);
> > -       if (!vma->vm_lock)
> > -               return false;
> > -
> > -       init_rwsem(&vma->vm_lock->lock);
> > -       vma->vm_lock_seq = -1;
> > -
> > -       return true;
> > -}
> > -
> > -static inline void vma_lock_free(struct vm_area_struct *vma)
> > -{
> > -       kmem_cache_free(vma_lock_cachep, vma->vm_lock);
> > -}
> > -
> > -#else /* CONFIG_PER_VMA_LOCK */
> > -
> > -static inline bool vma_lock_alloc(struct vm_area_struct *vma) { return true; }
> > -static inline void vma_lock_free(struct vm_area_struct *vma) {}
> > -
> > -#endif /* CONFIG_PER_VMA_LOCK */
> > -
> >  struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
> >  {
> >         struct vm_area_struct *vma;
> > @@ -474,10 +445,6 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
> >                 return NULL;
> >
> >         vma_init(vma, mm);
> > -       if (!vma_lock_alloc(vma)) {
> > -               kmem_cache_free(vm_area_cachep, vma);
> > -               return NULL;
> > -       }
> >
> >         return vma;
> >  }
> > @@ -496,10 +463,8 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
> >          * will be reinitialized.
> >          */
> >         data_race(memcpy(new, orig, sizeof(*new)));
> > -       if (!vma_lock_alloc(new)) {
> > -               kmem_cache_free(vm_area_cachep, new);
> > -               return NULL;
> > -       }
> > +       init_rwsem(&new->vm_lock);
> > +       new->vm_lock_seq = -1;
> >         INIT_LIST_HEAD(&new->anon_vma_chain);
> >         vma_numab_state_init(new);
> >         dup_anon_vma_name(orig, new);
> > @@ -511,7 +476,6 @@ void __vm_area_free(struct vm_area_struct *vma)
> >  {
> >         vma_numab_state_free(vma);
> >         free_anon_vma_name(vma);
> > -       vma_lock_free(vma);
> >         kmem_cache_free(vm_area_cachep, vma);
> >  }
> >
> > @@ -522,7 +486,7 @@ static void vm_area_free_rcu_cb(struct rcu_head *head)
> >                                                   vm_rcu);
> >
> >         /* The vma should not be locked while being destroyed. */
> > -       VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock->lock), vma);
> > +       VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock), vma);
> >         __vm_area_free(vma);
> >  }
> >  #endif
> > @@ -3192,10 +3156,7 @@ void __init proc_caches_init(void)
> >                         SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
> >                         NULL);
> >
> > -       vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC|SLAB_ACCOUNT);
> > -#ifdef CONFIG_PER_VMA_LOCK
> > -       vma_lock_cachep = KMEM_CACHE(vma_lock, SLAB_PANIC|SLAB_ACCOUNT);
> > -#endif
> > +       vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC|SLAB_ACCOUNT|SLAB_HWCACHE_ALIGN);
> >         mmap_init();
> >         nsproxy_cache_init();
> >  }
> > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> > index 3b7715ecf292..e95ecb2063d2 100644
> > --- a/mm/userfaultfd.c
> > +++ b/mm/userfaultfd.c
> > @@ -92,7 +92,7 @@ static struct vm_area_struct *uffd_lock_vma(struct mm_struct *mm,
> >                  * mmap_lock, which guarantees that nobody can lock the
> >                  * vma for write (vma_start_write()) under us.
> >                  */
> > -               down_read(&vma->vm_lock->lock);
> > +               down_read(&vma->vm_lock);
> >         }
> >
> >         mmap_read_unlock(mm);
> > @@ -1468,9 +1468,9 @@ static int uffd_move_lock(struct mm_struct *mm,
> >                  * See comment in uffd_lock_vma() as to why not using
> >                  * vma_start_read() here.
> >                  */
> > -               down_read(&(*dst_vmap)->vm_lock->lock);
> > +               down_read(&(*dst_vmap)->vm_lock);
> >                 if (*dst_vmap != *src_vmap)
> > -                       down_read_nested(&(*src_vmap)->vm_lock->lock,
> > +                       down_read_nested(&(*src_vmap)->vm_lock,
> >                                          SINGLE_DEPTH_NESTING);
> >         }
> >         mmap_read_unlock(mm);
> > --
> > 2.43.0
> >

-- 
Mateusz Guzik