From: Suren Baghdasaryan <surenb@google.com>
Date: Thu, 8 Aug 2024 14:19:30 -0700
Subject: Re: [RFC PATCH] vm: align vma allocation and move the lock back into the struct
To: Mateusz Guzik
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Liam.Howlett@oracle.com, vbabka@suse.cz, lstoakes@gmail.com, pedro.falcato@gmail.com
References: <20240808185949.1094891-1-mjguzik@gmail.com>

On Thu, Aug 8, 2024
at 1:04 PM Mateusz Guzik wrote:
>
> On Thu, Aug 8, 2024 at 9:39 PM Suren Baghdasaryan wrote:
> >
> > On Thu, Aug 8, 2024 at 7:00 PM Mateusz Guzik wrote:
> > >
> > > ACHTUNG: this is more of a request for benchmarking than a patch
> > > proposal at this stage.
> > >
> > > I was pointed at your patch which moved the vma lock to a separate
> > > allocation [1]. The commit message does not say anything about making
> > > sure the object itself is allocated with proper alignment, and I found
> > > that the cache creation lacks the HWCACHE_ALIGN flag, which may or may
> > > not be the problem.
> > >
> > > I verified with a simple one-liner that on a stock kernel the vmas keep
> > > roaming around with 16-byte alignment:
> > > # bpftrace -e 'kretprobe:vm_area_alloc { @[retval & 0x3f] = count(); }'
> > > @[16]: 39
> > > @[0]: 46
> > > @[32]: 53
> > > @[48]: 56
> > >
> > > Note the stock vma lock cache also lacks the alignment flag. While I
> > > have not verified it experimentally, if those objects are also roaming
> > > it would mean that 2 unrelated vmas can false-share locks. If the patch
> > > below is a bust, the flag should probably be added to that cache.
> > >
> > > The patch below is a slapped-together removal of the vma lock cache
> > > plus SLAB_HWCACHE_ALIGN for the vma cache. I left a pointer in place so
> > > as not to change the relative offsets of the current fields. It does
> > > compile without CONFIG_PER_VMA_LOCK.
> > >
> > > Vlastimil says you tested a case where the struct got bloated to 256
> > > bytes but the lock remained separate. It is unclear to me whether that
> > > was done with allocations made with the HWCACHE_ALIGN flag, though.
> > >
> > > There is zero urgency on my end, but it would be nice if you could try
> > > this out with your test rig.
> >
> > Hi Mateusz,
> > Sure, I'll give it a spin, but I'm not optimistic. Your code looks
> > almost identical to my latest attempt, where I tried placing vm_lock
> > into different cachelines, including a separate one, and using
> > HWCACHE_ALIGN. Yet all my attempts showed a regression.
> > Just FYI, the test I'm using is the pft-threads test from the mmtests
> > suite. I'll post results this evening.
> > Thanks,
> > Suren.
>
> Ok, well, maybe you did not leave the pointer in place? :)

True, maybe that will make a difference. I'll let you know soon.

> It is plausible the problem is the on-CPU vs off-CPU behavior of rwsems --
> there is a corner case where they neglect to spin. It is plausible perf
> goes down simply because there is less on-CPU time.
>
> So when you bench, can you make sure to time(1) it?

Sure, will do once I'm home. Thanks for the hints!

> For example, with zsh I got:
> ./run-mmtests.sh --no-monitor --config configs/config-workload-pft-threads
>
> 39.35s user 445.45s system 390% cpu 124.04s (2:04.04) total
>
> I verified with offcputime-bpfcc -K that there is indeed a bunch of
> pft going off-CPU in down_read/down_write, even at the modest scale
> this was running at in my case.
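
(A minimal sketch of combining the two measurements mentioned above; GNU
time's -v flag and the 30-second capture window are illustrative
placeholders, not values from this thread:)

# Wrap a single run so user/system/elapsed time is recorded explicitly:
/usr/bin/time -v ./run-mmtests.sh --no-monitor --config configs/config-workload-pft-threads

# Meanwhile, from another terminal, collect kernel-only off-CPU stacks for
# 30 seconds; time spent blocked in down_read/down_write shows up here:
offcputime-bpfcc -K 30 > offcpu-stacks.txt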
>
> >
> > >
> > > 1: https://lore.kernel.org/all/20230227173632.3292573-34-surenb@google.com/T/#u
> > >
> > > ---
> > >  include/linux/mm.h       | 18 +++++++--------
> > >  include/linux/mm_types.h | 10 ++++-----
> > >  kernel/fork.c            | 47 ++++------------------------------------
> > >  mm/userfaultfd.c         |  6 ++---
> > >  4 files changed, 19 insertions(+), 62 deletions(-)
> > >
> > > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > > index 43b40334e9b2..6d8b668d3deb 100644
> > > --- a/include/linux/mm.h
> > > +++ b/include/linux/mm.h
> > > @@ -687,7 +687,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
> > >         if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq))
> > >                 return false;
> > >
> > > -       if (unlikely(down_read_trylock(&vma->vm_lock->lock) == 0))
> > > +       if (unlikely(down_read_trylock(&vma->vm_lock) == 0))
> > >                 return false;
> > >
> > >         /*
> > > @@ -702,7 +702,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
> > >          * This pairs with RELEASE semantics in vma_end_write_all().
> > >          */
> > >         if (unlikely(vma->vm_lock_seq == smp_load_acquire(&vma->vm_mm->mm_lock_seq))) {
> > > -               up_read(&vma->vm_lock->lock);
> > > +               up_read(&vma->vm_lock);
> > >                 return false;
> > >         }
> > >         return true;
> > > @@ -711,7 +711,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
> > >  static inline void vma_end_read(struct vm_area_struct *vma)
> > >  {
> > >         rcu_read_lock(); /* keeps vma alive till the end of up_read */
> > > -       up_read(&vma->vm_lock->lock);
> > > +       up_read(&vma->vm_lock);
> > >         rcu_read_unlock();
> > >  }
> > >
> > > @@ -740,7 +740,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
> > >         if (__is_vma_write_locked(vma, &mm_lock_seq))
> > >                 return;
> > >
> > > -       down_write(&vma->vm_lock->lock);
> > > +       down_write(&vma->vm_lock);
> > >         /*
> > >          * We should use WRITE_ONCE() here because we can have concurrent reads
> > >          * from the early lockless pessimistic check in vma_start_read().
> > > @@ -748,7 +748,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
> > >          * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
> > >          */
> > >         WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
> > > -       up_write(&vma->vm_lock->lock);
> > > +       up_write(&vma->vm_lock);
> > >  }
> > >
> > >  static inline void vma_assert_write_locked(struct vm_area_struct *vma)
> > > @@ -760,7 +760,7 @@ static inline void vma_assert_write_locked(struct vm_area_struct *vma)
> > >
> > >  static inline void vma_assert_locked(struct vm_area_struct *vma)
> > >  {
> > > -       if (!rwsem_is_locked(&vma->vm_lock->lock))
> > > +       if (!rwsem_is_locked(&vma->vm_lock))
> > >                 vma_assert_write_locked(vma);
> > >  }
> > >
> > > @@ -827,10 +827,6 @@ static inline void assert_fault_locked(struct vm_fault *vmf)
> > >
> > >  extern const struct vm_operations_struct vma_dummy_vm_ops;
> > >
> > > -/*
> > > - * WARNING: vma_init does not initialize vma->vm_lock.
> > > - * Use vm_area_alloc()/vm_area_free() if vma needs locking.
> > > - */
> > >  static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
> > >  {
> > >         memset(vma, 0, sizeof(*vma));
> > > @@ -839,6 +835,8 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
> > >         INIT_LIST_HEAD(&vma->anon_vma_chain);
> > >         vma_mark_detached(vma, false);
> > >         vma_numab_state_init(vma);
> > > +       init_rwsem(&vma->vm_lock);
> > > +       vma->vm_lock_seq = -1;
> > >  }
> > >
> > >  /* Use when VMA is not part of the VMA tree and needs no locking */
> > > diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> > > index 003619fab20e..caffdb4eeb94 100644
> > > --- a/include/linux/mm_types.h
> > > +++ b/include/linux/mm_types.h
> > > @@ -615,10 +615,6 @@ static inline struct anon_vma_name *anon_vma_name_alloc(const char *name)
> > >  }
> > >  #endif
> > >
> > > -struct vma_lock {
> > > -       struct rw_semaphore lock;
> > > -};
> > > -
> > >  struct vma_numab_state {
> > >         /*
> > >          * Initialised as time in 'jiffies' after which VMA
> > > @@ -716,8 +712,7 @@ struct vm_area_struct {
> > >          * slowpath.
> > >          */
> > >         int vm_lock_seq;
> > > -       /* Unstable RCU readers are allowed to read this. */
> > > -       struct vma_lock *vm_lock;
> > > +       void *vm_dummy;
> > >  #endif
> > >
> > >         /*
> > > @@ -770,6 +765,9 @@ struct vm_area_struct {
> > >                 struct vma_numab_state *numab_state;    /* NUMA Balancing state */
> > >  #endif
> > >         struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
> > > +#ifdef CONFIG_PER_VMA_LOCK
> > > +       struct rw_semaphore vm_lock ____cacheline_aligned_in_smp;
> > > +#endif
> > >  } __randomize_layout;
> > >
> > >  #ifdef CONFIG_NUMA
> > > diff --git a/kernel/fork.c b/kernel/fork.c
> > > index 92bfe56c9fed..eab04a24d5f1 100644
> > > --- a/kernel/fork.c
> > > +++ b/kernel/fork.c
> > > @@ -436,35 +436,6 @@ static struct kmem_cache *vm_area_cachep;
> > >  /* SLAB cache for mm_struct structures (tsk->mm) */
> > >  static struct kmem_cache *mm_cachep;
> > >
> > > -#ifdef CONFIG_PER_VMA_LOCK
> > > -
> > > -/* SLAB cache for vm_area_struct.lock */
> > > -static struct kmem_cache *vma_lock_cachep;
> > > -
> > > -static bool vma_lock_alloc(struct vm_area_struct *vma)
> > > -{
> > > -       vma->vm_lock = kmem_cache_alloc(vma_lock_cachep, GFP_KERNEL);
> > > -       if (!vma->vm_lock)
> > > -               return false;
> > > -
> > > -       init_rwsem(&vma->vm_lock->lock);
> > > -       vma->vm_lock_seq = -1;
> > > -
> > > -       return true;
> > > -}
> > > -
> > > -static inline void vma_lock_free(struct vm_area_struct *vma)
> > > -{
> > > -       kmem_cache_free(vma_lock_cachep, vma->vm_lock);
> > > -}
> > > -
> > > -#else /* CONFIG_PER_VMA_LOCK */
> > > -
> > > -static inline bool vma_lock_alloc(struct vm_area_struct *vma) { return true; }
> > > -static inline void vma_lock_free(struct vm_area_struct *vma) {}
> > > -
> > > -#endif /* CONFIG_PER_VMA_LOCK */
> > > -
> > >  struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
> > >  {
> > >         struct vm_area_struct *vma;
> > > @@ -474,10 +445,6 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
> > >                 return NULL;
> > >
> > >         vma_init(vma, mm);
> > > -       if (!vma_lock_alloc(vma)) {
> > > -               kmem_cache_free(vm_area_cachep, vma);
> > > -               return NULL;
> > > -       }
> > >
> > >         return vma;
> > >  }
> > > @@ -496,10 +463,8 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
> > >          * will be reinitialized.
> > >          */
> > >         data_race(memcpy(new, orig, sizeof(*new)));
> > > -       if (!vma_lock_alloc(new)) {
> > > -               kmem_cache_free(vm_area_cachep, new);
> > > -               return NULL;
> > > -       }
> > > +       init_rwsem(&new->vm_lock);
> > > +       new->vm_lock_seq = -1;
> > >         INIT_LIST_HEAD(&new->anon_vma_chain);
> > >         vma_numab_state_init(new);
> > >         dup_anon_vma_name(orig, new);
> > > @@ -511,7 +476,6 @@ void __vm_area_free(struct vm_area_struct *vma)
> > >  {
> > >         vma_numab_state_free(vma);
> > >         free_anon_vma_name(vma);
> > > -       vma_lock_free(vma);
> > >         kmem_cache_free(vm_area_cachep, vma);
> > >  }
> > >
> > > @@ -522,7 +486,7 @@ static void vm_area_free_rcu_cb(struct rcu_head *head)
> > >                                                   vm_rcu);
> > >
> > >         /* The vma should not be locked while being destroyed. */
> > > -       VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock->lock), vma);
> > > +       VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock), vma);
> > >         __vm_area_free(vma);
> > >  }
> > >  #endif
> > > @@ -3192,10 +3156,7 @@ void __init proc_caches_init(void)
> > >                         SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
> > >                         NULL);
> > >
> > > -       vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC|SLAB_ACCOUNT);
> > > -#ifdef CONFIG_PER_VMA_LOCK
> > > -       vma_lock_cachep = KMEM_CACHE(vma_lock, SLAB_PANIC|SLAB_ACCOUNT);
> > > -#endif
> > > +       vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC|SLAB_ACCOUNT|SLAB_HWCACHE_ALIGN);
> > >         mmap_init();
> > >         nsproxy_cache_init();
> > >  }
> > > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> > > index 3b7715ecf292..e95ecb2063d2 100644
> > > --- a/mm/userfaultfd.c
> > > +++ b/mm/userfaultfd.c
> > > @@ -92,7 +92,7 @@ static struct vm_area_struct *uffd_lock_vma(struct mm_struct *mm,
> > >                  * mmap_lock, which guarantees that nobody can lock the
> > >                  * vma for write (vma_start_write()) under us.
> > >                  */
> > > -               down_read(&vma->vm_lock->lock);
> > > +               down_read(&vma->vm_lock);
> > >         }
> > >
> > >         mmap_read_unlock(mm);
> > > @@ -1468,9 +1468,9 @@ static int uffd_move_lock(struct mm_struct *mm,
> > >          * See comment in uffd_lock_vma() as to why not using
> > >          * vma_start_read() here.
> > >          */
> > > -       down_read(&(*dst_vmap)->vm_lock->lock);
> > > +       down_read(&(*dst_vmap)->vm_lock);
> > >         if (*dst_vmap != *src_vmap)
> > > -               down_read_nested(&(*src_vmap)->vm_lock->lock,
> > > +               down_read_nested(&(*src_vmap)->vm_lock,
> > >                                  SINGLE_DEPTH_NESTING);
> > >         }
> > >         mmap_read_unlock(mm);
> > > --
> > > 2.43.0
> > >
> >
>
> --
> Mateusz Guzik
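
(As a follow-up check, the same bpftrace probe used earlier in the thread can
be re-run with the patch applied; with SLAB_HWCACHE_ALIGN in effect every
vm_area_struct should start on a cacheline boundary, so on a machine with
64-byte cachelines only the @[0] bucket is expected to show up:)

# bpftrace -e 'kretprobe:vm_area_alloc { @[retval & 0x3f] = count(); }'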