From mboxrd@z Thu Jan 1 00:00:00 1970
From: Suren Baghdasaryan <surenb@google.com>
Date: Fri, 10 Jan 2025 08:47:18 -0800
Subject: Re: [PATCH v8 11/16] mm: replace vm_lock and detached flag with a reference count
To: Vlastimil Babka
Cc: akpm@linux-foundation.org, peterz@infradead.org, willy@infradead.org,
	liam.howlett@oracle.com, lorenzo.stoakes@oracle.com, mhocko@suse.com,
	hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
	mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
	oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org,
	brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com,
	hughd@google.com, lokeshgidra@google.com, minchan@google.com,
	jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com,
	pasha.tatashin@soleen.com, klarasmodin@gmail.com,
	richard.weiyang@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@android.com
References: <20250109023025.2242447-1-surenb@google.com> <20250109023025.2242447-12-surenb@google.com> <95e9d80e-6c19-4a1f-9c21-307006858dff@suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"

On Fri, Jan 10, 2025 at 7:56 AM Suren Baghdasaryan <surenb@google.com> wrote:
>
> On Fri, Jan 10, 2025 at 6:33 AM Vlastimil Babka wrote:
> >
> > On 1/9/25 3:30 AM, Suren Baghdasaryan wrote:
> > > rw_semaphore is a sizable structure of 40 bytes and consumes
> > > considerable space for each vm_area_struct. However vma_lock has
> > > two important specifics which can be used to replace rw_semaphore
> > > with a simpler structure:
> > > 1. Readers never wait. They try to take the vma_lock and fall back to
> > >    mmap_lock if that fails.
> > > 2. Only one writer at a time will ever try to write-lock a vma_lock
> > >    because writers first take mmap_lock in write mode.
> > > Because of these requirements, full rw_semaphore functionality is not
> > > needed and we can replace rw_semaphore and the vma->detached flag with
> > > a refcount (vm_refcnt).
> > > When a vma is in detached state, vm_refcnt is 0 and only a call to
> > > vma_mark_attached() can take it out of this state. Note that unlike
> > > before, we now enforce that both vma_mark_attached() and
> > > vma_mark_detached() are done only after the vma has been write-locked.
> > > vma_mark_attached() changes vm_refcnt to 1 to indicate that the vma has
> > > been attached to the vma tree. When a reader takes the read lock, it
> > > increments vm_refcnt, unless the top usable bit of vm_refcnt
> > > (0x40000000) is set, indicating the presence of a writer. When a writer
> > > takes the write lock, it sets the top usable bit to indicate its
> > > presence. If there are readers, the writer will wait using the newly
> > > introduced mm->vma_writer_wait. Since all writers take mmap_lock in
> > > write mode first, there can be only one writer at a time. The last
> > > reader to release the lock will signal the writer to wake up.
> > > The refcount might overflow if there are many competing readers, in
> > > which case read-locking will fail. Readers are expected to handle such
> > > failures.
> > > In summary:
> > > 1. all readers increment the vm_refcnt;
> > > 2. writer sets top usable (writer) bit of vm_refcnt;
> > > 3. readers cannot increment the vm_refcnt if the writer bit is set;
> > > 4. in the presence of readers, writer must wait for the vm_refcnt to
> > >    drop to 1 (ignoring the writer bit), indicating an attached vma
> > >    with no readers;
> > > 5. vm_refcnt overflow is handled by the readers.
> > >
> > > Suggested-by: Peter Zijlstra
> > > Suggested-by: Matthew Wilcox
> > > Signed-off-by: Suren Baghdasaryan
> >
> > Reviewed-by: Vlastimil Babka
> >
> > But I think there's a problem that will manifest after patch 15.
> > Also I don't feel qualified enough about the lockdep parts
> > (although I think I spotted another issue with those, below), so best
> > if PeterZ can review those.
> > Some nits below too.
> >
> > > +
> > > +static inline void vma_refcount_put(struct vm_area_struct *vma)
> > > +{
> > > +	int oldcnt;
> > > +
> > > +	if (!__refcount_dec_and_test(&vma->vm_refcnt, &oldcnt)) {
> > > +		rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
> >
> > Shouldn't we rwsem_release always? And also shouldn't it precede the
> > refcount operation itself?
>
> Yes. Hillf pointed to the same issue. It will be fixed in the next version.
>
> >
> > > +		if (is_vma_writer_only(oldcnt - 1))
> > > +			rcuwait_wake_up(&vma->vm_mm->vma_writer_wait);
> >
> > Hmm, we should maybe read the vm_mm pointer before dropping the
> > refcount? In case this races in a way that is_vma_writer_only tests
> > true but the writer meanwhile finishes and frees the vma. It's safe now
> > but not after making the cache SLAB_TYPESAFE_BY_RCU?
>
> Hmm. But if is_vma_writer_only() is true, that means the writer is
> blocked and is waiting for the reader to drop the vm_refcnt. IOW, it
> won't proceed and free the vma until the reader calls
> rcuwait_wake_up(). Your suggested change is trivial and I can do it,
> but I want to make sure I'm not missing something. Am I?
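As an aside, the reader/writer scheme summarized in the quoted commit message can be sketched in plain userspace C with atomics. This is an illustrative model only, under assumed names (`vma_model`, `vma_start_read`, `vma_writer_enter`, the overflow limit), not the kernel implementation:

```c
/* Userspace sketch of the vm_refcnt scheme: 0 == detached, 1 == attached
 * with no readers, top usable bit (0x40000000) == writer present. */
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

#define VMA_LOCK_OFFSET 0x40000000              /* top usable (writer) bit */
#define VMA_REF_LIMIT   (VMA_LOCK_OFFSET - 1)   /* illustrative overflow cap */

struct vma_model {
	atomic_int vm_refcnt;                   /* 0 == detached */
};

/* Attaching sets the refcount to 1 (in the real code, done write-locked). */
static void vma_mark_attached(struct vma_model *vma)
{
	atomic_store(&vma->vm_refcnt, 1);
}

/* Readers never wait: the increment fails if the vma is detached, a
 * writer has set the writer bit, or the count would overflow. Callers
 * fall back to mmap_lock on failure. */
static bool vma_start_read(struct vma_model *vma)
{
	int cnt = atomic_load(&vma->vm_refcnt);

	do {
		/* detached, writer present, or about to overflow */
		if (cnt == 0 || cnt >= VMA_REF_LIMIT)
			return false;
	} while (!atomic_compare_exchange_weak(&vma->vm_refcnt, &cnt, cnt + 1));
	return true;
}

/* The single writer (serialized by mmap_lock) announces itself by adding
 * the writer bit; the real code then waits on mm->vma_writer_wait until
 * only the attach reference remains. */
static bool vma_writer_enter(struct vma_model *vma)
{
	int cnt = atomic_load(&vma->vm_refcnt);

	do {
		if (cnt == 0)                   /* detached: nothing to lock */
			return false;
	} while (!atomic_compare_exchange_weak(&vma->vm_refcnt, &cnt,
					       cnt + VMA_LOCK_OFFSET));
	return true;
}
```

The model reproduces the summary: readers increment, the writer adds the bit, and readers refuse to increment once the bit is set.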
Ok, after thinking some more, I think the race you might be referring
to is this:

writer                                          reader
__vma_enter_locked
  refcount_add_not_zero(VMA_LOCK_OFFSET, ...)
                                                vma_refcount_put
                                                  __refcount_dec_and_test()
                                                  if (is_vma_writer_only())
rcuwait_wait_event(&vma->vm_mm->vma_writer_wait, ...)
__vma_exit_locked
  refcount_sub_and_test(VMA_LOCK_OFFSET, ...)
free the vma
                                                  rcuwait_wake_up(&vma->vm_mm->vma_writer_wait);

I think it's possible and your suggestion of storing the mm before
doing __refcount_dec_and_test() should work. Thanks for pointing this
out! I'll fix it in the next version.

>
> >
> > > +	}
> > > +}
> > > +
> > >  static inline void vma_end_read(struct vm_area_struct *vma)
> > >  {
> > >  	rcu_read_lock(); /* keeps vma alive till the end of up_read */
> >
> > This should refer to vma_refcount_put(). But after fixing it I think we
> > could stop doing this altogether? It will no longer keep vma "alive"
> > with SLAB_TYPESAFE_BY_RCU.
>
> Yeah, I think the comment along with rcu_read_lock()/rcu_read_unlock()
> here can be safely removed.
>
> >
> > > -	up_read(&vma->vm_lock.lock);
> > > +	vma_refcount_put(vma);
> > >  	rcu_read_unlock();
> > >  }
> > >
> > >
> > > --- a/mm/memory.c
> > > +++ b/mm/memory.c
> > > @@ -6370,9 +6370,41 @@ struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
> > >  #endif
> > >
> > >  #ifdef CONFIG_PER_VMA_LOCK
> > > +static inline bool __vma_enter_locked(struct vm_area_struct *vma, unsigned int tgt_refcnt)
> > > +{
> > > +	/*
> > > +	 * If vma is detached then only vma_mark_attached() can raise the
> > > +	 * vm_refcnt. mmap_write_lock prevents racing with vma_mark_attached().
> > > +	 */
> > > +	if (!refcount_add_not_zero(VMA_LOCK_OFFSET, &vma->vm_refcnt))
> > > +		return false;
> > > +
> > > +	rwsem_acquire(&vma->vmlock_dep_map, 0, 0, _RET_IP_);
> > > +	rcuwait_wait_event(&vma->vm_mm->vma_writer_wait,
> > > +			   refcount_read(&vma->vm_refcnt) == tgt_refcnt,
> > > +			   TASK_UNINTERRUPTIBLE);
> > > +	lock_acquired(&vma->vmlock_dep_map, _RET_IP_);
> > > +
> > > +	return true;
> > > +}
> > > +
> > > +static inline void __vma_exit_locked(struct vm_area_struct *vma, bool *detached)
> > > +{
> > > +	*detached = refcount_sub_and_test(VMA_LOCK_OFFSET, &vma->vm_refcnt);
> > > +	rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
> > > +}
> > > +
> > >  void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq)
> > >  {
> > > -	down_write(&vma->vm_lock.lock);
> > > +	bool locked;
> > > +
> > > +	/*
> > > +	 * __vma_enter_locked() returns false immediately if the vma is not
> > > +	 * attached, otherwise it waits until refcnt is (VMA_LOCK_OFFSET + 1)
> > > +	 * indicating that vma is attached with no readers.
> > > +	 */
> > > +	locked = __vma_enter_locked(vma, VMA_LOCK_OFFSET + 1);
> >
> > Wonder if it would be slightly better if tgt_refcnt was just 1 (or 0
> > below in vma_mark_detached()) and the VMA_LOCK_OFFSET added to it in
> > __vma_enter_locked() itself, as it's the one adding it in the first place.
>
> Well, it won't be called tgt_refcnt then. Maybe "bool vma_attached",
> and inside __vma_enter_locked() we do:
>
> unsigned int tgt_refcnt = VMA_LOCK_OFFSET + (vma_attached ? 1 : 0);
>
> Is that better?
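To make the use-after-free race discussed earlier in the thread concrete, here is a userspace sketch of the agreed fix: read `vm_mm` before dropping the refcount, so the vma is never dereferenced after the decrement. The types (`mm_model`, `writer_wakeups`) and the exact `is_vma_writer_only()` condition are illustrative assumptions, not the kernel code:

```c
/* Sketch of the fixed vma_refcount_put(): the mm pointer is loaded
 * before the decrement, because once the count drops the blocked writer
 * may proceed, free the vma, and (with SLAB_TYPESAFE_BY_RCU) the vma
 * memory may be reused immediately. */
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

#define VMA_LOCK_OFFSET 0x40000000

struct mm_model  { atomic_int writer_wakeups; };
struct vma_model { atomic_int vm_refcnt; struct mm_model *vm_mm; };

/* Illustrative condition: writer bit set and only the attach reference
 * remains after our decrement. */
static bool is_vma_writer_only(int refcnt)
{
	return refcnt == VMA_LOCK_OFFSET + 1;
}

static void vma_refcount_put(struct vma_model *vma)
{
	/* Read vm_mm BEFORE dropping the refcount; vma must not be
	 * touched after the decrement below. */
	struct mm_model *mm = vma->vm_mm;
	int oldcnt = atomic_fetch_sub(&vma->vm_refcnt, 1);

	if (is_vma_writer_only(oldcnt - 1))
		atomic_fetch_add(&mm->writer_wakeups, 1); /* stand-in for rcuwait_wake_up() */
}
```

The key design point is simply ordering: every field needed after the decrement is captured into a local first, which is exactly the change proposed for the next version of the patch.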