From: Suren Baghdasaryan <surenb@google.com>
Date: Fri, 10 Jan 2025 12:40:28 -0800
Subject: Re: [PATCH v8 15/16] mm: make vma cache SLAB_TYPESAFE_BY_RCU
To: "Liam R. Howlett", Suren Baghdasaryan, akpm@linux-foundation.org, peterz@infradead.org, willy@infradead.org, lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com, mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com, oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com, hughd@google.com, lokeshgidra@google.com, minchan@google.com, jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com, klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@android.com
In-Reply-To: <6vdkyipj4v7kmgra7huvebbkimz2t63tx6rkbjxbavaccmlbmb@udqijfgkbgfv>
References: <20250109023025.2242447-1-surenb@google.com> <20250109023025.2242447-16-surenb@google.com> <6vdkyipj4v7kmgra7huvebbkimz2t63tx6rkbjxbavaccmlbmb@udqijfgkbgfv>
On Fri, Jan 10, 2025 at 11:51 AM 'Liam R. Howlett' via kernel-team wrote:
>
> * Suren Baghdasaryan [250110 14:08]:
> > On Fri, Jan 10, 2025 at 9:48 AM Liam R. Howlett wrote:
> > >
> > > * Suren Baghdasaryan [250108 21:31]:
> > > > To enable SLAB_TYPESAFE_BY_RCU for vma cache we need to ensure that
> > > > object reuse before RCU grace period is over will be detected by
> > > > lock_vma_under_rcu().
> > > > Current checks are sufficient as long as vma is detached before it is
> > > > freed. The only place this is not currently happening is in exit_mmap().
> > > > Add the missing vma_mark_detached() in exit_mmap().
> > > > Another issue which might trick lock_vma_under_rcu() during vma reuse
> > > > is vm_area_dup(), which copies the entire content of the vma into a new
> > > > one, overriding new vma's vm_refcnt and temporarily making it appear as
> > > > attached. This might trick a racing lock_vma_under_rcu() to operate on
> > > > a reused vma if it found the vma before it got reused. To prevent this
> > > > situation, we should ensure that vm_refcnt stays at detached state (0)
> > > > when it is copied and advances to attached state only after it is added
> > > > into the vma tree. Introduce vma_copy() which preserves new vma's
> > > > vm_refcnt and use it in vm_area_dup().
> > > > Since all vmas are in detached
> > > > state with no current readers when they are freed, lock_vma_under_rcu()
> > > > will not be able to take vm_refcnt after vma got detached even if vma
> > > > is reused.
> > > > Finally, make vm_area_cachep SLAB_TYPESAFE_BY_RCU. This will facilitate
> > > > vm_area_struct reuse and will minimize the number of call_rcu() calls.
> > > >
> > > > Signed-off-by: Suren Baghdasaryan
> > > > ---
> > > >  include/linux/mm.h               |  2 -
> > > >  include/linux/mm_types.h         | 10 +++--
> > > >  include/linux/slab.h             |  6 ---
> > > >  kernel/fork.c                    | 72 ++++++++++++++++++++----------
> > > >  mm/mmap.c                        |  3 +-
> > > >  mm/vma.c                         | 11 ++---
> > > >  mm/vma.h                         |  2 +-
> > > >  tools/testing/vma/vma_internal.h |  7 +---
> > > >  8 files changed, 59 insertions(+), 54 deletions(-)
> > > >
> > > > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > > > index 1d6b1563b956..a674558e4c05 100644
> > > > --- a/include/linux/mm.h
> > > > +++ b/include/linux/mm.h
> > > > @@ -258,8 +258,6 @@ void setup_initial_init_mm(void *start_code, void *end_code,
> > > >  struct vm_area_struct *vm_area_alloc(struct mm_struct *);
> > > >  struct vm_area_struct *vm_area_dup(struct vm_area_struct *);
> > > >  void vm_area_free(struct vm_area_struct *);
> > > > -/* Use only if VMA has no other users */
> > > > -void __vm_area_free(struct vm_area_struct *vma);
> > > >
> > > >  #ifndef CONFIG_MMU
> > > >  extern struct rb_root nommu_region_tree;
> > > > diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> > > > index 2d83d79d1899..93bfcd0c1fde 100644
> > > > --- a/include/linux/mm_types.h
> > > > +++ b/include/linux/mm_types.h
> > > > @@ -582,6 +582,12 @@ static inline void *folio_get_private(struct folio *folio)
> > > >
> > > >  typedef unsigned long vm_flags_t;
> > > >
> > > > +/*
> > > > + * freeptr_t represents a SLUB freelist pointer, which might be encoded
> > > > + * and not dereferenceable if CONFIG_SLAB_FREELIST_HARDENED is enabled.
> > > > + */
> > > > +typedef struct { unsigned long v; } freeptr_t;
> > > > +
> > > >  /*
> > > >   * A region containing a mapping of a non-memory backed file under NOMMU
> > > >   * conditions.  These are held in a global tree and are pinned by the VMAs that
> > > > @@ -695,9 +701,7 @@ struct vm_area_struct {
> > > >  			unsigned long vm_start;
> > > >  			unsigned long vm_end;
> > > >  		};
> > > > -#ifdef CONFIG_PER_VMA_LOCK
> > > > -		struct rcu_head vm_rcu;	/* Used for deferred freeing. */
> > > > -#endif
> > > > +		freeptr_t vm_freeptr; /* Pointer used by SLAB_TYPESAFE_BY_RCU */
> > > >  	};
> > > >
> > > >  /*
> > > > diff --git a/include/linux/slab.h b/include/linux/slab.h
> > > > index 10a971c2bde3..681b685b6c4e 100644
> > > > --- a/include/linux/slab.h
> > > > +++ b/include/linux/slab.h
> > > > @@ -234,12 +234,6 @@ enum _slab_flag_bits {
> > > >  #define SLAB_NO_OBJ_EXT		__SLAB_FLAG_UNUSED
> > > >  #endif
> > > >
> > > > -/*
> > > > - * freeptr_t represents a SLUB freelist pointer, which might be encoded
> > > > - * and not dereferenceable if CONFIG_SLAB_FREELIST_HARDENED is enabled.
> > > > - */
> > > > -typedef struct { unsigned long v; } freeptr_t;
> > > > -
> > > >  /*
> > > >   * ZERO_SIZE_PTR will be returned for zero sized kmalloc requests.
> > > >   *
> > > > diff --git a/kernel/fork.c b/kernel/fork.c
> > > > index 9d9275783cf8..770b973a099c 100644
> > > > --- a/kernel/fork.c
> > > > +++ b/kernel/fork.c
> > > > @@ -449,6 +449,41 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
> > > >  	return vma;
> > > >  }
> > > >
> > >
> > > There exists a copy_vma() which copies the vma to a new area in the mm
> > > in rmap.  Naming this vma_copy() is confusing :)
> > >
> > > It might be better to just put this code in the vm_area_dup() or call it
> > > __vm_area_dup(), or __vma_dup() ?
> >
> > Hmm. It's not really duplicating a vma but copying its content (no
> > allocation). How about __vm_area_copy() to indicate it is copying
> > vm_area_struct content?
> >
> Sorry, I missed this.  it's not copying all the content either.
>
> vm_area_init_dup() maybe?

Ah, how about vm_area_init_from(src, dest)?

>
> Considering the scope of the series, I'm not sure I want to have a
> bike shed conversation.. But I also don't want copy_ _copy
> confusion in the future.
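[Editor's note: the scheme discussed above (copy a vma's payload while leaving the new object's vm_refcnt at the detached value 0, so a racing lock_vma_under_rcu() that finds a stale pointer sees refcount 0 and backs off) can be sketched in plain C. This is a minimal userspace illustration, not the kernel code; all names here (struct vma_sketch, vma_copy_payload, vma_tryget) are invented for the example.]

```c
#include <stdatomic.h>

/* Toy stand-in for vm_area_struct; vm_refcnt == 0 means "detached". */
struct vma_sketch {
	unsigned long vm_start;
	unsigned long vm_end;
	atomic_int vm_refcnt;
};

/*
 * Copy everything except vm_refcnt, mirroring the role of the proposed
 * vma_copy()/vm_area_init_from(): the destination stays detached until
 * it is actually inserted into the tree.
 */
static void vma_copy_payload(struct vma_sketch *dst,
			     const struct vma_sketch *src)
{
	dst->vm_start = src->vm_start;
	dst->vm_end = src->vm_end;
	/* deliberately leave dst->vm_refcnt untouched (stays 0 / detached) */
}

/*
 * Reader side: try to take a reference; fail if the object is detached.
 * A reader that raced with reuse sees refcount 0 and must fall back
 * (in the kernel, to taking the mmap lock).
 */
static int vma_tryget(struct vma_sketch *vma)
{
	int old = atomic_load(&vma->vm_refcnt);

	while (old > 0) {
		/* CAS loop: only increment from a nonzero (attached) state. */
		if (atomic_compare_exchange_weak(&vma->vm_refcnt, &old, old + 1))
			return 1;
	}
	return 0;
}
```

Usage: after vma_copy_payload(), vma_tryget() on the destination fails until some later step explicitly marks it attached, which is exactly the property the patch wants vm_area_dup() to preserve.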