From: Mateusz Guzik <mjguzik@gmail.com>
Date: Wed, 15 Jan 2025 04:59:56 +0100
Subject: Re: [PATCH v9 16/17] mm: make vma cache SLAB_TYPESAFE_BY_RCU
To: Suren Baghdasaryan
Cc: Wei Yang, akpm@linux-foundation.org, peterz@infradead.org, willy@infradead.org,
 liam.howlett@oracle.com, lorenzo.stoakes@oracle.com, david.laight.linux@gmail.com,
 mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, oliver.sang@intel.com,
 mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com, oleg@redhat.com,
 dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org, dhowells@redhat.com,
 hdanton@sina.com, hughd@google.com, lokeshgidra@google.com, minchan@google.com,
 jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com,
 pasha.tatashin@soleen.com, klarasmodin@gmail.com, corbet@lwn.net,
 linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 kernel-team@android.com
References: <20250111042604.3230628-1-surenb@google.com>
 <20250111042604.3230628-17-surenb@google.com>
 <20250115022703.hqbqdqawvqgrfgxb@master>

On Wed, Jan 15, 2025 at 4:15 AM Suren Baghdasaryan wrote:
>
> On Tue, Jan 14, 2025 at 6:27 PM Wei Yang wrote:
> >
> > On Fri, Jan 10, 2025 at 08:26:03PM -0800, Suren Baghdasaryan wrote:
> >
> > >diff --git a/kernel/fork.c b/kernel/fork.c
> > >index 9d9275783cf8..151b40627c14 100644
> > >--- a/kernel/fork.c
> > >+++ b/kernel/fork.c
> > >@@ -449,6 +449,42 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
> > > 	return vma;
> > > }
> > >
> > >+static void vm_area_init_from(const struct vm_area_struct *src,
> > >+			      struct vm_area_struct *dest)
> > >+{
[snip]
> > Would this be difficult to maintain? We should make sure not to miss or
> > overwrite anything.
>
> Yeah, it is less maintainable than a simple memcpy() but I did not
> find a better alternative. I added a warning above the struct
> vm_area_struct definition to update this function every time we change
> that structure. Not sure if there is anything else I can do to help
> with this.
>

At bare minimum this could have a BUILD_BUG_ON below the func for the
known-covered size. But it would have to be conditional on arch and some
config macros, somewhat nasty.

KASAN or KMSAN (I don't remember which) can be used to find missing
copies. To that end the target struct could be marked as fully
uninitialized before the copy and have a full read performed from it
afterwards -- guaranteed to trip over any field not explicitly covered
(including padding, though). I don't know what magic macros can be used
to do this in Linux, I am saying the support to get this result is
there. I understand most people don't use this, but it should still be
enough to trip over buggy patches in -next.
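To make the BUILD_BUG_ON part concrete, here is a rough, untested sketch
of what could sit at the bottom of vm_area_init_from() -- the size
constant is made up on purpose and the config guard is only an example,
which is exactly where the per-arch/per-config nastiness comes in:

	static void vm_area_init_from(const struct vm_area_struct *src,
				      struct vm_area_struct *dest)
	{
		/* ... the explicit field-by-field copies from the patch ... */

		/*
		 * Hypothetical illustration, not from the patch: if struct
		 * vm_area_struct grows, whoever grew it is forced back here
		 * to audit the copies above.  The constant is bogus on
		 * purpose; the real value differs per arch and per config
		 * (e.g. CONFIG_PER_VMA_LOCK, CONFIG_NUMA_BALANCING), hence
		 * "somewhat nasty".
		 */
	#ifdef CONFIG_X86_64
		BUILD_BUG_ON(sizeof(struct vm_area_struct) != 184);
	#endif
	}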
Finally, the struct could have macros delimiting copy/non-copy sections
(with the macros expanding to field names), for illustrative purposes:

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 332cee285662..25063a3970c8 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -677,6 +677,7 @@ struct vma_numab_state {
  * getting a stable reference.
  */
 struct vm_area_struct {
+#define vma_start_copy0 vm_rcu
 	/* The first cache line has the info for VMA tree walking. */
 
 	union {
@@ -731,6 +732,7 @@ struct vm_area_struct {
 	/* Unstable RCU readers are allowed to read this. */
 	struct vma_lock *vm_lock;
 #endif
+#define vma_end_copy1 vm_lock
 
 	/*
 	 * For areas with an address space and backing store,

Then you would do everything with a series of calls (a rough sketch of
what that could look like follows after the sig). However, the
__randomize_layout annotation whacks that idea (maybe it can be
retired?).

-- 
Mateusz Guzik
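For illustration only, a minimal, untested sketch of what such a "series
of calls" might look like, assuming the vma_start_copy0/vma_end_copy1
markers from the diff above and a hypothetical vma_copy_span() helper --
none of this is from the actual patch, and the chosen span is arbitrary;
it only shows the mechanism:

	/*
	 * Hypothetical sketch, not from the patch: copy one contiguous span
	 * of struct vm_area_struct delimited by two member names.
	 * offsetofend() comes from <linux/stddef.h>.  This only works with a
	 * fixed field layout, which is exactly what __randomize_layout takes
	 * away.
	 */
	#define vma_copy_span(dest, src, first, last)				      \
		memcpy((char *)(dest) + offsetof(struct vm_area_struct, first),      \
		       (const char *)(src) + offsetof(struct vm_area_struct, first), \
		       offsetofend(struct vm_area_struct, last) -		      \
				offsetof(struct vm_area_struct, first))

	static void vm_area_init_from(const struct vm_area_struct *src,
				      struct vm_area_struct *dest)
	{
		/*
		 * One call per copyable section, delimited by the markers in
		 * the struct; fields outside the spans keep their explicit
		 * handling.
		 */
		vma_copy_span(dest, src, vma_start_copy0, vma_end_copy1);
	}

Whether that particular span is the right one is beside the point; the
markers just pin the boundaries in the struct definition so the copy code
does not have to be audited field by field on every change.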