From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 28 Apr 2025 12:08:22 -0400
From: Peter Xu <peterx@redhat.com>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org,
    intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
    linux-trace-kernel@vger.kernel.org, Dave Hansen, Andy Lutomirski,
    Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    "H. Peter Anvin", Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
    Tvrtko Ursulin, David Airlie, Simona Vetter, Andrew Morton,
    Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers,
Howlett" , Lorenzo Stoakes , Vlastimil Babka , Jann Horn , Pedro Falcato Subject: Re: [PATCH v1 05/11] mm: convert VM_PFNMAP tracking to pfnmap_track() + pfnmap_untrack() Message-ID: References: <20250425081715.1341199-1-david@redhat.com> <20250425081715.1341199-6-david@redhat.com> MIME-Version: 1.0 In-Reply-To: X-Mimecast-Spam-Score: 0 X-Mimecast-MFC-PROC-ID: aF0mF-GKEBKaNHJG0GD1XZbLqRifI-vYEU8tJgPkhrY_1745856508 X-Mimecast-Originator: redhat.com Content-Type: text/plain; charset=utf-8 Content-Disposition: inline X-Rspamd-Queue-Id: 1E1BE1C0010 X-Stat-Signature: ceh17c1xwt5pdixksprdcfg5hqpwz9tb X-Rspam-User: X-Rspamd-Server: rspam08 X-HE-Tag: 1745856510-298588 X-HE-Meta: U2FsdGVkX1+lQd2pwVsodvTXoG/BOEiM1yves2D6NQQwCGYxyqjac/f5s9/E+5boM55Q1IvxTjRClFFXGa6m4fXrC2y4TSQ3x/VIOHzKU7FBP4E8zaHHuh3IHS860HMW5eX0TJL7Vn+FJuAyTM+OQ/P0d3MFCfGuu0t0B2eDFsq1ovRYSEbVmGL9PMeb4SxkBq9H5ze9yTakLT7bCArhhOmP0iZ77t5A1Z6ea3LBVpUUhPS5OlAoIYE5kMI7zuGPClrRzCBnGbbGTng4sDbiGPiKsGEH6rf7ntrbq1RZarihnoliQSyzxSa3oWkzfXrTdy5X/HqnZPiVKraIc/mnilIFwtjmfJ33kHcnQpopZarAOC1S+56cbJT25wMZVuOPg55MEunZgix2BRVMelQX4nmBTzfQnID+OCnkLZ1gAnThYe3/VCaHxBoo3HwP9PsEuQwyQzo8lspSxZbIGAU+4hY7e6lx7Xeaok2DbLOmW8FD7Ki2Q9zaOcNOl8nYFt57p2qEZAg7c7Nm7VJa5KITRKpV1rOzY2OD2tYC/ikMuZ5VJhr+tgIjQpbJGyVhaXQARP5GS0JCS5SepC+tOUdextJuof7k76x29T5BB6mBEa9zwaUYvYtFmStbaM5dZY1bKeqMi2iM7P4H6Z84fEl1K9DZDS4wEixAP2DCRFskHQIhU3KFM1R/JIps6mKU3Qs/xv+RtKos07De0omZk6P6C9VguKxKPMjaEKvvPJ2DQTanvzimPzzW4W8/AfthvYlrKrM8caC45yhtJM/0rrWd9zD/YJOyHix/g7iG+yj50jJHHHvi/kt4pon8/51jpHhYVXOWs5UN/t9L8+MchntvjZtVcGubcU55Bmi3eGTChzKMu+C1SExEJ6laXD0stA8hT3jQw0lAgpfslHdLONwfowcdfNEXDClUyM1bzPUWXcN4VLc95saiYC9zVKj4kMT2KDxjp/K0K9TnLhRX9q0 ecRjYYF9 JzvMGkEIS7zFHYWxd7fOlMJTRtUKWBvQiURCJ1yradfL4X5FZWGKNrE1ePE/FKQS5iCjUFysLRus0nMPHh+ZFC2av1OD34G9oU32dVlmWxRdZNdHqyMIoE17mHJQMtp1lIM7z2zQY6IpxjK6ThePFM6A/UI8zt48fwWtw8QY1csFngaTc1jM/ZaYZkXg1jBAtOvWMMizDVzUGchP8Vp1NBsFccnPW21EkY6NUGWsDFaR1m0XRSg74CQfWZmyb9VkmTsvvLw6pJRjwl8ll3Hx6ibhGA6mKE8Qtj0XzKvzmmNR81g8zwbWjzd/tzM0EoKVXUmYK2+SxalszHeHSoLxKbchN4v/2psxNnwypvy9t99cnkFoMeyQC0wTO/N8Uoad4LSlxBYZswJOQR3Dff/wUUAKwliGgKVqdOdQD X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: On Fri, Apr 25, 2025 at 10:36:55PM +0200, David Hildenbrand wrote: > On 25.04.25 22:23, Peter Xu wrote: > > On Fri, Apr 25, 2025 at 10:17:09AM +0200, David Hildenbrand wrote: > > > Let's use our new interface. In remap_pfn_range(), we'll now decide > > > whether we have to track (full VMA covered) or only sanitize the pgprot > > > (partial VMA covered). > > > > > > Remember what we have to untrack by linking it from the VMA. When > > > duplicating VMAs (e.g., splitting, mremap, fork), we'll handle it similar > > > to anon VMA names, and use a kref to share the tracking. > > > > > > Once the last VMA un-refs our tracking data, we'll do the untracking, > > > which simplifies things a lot and should sort our various issues we saw > > > recently, for example, when partially unmapping/zapping a tracked VMA. > > > > > > This change implies that we'll keep tracking the original PFN range even > > > after splitting + partially unmapping it: not too bad, because it was > > > not working reliably before. The only thing that kind-of worked before > > > was shrinking such a mapping using mremap(): we managed to adjust the > > > reservation in a hacky way, now we won't adjust the reservation but > > > leave it around until all involved VMAs are gone. 
> > > > > > Signed-off-by: David Hildenbrand > > > --- > > > include/linux/mm_inline.h | 2 + > > > include/linux/mm_types.h | 11 ++++++ > > > kernel/fork.c | 54 ++++++++++++++++++++++++-- > > > mm/memory.c | 81 +++++++++++++++++++++++++++++++-------- > > > mm/mremap.c | 4 -- > > > 5 files changed, 128 insertions(+), 24 deletions(-) > > > > > > diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h > > > index f9157a0c42a5c..89b518ff097e6 100644 > > > --- a/include/linux/mm_inline.h > > > +++ b/include/linux/mm_inline.h > > > @@ -447,6 +447,8 @@ static inline bool anon_vma_name_eq(struct anon_vma_name *anon_name1, > > > #endif /* CONFIG_ANON_VMA_NAME */ > > > +void pfnmap_track_ctx_release(struct kref *ref); > > > + > > > static inline void init_tlb_flush_pending(struct mm_struct *mm) > > > { > > > atomic_set(&mm->tlb_flush_pending, 0); > > > diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h > > > index 56d07edd01f91..91124761cfda8 100644 > > > --- a/include/linux/mm_types.h > > > +++ b/include/linux/mm_types.h > > > @@ -764,6 +764,14 @@ struct vma_numab_state { > > > int prev_scan_seq; > > > }; > > > +#ifdef __HAVE_PFNMAP_TRACKING > > > +struct pfnmap_track_ctx { > > > + struct kref kref; > > > + unsigned long pfn; > > > + unsigned long size; > > > +}; > > > +#endif > > > + > > > /* > > > * This struct describes a virtual memory area. There is one of these > > > * per VM-area/task. A VM area is any part of the process virtual memory > > > @@ -877,6 +885,9 @@ struct vm_area_struct { > > > struct anon_vma_name *anon_name; > > > #endif > > > struct vm_userfaultfd_ctx vm_userfaultfd_ctx; > > > +#ifdef __HAVE_PFNMAP_TRACKING > > > + struct pfnmap_track_ctx *pfnmap_track_ctx; > > > +#endif > > > > So this was originally the small concern (or is it small?) that this will > > grow every vma on x86, am I right? > > Yeah, and last time I looked into this, it would have grown it such that it would > require a bigger slab. Right now: Probably due to what config you have. E.g., when I'm looking mine it's much bigger and already consuming 256B, but it's because I enabled more things (userfaultfd, lockdep, etc.). 
> > Before this change: > > struct vm_area_struct { > union { > struct { > long unsigned int vm_start; /* 0 8 */ > long unsigned int vm_end; /* 8 8 */ > }; /* 0 16 */ > freeptr_t vm_freeptr; /* 0 8 */ > }; /* 0 16 */ > struct mm_struct * vm_mm; /* 16 8 */ > pgprot_t vm_page_prot; /* 24 8 */ > union { > const vm_flags_t vm_flags; /* 32 8 */ > vm_flags_t __vm_flags; /* 32 8 */ > }; /* 32 8 */ > unsigned int vm_lock_seq; /* 40 4 */ > > /* XXX 4 bytes hole, try to pack */ > > struct list_head anon_vma_chain; /* 48 16 */ > /* --- cacheline 1 boundary (64 bytes) --- */ > struct anon_vma * anon_vma; /* 64 8 */ > const struct vm_operations_struct * vm_ops; /* 72 8 */ > long unsigned int vm_pgoff; /* 80 8 */ > struct file * vm_file; /* 88 8 */ > void * vm_private_data; /* 96 8 */ > atomic_long_t swap_readahead_info; /* 104 8 */ > struct mempolicy * vm_policy; /* 112 8 */ > struct vma_numab_state * numab_state; /* 120 8 */ > /* --- cacheline 2 boundary (128 bytes) --- */ > refcount_t vm_refcnt __attribute__((__aligned__(64))); /* 128 4 */ > > /* XXX 4 bytes hole, try to pack */ > > struct { > struct rb_node rb __attribute__((__aligned__(8))); /* 136 24 */ > long unsigned int rb_subtree_last; /* 160 8 */ > } __attribute__((__aligned__(8))) shared __attribute__((__aligned__(8))); /* 136 32 */ > struct anon_vma_name * anon_name; /* 168 8 */ > struct vm_userfaultfd_ctx vm_userfaultfd_ctx; /* 176 0 */ > > /* size: 192, cachelines: 3, members: 18 */ > /* sum members: 168, holes: 2, sum holes: 8 */ > /* padding: 16 */ > /* forced alignments: 2, forced holes: 1, sum forced holes: 4 */ > } __attribute__((__aligned__(64))); > > After this change: > > struct vm_area_struct { > union { > struct { > long unsigned int vm_start; /* 0 8 */ > long unsigned int vm_end; /* 8 8 */ > }; /* 0 16 */ > freeptr_t vm_freeptr; /* 0 8 */ > }; /* 0 16 */ > struct mm_struct * vm_mm; /* 16 8 */ > pgprot_t vm_page_prot; /* 24 8 */ > union { > const vm_flags_t vm_flags; /* 32 8 */ > vm_flags_t __vm_flags; /* 32 8 */ > }; /* 32 8 */ > unsigned int vm_lock_seq; /* 40 4 */ > > /* XXX 4 bytes hole, try to pack */ > > struct list_head anon_vma_chain; /* 48 16 */ > /* --- cacheline 1 boundary (64 bytes) --- */ > struct anon_vma * anon_vma; /* 64 8 */ > const struct vm_operations_struct * vm_ops; /* 72 8 */ > long unsigned int vm_pgoff; /* 80 8 */ > struct file * vm_file; /* 88 8 */ > void * vm_private_data; /* 96 8 */ > atomic_long_t swap_readahead_info; /* 104 8 */ > struct mempolicy * vm_policy; /* 112 8 */ > struct vma_numab_state * numab_state; /* 120 8 */ > /* --- cacheline 2 boundary (128 bytes) --- */ > refcount_t vm_refcnt __attribute__((__aligned__(64))); /* 128 4 */ > > /* XXX 4 bytes hole, try to pack */ > > struct { > struct rb_node rb __attribute__((__aligned__(8))); /* 136 24 */ > long unsigned int rb_subtree_last; /* 160 8 */ > } __attribute__((__aligned__(8))) shared __attribute__((__aligned__(8))); /* 136 32 */ > struct anon_vma_name * anon_name; /* 168 8 */ > struct vm_userfaultfd_ctx vm_userfaultfd_ctx; /* 176 0 */ > struct pfnmap_track_ctx * pfnmap_track_ctx; /* 176 8 */ > > /* size: 192, cachelines: 3, members: 19 */ > /* sum members: 176, holes: 2, sum holes: 8 */ > /* padding: 8 */ > /* forced alignments: 2, forced holes: 1, sum forced holes: 4 */ > } __attribute__((__aligned__(64))); > > Observe that we allocate 192 bytes with or without pfnmap_track_ctx. (IIRC, > slab sizes are ... 128, 192, 256, 512, ...) True. 
I just double checked: vm_area_cachep has SLAB_HWCACHE_ALIGN set, so I
think it does indeed work like that, on x86_64 at least.  So the new field
at least isn't an immediate concern.

Thanks,

-- 
Peter Xu