From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 12 Feb 2025 20:06:08 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 0/6] rust: page: Support borrowing `struct page` and
 physaddr conversion
To: Asahi Lina, Zi Yan
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
 Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross, Jann Horn,
 Matthew Wilcox, Paolo Bonzini, Danilo Krummrich, Wedson Almeida Filho,
 Valentin Obst, Andrew Morton, linux-mm@kvack.org, airlied@redhat.com,
 Abdiel Janulgue, rust-for-linux@vger.kernel.org,
 linux-kernel@vger.kernel.org, asahi@lists.linux.dev, Oscar Salvador,
 Muchun Song
References: <20250202-rust-page-v1-0-e3170d7fe55e@asahilina.net>
 <41ca3445-80cd-43c1-8f9e-634c195c9187@asahilina.net>
 <37A0729B-A711-4D45-B9F0-328FDB9ADD28@nvidia.com>
 <0e19e1c3-293b-4740-93f3-2c410893288b@redhat.com>
 <82047858-480a-45e3-b826-3a46fbebe842@asahilina.net>
 <1e9ae833-4293-4e48-83b2-c0af36cb3fdc@asahilina.net>
 <026c1a0c-e53a-4a5e-92da-6e4f18ce0fee@redhat.com>
 <6bcd3315-a0f9-463c-ab97-a43736f9b4f4@redhat.com>
 <2a513c3e-818c-4040-b3d3-7835861bab4f@asahilina.net>
 <0dffaa7d-340f-4ce1-9a2e-54cfd9079266@redhat.com>
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 06.02.25 20:27, Asahi Lina wrote:
>
>
> On 2/7/25 4:18 AM, Asahi Lina wrote:
>>
>>
>> On 2/7/25 2:58 AM, David Hildenbrand wrote:
>>> On 04.02.25 22:06, Asahi Lina wrote:
>>>>
>>>>
>>>> On 2/5/25 5:10 AM, David Hildenbrand wrote:
>>>>> On 04.02.25 18:59, Asahi Lina wrote:
>>>>>> On 2/4/25 11:38 PM, David Hildenbrand wrote:
>>>>>>>>>> If the answer is "no" then that's fine. It's still an unsafe
>>>>>>>>>> function and we need to document in the safety section that it
>>>>>>>>>> should only be used for memory that is either known to be
>>>>>>>>>> allocated and pinned and will not be freed while the
>>>>>>>>>> `struct page` is borrowed, or memory that is reserved and not
>>>>>>>>>> owned by the buddy allocator, so in practice correct use would
>>>>>>>>>> not be racy with memory hot-remove anyway.
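
To make that expectation concrete, I'd imagine the safety contract
reading roughly like the sketch below. All of the names here
(`PhysAddr`, `Page`, `borrow_page`, the `pfn_valid()`/`pfn_to_page()`
bindings) are hypothetical stand-ins, not the API from this series:

/// Returns a borrowed `struct page` for the given physical address.
///
/// # Safety
///
/// For the whole lifetime 'a, `paddr` must point to memory that is
/// either:
/// - allocated and pinned, so it can neither be freed nor migrated
///   while the `struct page` is borrowed, or
/// - reserved and not owned by the buddy allocator, so it is not
///   subject to memory hot-remove.
pub unsafe fn borrow_page<'a>(paddr: PhysAddr) -> Option<&'a Page> {
    // Assumed binding: reject addresses that have no `struct page`
    // backing this pfn at all.
    let pfn = paddr >> PAGE_SHIFT;
    if !pfn_valid(pfn) {
        return None;
    }
    // SAFETY: the caller guarantees (see the contract above) that the
    // page stays alive and in place for the duration of 'a.
    Some(unsafe { &*(pfn_to_page(pfn) as *const Page) })
}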
>>>>>>>>>>
>>>>>>>>>> This is already the case for the drm/asahi use case, where the
>>>>>>>>>> pfns looked up will only ever be one of:
>>>>>>>>>>
>>>>>>>>>> - GEM objects that are mapped to the GPU and whose physical
>>>>>>>>>> pages are therefore pinned (and the VM is locked while this
>>>>>>>>>> happens so the objects cannot become unpinned out from under
>>>>>>>>>> the running code),
>>>>>>>>>
>>>>>>>>> How exactly are these pages pinned/obtained?
>>>>>>>>
>>>>>>>> Under the hood it's shmem. For pinning, it winds up at
>>>>>>>> `drm_gem_get_pages()`, which I think does a
>>>>>>>> `shmem_read_folio_gfp()` on a mapping set as unevictable.
>>>>>>>
>>>>>>> Thanks. So we grab another folio reference via
>>>>>>> shmem_read_folio_gfp()->shmem_get_folio_gfp().
>>>>>>>
>>>>>>> Hm, I wonder if we might end up holding folios residing in
>>>>>>> ZONE_MOVABLE/MIGRATE_CMA longer than we should.
>>>>>>>
>>>>>>> Compared to memfd_pin_folios(), which simulates FOLL_LONGTERM and
>>>>>>> makes sure to migrate pages out of ZONE_MOVABLE/MIGRATE_CMA.
>>>>>>>
>>>>>>> But that's a different discussion, just pointing it out, maybe
>>>>>>> I'm missing something :)
>>>>>>
>>>>>> I think this is a little over my head. Though I only just realized
>>>>>> that we seem to be keeping the GEM objects pinned forever, even
>>>>>> after unmap, in the drm-shmem core API (I see no drm-shmem entry
>>>>>> point that would allow the sgt to be freed and its corresponding
>>>>>> pages ref to be dropped, other than a purge of purgeable objects
>>>>>> or final destruction of the object). I'll poke around since this
>>>>>> feels wrong; I thought we were supposed to be able to have
>>>>>> shrinker support for swapping out whole GPU VMs in the modern GPU
>>>>>> MM model, but I guess there's no implementation of that for
>>>>>> gem-shmem drivers yet...?
>>>>>
>>>>> I recall that shrinker as well, ... or at least a discussion around
>>>>> it.
>>>>>
>>>>> [...]
>>>>>
>>>>>>>
>>>>>>> If it's only for crash dumps etc. that might even be opt-in, it
>>>>>>> makes the whole thing a lot less scary. Maybe this could be
>>>>>>> opt-in somewhere, to "unlock" this interface? Just an idea.
>>>>>>
>>>>>> Just to make sure we're on the same page, I don't think there's
>>>>>> anything to unlock in the Rust abstraction side (this series). At
>>>>>> the end of the day, if nothing else, the unchecked interface
>>>>>> (which the regular non-crash page table management code uses for
>>>>>> performance) will let you use any pfn you want; it's up to
>>>>>> documentation and human review to specify how it should be used
>>>>>> by drivers. What Rust gives us here is the mandatory `unsafe {}`,
>>>>>> so any attempts to use this API will necessarily stick out during
>>>>>> review as potentially dangerous code that needs extra scrutiny.
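
As a purely illustrative example of what reviewers would then see at a
call site (the function and error names are made up, not this series'
API):

// SAFETY: `paddr` comes from a GEM object that is mapped to the GPU
// and therefore pinned, and the VM is locked, so the backing page
// cannot be freed or unpinned while we hold the borrow.
let page = unsafe { Page::borrow_phys(paddr) }.ok_or(EIO)?;

The `unsafe` block cannot be omitted, so the justification comment and
the reviewer's attention land exactly where the dangerous conversion
happens.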
>>>>>>
>>>>>> For the client driver itself, I could gate the devcoredump stuff
>>>>>> behind a module parameter or something... but I don't think it's
>>>>>> really worth it. We don't have a way to reboot the firmware or
>>>>>> recover from this condition (platform limitations), so end users
>>>>>> are stuck rebooting to get back a usable machine anyway. If
>>>>>> something goes wrong in the crashdump code and the machine oopses
>>>>>> or locks up worse... it doesn't really make much of a difference
>>>>>> for normal end users. I don't think this will ever really happen
>>>>>> given the constraints I described, but if somehow it does (some
>>>>>> other bug somewhere?), well... the machine was already in an
>>>>>> unrecoverable state anyway.
>>>>>>
>>>>>> It would be nice to have userspace tooling deployed by default
>>>>>> that saves off the devcoredump somewhere, so we can have a chance
>>>>>> at debugging hard-to-hit firmware crashes... if it's opt-in, it
>>>>>> would only really be useful for developers and CI machines.
>>>>>
>>>>> Is this something that possibly kdump can save or analyze? Because
>>>>> that is our default "oops, kernel crashed, let's dump the old
>>>>> content so we can analyze it" mechanism on production systems.
>>>>
>>>> kdump does not work on Apple ARM systems because kexec is broken
>>>> and cannot be fully fixed, due to multiple platform/firmware
>>>> limitations. A very limited version of kexec might work well enough
>>>> for kdump, but I don't think anyone has looked into making that
>>>> work yet...
>>>>
>>>>> but ... I am not familiar with devcoredump. So I don't know
>>>>> when/how it runs, and if the source system is still alive (and
>>>>> remains alive -- in contrast to a kernel crash).
>>>>
>>>> Devcoredump just makes the dump available via /sys so it can be
>>>> collected by the user. The system is still alive, the GPU is just
>>>> dead and all future GPU job submissions fail. You can still SSH in
>>>> or (at least in theory, if enough moving parts are graceful about
>>>> it) VT-switch to a TTY. The display controller is not part of the
>>>> GPU; it is separate hardware.
>>>
>>>
>>> Thanks for all the details (and sorry for the delay, I'm on PTO
>>> until Monday ... :)
>>>
>>> (regarding the other mail) Adding that stuff to rust just so we have
>>> a devcoredump that ideally wouldn't exist is a bit unfortunate.
>>>
>>> So I'm curious: we do have /proc/kcore, where we do all of the
>>> required filtering, only allowing reads of memory that is online,
>>> not hwpoisoned, etc.
>>>
>>> makedumpfile already supports /proc/kcore.
>>>
>>> Would it be possible to avoid devcoredump completely, either by
>>> dumping /proc/kcore directly or by having a user-space script that
>>> walks the page tables to dump the content purely based on
>>> /proc/kcore?
>>>
>>> If relevant memory ranges are inaccessible from /proc/kcore, we
>>> could look into exposing them.
>>
>> I'm not sure that's a good idea... the dump code runs when the GPU
>> crashes, and makes copies of all the memory pages into newly
>> allocated pages (this is around 16MB for a typical dump, and if
>> allocation fails we just bail and clean up). Then userspace can read
>> the coredump at its leisure. AIUI, this is exactly the intended use
>> case of devcoredump. It also means that anyone can grab a core dump
>> with just a `cp`, without needing any bespoke tools.
>>
>> After the snapshot is taken, the kernel will complete (fail) all GPU
>> jobs, which means much of the shared memory will be freed and some
>> structures will change contents. If we defer the coredump to
>> userspace, then it would not be able to capture the state of all
>> relevant memory exactly at the crash time, which could be very
>> confusing.
>>
>> In theory I could change the allocators to not free or touch anything
>> after a crash, and add guards to any mutations in the driver to avoid
>> any changes after a crash... but that feels a lot more brittle and
>> error-prone than just taking the core dump at the right time.
>>
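
FWIW, the copy-at-crash-time step you describe is conceptually tiny. A
userspace-flavoured Rust sketch of that flow (std `Vec` rather than
kernel allocators, and `snapshot_regions` is a made-up name), just to
illustrate the bail-if-allocation-fails behaviour:

/// Copies each memory region into freshly allocated buffers.
/// Returns None if any allocation fails, so the caller either gets a
/// complete dump taken at crash time or no dump at all.
fn snapshot_regions(regions: &[&[u8]]) -> Option<Vec<Vec<u8>>> {
    let mut dump = Vec::new();
    dump.try_reserve_exact(regions.len()).ok()?;
    for region in regions {
        let mut copy = Vec::new();
        // Fallible allocation: bail out and clean up (drop whatever
        // was copied so far) rather than emit a partial dump.
        copy.try_reserve_exact(region.len()).ok()?;
        copy.extend_from_slice(region);
        dump.push(copy);
    }
    Some(dump)
}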
>
> If the arbitrary page lookups are that big a problem, I think I would
> rather just memremap all the bootloader-mapped firmware areas, hook
> into all the allocators to provide a backdoor into the backing
> objects, and just piece everything together by mapping page addresses
> to those. It would be a bunch of extra code and scaffolding in the
> driver, and require device tree and bootloader changes to link up the
> GPU node to its firmware nodes, but it's still better than trying to
> do it all from userspace IMO...

Yes. Ideally, we'd not open up the can of worms of arbitrary
pfn -> page conversions (including the pfn_to_online_page() etc.
nastiness) if it can be avoided in rust.

Once there is an interface to do it, it's likely that new users will
pop up that are not just "create a simple dump, I know what I am doing
and only want sanity checks".

So it would be good if we could prevent new pfn walkers in rust
somehow; they are already a pain to maintain+fix in C (including the
upcoming, more severe folio/memdesc work).

But if it's too hard to avoid, then it also doesn't make sense to
overcomplicate things to work around it.

-- 
Cheers,

David / dhildenb