Subject: Re: [RFC PATCH v1] tools/mm: Add thpmaps script to dump THP usage info
From: David Hildenbrand <david@redhat.com>
Date: Wed, 10 Jan 2024 13:12:34 +0100
To: Barry Song <21cnbao@gmail.com>, Ryan Roberts
Cc: John Hubbard, Andrew Morton, Zenghui Yu, Matthew Wilcox, Kefeng Wang, Zi Yan, Alistair Popple, linux-mm@kvack.org
References: <20240102153828.1002295-1-ryan.roberts@arm.com> <4e7445a0-acc9-487f-999f-a2b6d03d265e@nvidia.com> <3bd5e4a3-9f67-4483-9a0e-9abb5eb783cd@arm.com> <94ebe62b-5f55-4be9-b464-4105b4692496@arm.com> <68d5ce7e-6587-47c6-bd0f-988adf5d92a4@arm.com> <974a2670-7fa9-425e-921e-8d54a596e6cf@arm.com> <6c77f143-9c2c-4d17-9a2a-d69d9adf2eea@arm.com>

On 10.01.24 13:05, Barry Song wrote:
> On Wed, Jan 10, 2024 at 7:59 PM Ryan Roberts wrote:
>>
>> On 10/01/2024 11:38, Barry Song wrote:
>>> On Wed, Jan 10, 2024 at 7:21 PM Ryan Roberts wrote:
>>>>
>>>> On 10/01/2024 11:00, David Hildenbrand wrote:
>>>>> On 10.01.24 11:55, Ryan Roberts wrote:
>>>>>> On 10/01/2024 10:42, David Hildenbrand wrote:
>>>>>>> On 10.01.24 11:38, Ryan Roberts wrote:
>>>>>>>> On 10/01/2024 10:30, Barry Song wrote:
>>>>>>>>> On Wed, Jan 10, 2024 at 6:23 PM Ryan Roberts wrote:
>>>>>>>>>>
>>>>>>>>>> On 10/01/2024 09:09, Barry Song wrote:
>>>>>>>>>>> On Wed, Jan 10, 2024 at 4:58 PM Ryan Roberts wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> On 10/01/2024 08:02, Barry Song wrote:
>>>>>>>>>>>>> On Wed, Jan 10, 2024 at 12:16 PM John Hubbard wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On 1/9/24 19:51, Barry Song wrote:
>>>>>>>>>>>>>>> On Wed, Jan 10, 2024 at 11:35 AM John Hubbard wrote:
>>>>>>>>>>>>>> ...
>>>>>>>>>>>>>>>> Hi Ryan,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> One thing that immediately came up during some recent testing of mTHP on arm64: the pid requirement is sometimes a little awkward. I'm running tests on a machine at a time for now, inside various containers and such, and it would be nice if there were an easy way to get some numbers for the mTHPs across the whole machine.
>>>>>>>>>>>>
>>>>>>>>>>>> Just to confirm, you're expecting these "global" stats to be truly global and not per-container? (asking because you explicitly mentioned being in a container). If you want per-container, then you can probably just create the container in a cgroup?
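(Purely as an illustration of the per-container route mentioned above: the member processes of a container's cgroup can be enumerated from cgroupfs and a per-process scan aggregated over them. A minimal sketch, assuming cgroup v2 mounted at /sys/fs/cgroup; scan_pid() is a hypothetical placeholder for whatever per-process collection is used, and this is not necessarily how the script's --cgroup option is implemented:)

from pathlib import Path

def cgroup_pids(cgroup):
    """PIDs in the given cgroup and all of its descendants (cgroup v2)."""
    root = Path('/sys/fs/cgroup') / cgroup.lstrip('/')
    pids = set()
    for procs in root.rglob('cgroup.procs'):
        pids.update(int(p) for p in procs.read_text().split())
    return sorted(pids)

def scan_cgroup(cgroup, scan_pid):
    # scan_pid(pid) is hypothetical: it stands in for the per-process THP scan.
    return [scan_pid(pid) for pid in cgroup_pids(cgroup)]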
>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I'm not sure if that changes anything about thpmaps here. Probably this is fine as-is. But I wanted to give some initial reactions from just some quick runs: the global state would be convenient.
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks for taking this for a spin! Appreciate the feedback.
>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> +1. But this seems to be impossible by scanning pagemap? So may we add this statistics information in the kernel, just like /proc/meminfo or a separate /proc/mthp_info?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Yes. From my perspective, it looks like the global stats are more useful initially, and the more detailed per-pid or per-cgroup stats are the next level of investigation. So it feels odd to start with the more detailed stats.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Probably because this can be done without modification of the kernel.
>>>>>>>>>>>>
>>>>>>>>>>>> Yes indeed, as John said in an earlier thread, my previous attempts to add stats directly in the kernel got pushback; DavidH was concerned that we don't really know exactly how to account mTHPs yet (whole/partial/aligned/unaligned/per-size/etc) so didn't want to end up adding the wrong ABI and having to maintain it forever. There has also been some pushback regarding adding more values to multi-value files in sysfs, so David was suggesting coming up with a whole new scheme at some point (I know /proc/meminfo isn't sysfs, but the equivalent files for NUMA nodes and cgroups do live in sysfs).
>>>>>>>>>>>>
>>>>>>>>>>>> Anyway, this script was my attempt to 1) provide a short-term solution to the "we need some stats" request and 2) provide a context in which to explore what the right stats are - this script can evolve without the ABI problem.
>>>>>>>>>>>>
>>>>>>>>>>>>> The detailed per-pid or per-cgroup info is still quite useful in my case, in which we set mTHP enabled/disabled and allowed sizes according to vma types, e.g. libc_malloc, java heaps etc.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Different vma types can have different anon_name. So I can use the detailed info to find out if specific VMAs have gotten mTHP properly and how many they have gotten.
>>>>>>>>>>>>>
>>>>>>>>>>>>>> However, Ryan did clearly say, above, "In future we may wish to introduce stats directly into the kernel (e.g. smaps or similar)". And earlier he ran into some pushback on trying to set up /proc or /sys values because this is still such an early feature.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I wonder if we could put the global stats in debugfs for now? That's specifically supposed to be a "we promise *not* to keep this ABI stable" location.
>>>>>>>>>>>>
>>>>>>>>>>>> Now that I think about it, I wonder if we can add a --global mode to the script (or just infer global when neither --pid nor --cgroup are provided). I think I should be able to determine all the physical memory ranges from /proc/iomem, then grab all the info we need from /proc/kpageflags. We should then be able to process it all in much the same way as for --pid/--cgroup and provide the same stats, but it will apply globally. What do you think?
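(For illustration, a rough sketch of the kind of system-wide scan described above: walk the "System RAM" ranges from /proc/iomem and read the per-PFN flags from /proc/kpageflags. The flag bit positions follow include/uapi/linux/kernel-page-flags.h; 4K base pages and root privileges are assumed, and the helper names are made up. As discussed below, this only reveals which folio sizes are allocated, not how they are mapped:)

import struct
from collections import defaultdict

PAGE_SIZE = 4096                       # assumption: 4K base pages
KPF_COMPOUND_HEAD = 1 << 15            # bit numbers per kernel-page-flags.h
KPF_COMPOUND_TAIL = 1 << 16
KPF_THP = 1 << 22

def system_ram_pfn_ranges():
    """Parse /proc/iomem (needs root to see real addresses) into PFN ranges."""
    ranges = []
    with open('/proc/iomem') as f:
        for line in f:
            if 'System RAM' not in line:
                continue
            start, end = (int(x, 16) for x in line.split(':')[0].strip().split('-'))
            ranges.append((start // PAGE_SIZE, (end + 1) // PAGE_SIZE))
    return ranges

def thp_alloc_counts():
    """Count allocated THPs per order by scanning /proc/kpageflags."""
    counts = defaultdict(int)
    with open('/proc/kpageflags', 'rb') as f:
        for start_pfn, end_pfn in system_ram_pfn_ranges():
            f.seek(start_pfn * 8)
            # For simplicity, read each RAM range in one go (8 bytes per PFN).
            flags = struct.unpack(f'{end_pfn - start_pfn}Q',
                                  f.read((end_pfn - start_pfn) * 8))
            pfn = 0
            while pfn < len(flags):
                if flags[pfn] & KPF_THP and flags[pfn] & KPF_COMPOUND_HEAD:
                    tail = pfn + 1      # folio spans head + compound tail pages
                    while tail < len(flags) and flags[tail] & KPF_COMPOUND_TAIL:
                        tail += 1
                    counts[(tail - pfn).bit_length() - 1] += 1
                    pfn = tail
                else:
                    pfn += 1
    return counts

if __name__ == '__main__':
    for order, n in sorted(thp_alloc_counts().items()):
        print(f'order-{order} THPs allocated: {n}')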
>>>>>>>>>>
>>>>>>>>>> Having now thought about this for a few mins (in the shower, if anyone wants the complete picture :) ), this won't quite work. This approach doesn't have the virtual mapping information, so the best it can do is tell us "how many of each size of THP are allocated?" - it doesn't tell us anything about whether they are fully or partially mapped or what their alignment is (all necessary if we want to know if they are contpte-mapped). So I don't think this approach is going to be particularly useful.
>>>>>>>>>>
>>>>>>>>>> And this is also the big problem if we want to gather stats inside the kernel; if we want something equivalent to /proc/meminfo's AnonHugePages/ShmemPmdMapped/FilePmdMapped, we need to consider not just the allocation of the THP but also whether it is mapped. That's easy for PMD-mappings, because there is only one entry to consider - when you set it, you increment the number of PMD-mapped THPs, and when you clear it, you decrement. But for PTE-mappings it's harder; you know the size when you are mapping so it's easy to increment, but you can do a partial unmap, so you would need to scan the PTEs to figure out if we are unmapping the first page of a previously fully-PTE-mapped THP, which is expensive. We would need a cheap mechanism to determine "is this folio fully and contiguously mapped in at least one process?".
>>>>>>>>>
>>>>>>>>> As in OPPO's approach I shared with you before, we maintain two mapcounts:
>>>>>>>>> 1. entire map
>>>>>>>>> 2. subpage's map
>>>>>>>>> 3. if 1 and 2 both exist, it is DoubleMapped.
>>>>>>>>>
>>>>>>>>> This isn't a problem for us, and every time we do a partial unmap, we have an explicit cont_pte split which will decrease the entire mapcount and increase the subpage's mapcount.
>>>>>>>>>
>>>>>>>>> But its downside is that we expose this info to mm-core.
>>>>>>>>
>>>>>>>> OK, but I think we have a slightly more generic situation going on with the upstream; if I've understood correctly, you are using the PTE_CONT bit in the PTE to determine if it's fully mapped? That works for your case where you only have 1 size of THP that you care about (contpte-size). But for the upstream, we have multi-size THP so we can't use the PTE_CONT bit to determine if it's fully mapped, because we can only use that bit if the THP is at least 64K and aligned, and only on arm64. We would need a SW bit for this purpose, and the mm would need to update that SW bit for every PTE on the full -> partial map transition.
>>>>>>>
>>>>>>> Oh no. Let's not make everything more complicated for the purpose of some stats.
>>>>>>>
>>>>>>
>>>>>> Indeed, I was intending to argue *against* doing it this way. Fundamentally, if we want to know what's fully mapped and what's not, then I don't see any way other than by scanning the page tables, and we might as well do that in user space with this script.
>>>>>>
>>>>>> Although, I expect you will shortly make a proposal that is simple to implement and prove me wrong ;-)
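(Again purely as an illustration of the userspace page-table scan mentioned above, not the thpmaps implementation itself: /proc/<pid>/pagemap, as documented in Documentation/admin-guide/mm/pagemap.rst, can tell us whether a virtual range is backed by one naturally aligned, fully present, contiguous run of PFNs. The helper names and the pid/address in the usage comment are hypothetical; root is needed to see the PFNs:)

import struct

PAGE_SIZE = 4096
PM_PRESENT = 1 << 63                   # pagemap bit 63: page present
PM_PFN_MASK = (1 << 55) - 1            # pagemap bits 0-54: PFN if present

def pagemap_entries(pid, vaddr, nr_pages):
    """Raw pagemap entries covering nr_pages starting at vaddr."""
    with open(f'/proc/{pid}/pagemap', 'rb') as f:
        f.seek((vaddr // PAGE_SIZE) * 8)
        return struct.unpack(f'{nr_pages}Q', f.read(nr_pages * 8))

def fully_mapped_block(pid, vaddr, nr_pages):
    """True if the range is fully present, physically contiguous and aligned."""
    if vaddr % (nr_pages * PAGE_SIZE):
        return False                   # virtual start not naturally aligned
    entries = pagemap_entries(pid, vaddr, nr_pages)
    if not all(e & PM_PRESENT for e in entries):
        return False                   # at least one page unmapped => partial at best
    pfns = [e & PM_PFN_MASK for e in entries]
    if pfns[0] % nr_pages:
        return False                   # physical start not naturally aligned
    return all(pfns[i] == pfns[0] + i for i in range(nr_pages))

# e.g., is a 64K (16-page) block at some (hypothetical) address contpte-eligible?
# print(fully_mapped_block(1234, 0x7f0000000000, 16))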
>>>>>
>>>>> Unlikely :) As you said, once you have multiple folio sizes, it stops really making sense.
>>>>>
>>>>> Assume you have a 128 kiB pagecache folio, and half of that is mapped. You can set cont-pte bits on that half and all is fine. Or AMD can benefit from its optimizations without the cont-pte bit and everything is fine.
>>>>
>>>> Yes, but for debug and optimization, it's useful to know when THPs are fully/partially mapped, when they are unaligned etc. Anyway, the script does that for us, and I think we are tending towards agreement that there are unlikely to be any cost benefits from moving it into the kernel.
>>>
>>> Frequent partial unmap can defeat the whole purpose of using large folios. Just imagine a large folio can soon be split after it is formed. We lose the performance gain and might get a regression instead.
>>
>> nit: just because a THP gets partially unmapped in a process doesn't mean it gets split into order-0 pages. If the folio still has all its pages mapped at least once then no further action is taken. If the page being unmapped was the last mapping of that page, then the THP is put on the deferred split queue, so that it can be split in future if needed.
>
> Yes. That is exactly what the kernel is doing, but this is not so important for us to resolve performance issues.
>
>>>
>>> And this can be very frequent, for example, a userspace heap manager releasing memory page by page.
>>>
>>> In our real product deployment, we might not care about the second partial unmap, but we do care about the first partial unmap, as we can use this to know if a split has ever happened on these large folios. A partially unmapped subpage is unlikely to be re-mapped back.
>>>
>>> So I guess the 1st unmap is probably enough, at least for my product. I mean we care about whether a partial unmap has ever happened on a large folio more than about how exactly it is partially unmapped :-)
>>
>> I'm not sure what you are suggesting here? A global boolean that tells you if any folio in the system has ever been partially unmapped? That will almost certainly always be true, even for a very well tuned system.
>>
>>>>
>>>>>
>>>>> We want simple stats that tell us which folio sizes are actually allocated. For everything else, just scan the process to figure out what exactly is going on.
>>>>>
>>>> Certainly that's much easier to do. But is it valuable? It might be if we also keep stats for the number of failures to allocate the various sizes - then we can see what percentage of high order allocation attempts are successful, which is probably useful.
>
> My point is that we split large folios into two simple categories:
> 1. large folios which have never been partially unmapped
> 2. large folios which have ever been partially unmapped.

With the rmap batching stuff I am working on, you get the complete thing unmapped in most cases (as long as they are in one VMA) -- for example during munmap()/exit()/etc. Only when multiple VMAs are involved, or when someone COWs / PTE_DONTNEEDs / munmaps some subpages, do you get a single page of a large folio unmapped.

That could be used to simply flag the folio in your case. But not sure if that has to be handled on the rmap level. Could be handled higher up in the callchain (esp. pte-dontneed).

-- 
Cheers,

David / dhildenb