From mboxrd@z Thu Jan 1 00:00:00 1970
From: David Hildenbrand <david@redhat.com>
Date: Wed, 10 Jan 2024 16:27:12 +0100
Subject: Re: [RFC PATCH v1] tools/mm: Add thpmaps script to dump THP usage info
To: Zi Yan
Cc: Barry Song <21cnbao@gmail.com>, Ryan Roberts, John Hubbard,
 Andrew Morton, Zenghui Yu, Matthew Wilcox, Kefeng Wang, Alistair Popple,
 linux-mm@kvack.org
Message-ID: <832e7d9e-13de-4c02-aaa6-0665f26f7d2e@redhat.com>
In-Reply-To: <00E23A86-27CA-4A4D-832B-0DBABCC281EF@nvidia.com>
References: <20240102153828.1002295-1-ryan.roberts@arm.com>
 <3bd5e4a3-9f67-4483-9a0e-9abb5eb783cd@arm.com>
 <94ebe62b-5f55-4be9-b464-4105b4692496@arm.com>
 <68d5ce7e-6587-47c6-bd0f-988adf5d92a4@arm.com>
 <974a2670-7fa9-425e-921e-8d54a596e6cf@arm.com>
 <6c77f143-9c2c-4d17-9a2a-d69d9adf2eea@arm.com>
 <00E23A86-27CA-4A4D-832B-0DBABCC281EF@nvidia.com>
Organization: Red Hat
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
On 10.01.24 16:19, Zi Yan wrote:
> On 10 Jan 2024, at 7:12, David Hildenbrand wrote:
>
>> On 10.01.24 13:05, Barry Song wrote:
>>> On Wed, Jan 10, 2024 at 7:59 PM Ryan Roberts wrote:
>>>>
>>>> On 10/01/2024 11:38, Barry Song wrote:
>>>>> On Wed, Jan 10, 2024 at 7:21 PM Ryan Roberts wrote:
>>>>>>
>>>>>> On 10/01/2024 11:00, David Hildenbrand wrote:
>>>>>>> On 10.01.24 11:55, Ryan Roberts wrote:
>>>>>>>> On 10/01/2024 10:42, David Hildenbrand wrote:
>>>>>>>>> On 10.01.24 11:38, Ryan Roberts wrote:
>>>>>>>>>> On 10/01/2024 10:30, Barry Song wrote:
>>>>>>>>>>> On Wed, Jan 10, 2024 at 6:23 PM Ryan Roberts wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> On 10/01/2024 09:09, Barry Song wrote:
>>>>>>>>>>>>> On Wed, Jan 10, 2024 at 4:58 PM Ryan Roberts wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On 10/01/2024 08:02, Barry Song wrote:
>>>>>>>>>>>>>>> On Wed, Jan 10, 2024 at 12:16 PM John Hubbard wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On 1/9/24 19:51, Barry Song wrote:
>>>>>>>>>>>>>>>>> On Wed, Jan 10, 2024 at 11:35 AM John Hubbard wrote:
>>>>>>>>>>>>>>>> ...
>>>>>>>>>>>>>>>>>> Hi Ryan,
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> One thing that immediately came up during some recent testing of mTHP on arm64: the pid requirement is sometimes a little awkward. I'm running tests on a machine at a time for now, inside various containers and such, and it would be nice if there were an easy way to get some numbers for the mTHPs across the whole machine.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Just to confirm, you're expecting these "global" stats to be truly global and not per-container? (asking because you explicitly mentioned being in a container).
>>>>>>>>>>>>>> If you want per-container, then you can probably just create the container in a cgroup?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I'm not sure if that changes anything about thpmaps here. Probably this is fine as-is. But I wanted to give some initial reactions from just some quick runs: the global state would be convenient.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks for taking this for a spin! Appreciate the feedback.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> +1. but this seems to be impossible by scanning pagemap? so may we add this statistics information in the kernel, just like /proc/meminfo or a separate /proc/mthp_info?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Yes. From my perspective, it looks like the global stats are more useful initially, and the more detailed per-pid or per-cgroup stats are the next level of investigation. So it feels odd to start with the more detailed stats.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> probably because this can be done without modification of the kernel.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Yes indeed, as John said in an earlier thread, my previous attempts to add stats directly in the kernel got pushback; DavidH was concerned that we don't really know exactly how to account mTHPs yet (whole/partial/aligned/unaligned/per-size/etc) so didn't want to end up adding the wrong ABI and having to maintain it forever. There has also been some pushback regarding adding more values to multi-value files in sysfs, so David was suggesting coming up with a whole new scheme at some point (I know /proc/meminfo isn't sysfs, but the equivalent files for NUMA nodes and cgroups do live in sysfs).
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Anyway, this script was my attempt to 1) provide a short-term solution to the "we need some stats" request and 2) provide a context in which to explore what the right stats are - this script can evolve without the ABI problem.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> The detailed per-pid or per-cgroup is still quite useful to my case, in which we set mTHP enabled/disabled and allowed sizes according to vma types, e.g. libc_malloc, java heaps etc.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Different vma types can have different anon_name. So I can use the detailed info to find out if specific VMAs have gotten mTHP properly and how many they have gotten.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> However, Ryan did clearly say, above, "In future we may wish to introduce stats directly into the kernel (e.g. smaps or similar)". And earlier he ran into some pushback on trying to set up /proc or /sys values because this is still such an early feature.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I wonder if we could put the global stats in debugfs for now? That's specifically supposed to be a "we promise *not* to keep this ABI stable" location.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Now that I think about it, I wonder if we can add a --global mode to the script (or just infer global when neither --pid nor --cgroup is provided). I think I should be able to determine all the physical memory ranges from /proc/iomem, then grab all the info we need from /proc/kpageflags. We should then be able to process it all in much the same way as for --pid/--cgroup and provide the same stats, but it will apply globally. What do you think?
>>>>>>>>>>>>
>>>>>>>>>>>> Having now thought about this for a few mins (in the shower, if anyone wants the complete picture :) ), this won't quite work. This approach doesn't have the virtual mapping information, so the best it can do is tell us "how many of each size of THP are allocated?" - it doesn't tell us anything about whether they are fully or partially mapped or what their alignment is (all necessary if we want to know if they are contpte-mapped). So I don't think this approach is going to be particularly useful.
>>>>>>>>>>>>
>>>>>>>>>>>> And this is also the big problem if we want to gather stats inside the kernel; if we want something equivalent to /proc/meminfo's AnonHugePages/ShmemPmdMapped/FilePmdMapped, we need to consider not just the allocation of the THP but also whether it is mapped. That's easy for PMD-mappings, because there is only one entry to consider - when you set it, you increment the number of PMD-mapped THPs, and when you clear it, you decrement. But for PTE-mappings it's harder; you know the size when you are mapping so it's easy to increment, but you can do a partial unmap, so you would need to scan the PTEs to figure out if we are unmapping the first page of a previously fully-PTE-mapped THP, which is expensive. We would need a cheap mechanism to determine "is this folio fully and contiguously mapped in at least one process?".
>>>>>>>>>>>
>>>>>>>>>>> as in OPPO's approach I shared with you before, we maintain two mapcounts:
>>>>>>>>>>> 1. entire map
>>>>>>>>>>> 2. subpage's map
>>>>>>>>>>> 3. if 1 and 2 both exist, it is DoubleMapped.
>>>>>>>>>>>
>>>>>>>>>>> This isn't a problem for us. and every time we do a partial unmap, we have an explicit cont_pte split which will decrease the entire map and increase the subpage's mapcount.
>>>>>>>>>>>
>>>>>>>>>>> but its downside is that we expose this info to mm-core.
>>>>>>>>>>
>>>>>>>>>> OK, but I think we have a slightly more generic situation going on with the upstream; if I've understood correctly, you are using the PTE_CONT bit in the PTE to determine if it's fully mapped? That works for your case where you only have 1 size of THP that you care about (contpte-size). But for the upstream, we have multi-size THP so we can't use the PTE_CONT bit to determine if it's fully mapped, because we can only use that bit if the THP is at least 64K and aligned, and only on arm64.
>>>>>>>>>> We would need a SW bit for this purpose, and the mm would need to update that SW bit for every PTE on the full -> partial map transition.
>>>>>>>>>
>>>>>>>>> Oh no. Let's not make everything more complicated for the purpose of some stats.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Indeed, I was intending to argue *against* doing it this way. Fundamentally, if we want to know what's fully mapped and what's not, then I don't see any way other than by scanning the page tables, and we might as well do that in user space with this script.
>>>>>>>>
>>>>>>>> Although, I expect you will shortly make a proposal that is simple to implement and prove me wrong ;-)
>>>>>>>
>>>>>>> Unlikely :) As you said, once you have multiple folio sizes, it stops really making sense.
>>>>>>>
>>>>>>> Assume you have a 128 kiB pagecache folio, and half of that is mapped. You can set cont-pte bits on that half and all is fine. Or AMD can benefit from its optimizations without the cont-pte bit and everything is fine.
>>>>>>
>>>>>> Yes, but for debug and optimization, it's useful to know when THPs are fully/partially mapped, when they are unaligned, etc. Anyway, the script does that for us, and I think we are tending towards agreement that there are unlikely to be any cost benefits from moving it into the kernel.
>>>>>
>>>>> frequent partial unmap can defeat the whole purpose of using large folios. just imagine a large folio can soon be split after it is formed. we lose the performance gain and might get a regression instead.
>>>>
>>>> nit: just because a THP gets partially unmapped in a process doesn't mean it gets split into order-0 pages. If the folio still has all its pages mapped at least once then no further action is taken. If the page being unmapped was the last mapping of that page, then the THP is put on the deferred split queue, so that it can be split in future if needed.
>>>
>>> yes. That is exactly what the kernel is doing, but this is not so important for us to resolve performance issues.
>>>
>>>>>
>>>>> and this can be very frequent, for example, one userspace heap management is releasing memory page by page.
>>>>>
>>>>> In our real product deployment, we might not care about the second partial unmap, but we do care about the first partial unmap, as we can use this to know if a split has ever happened on this large folio. a partially unmapped subpage is unlikely to be re-mapped back.
>>>>>
>>>>> so i guess the 1st unmap is probably enough, at least for my product. I mean we care about whether a partial unmap has ever happened on a large folio more than how they are exactly partially unmapped :-)
>>>>
>>>> I'm not sure what you are suggesting here? A global boolean that tells you if any folio in the system has ever been partially unmapped? That will almost certainly always be true, even for a very well tuned system.
>>>>
>>>>>
>>>>>>
>>>>>>>
>>>>>>> We want simple stats that tell us which folio sizes are actually allocated. For everything else, just scan the process to figure out what exactly is going on.
>>>>>>>
>>>>>>
>>>>>> Certainly that's much easier to do. But is it valuable? It might be if we also keep stats for the number of failures to allocate the various sizes - then we can see what percentage of high-order allocation attempts are successful, which is probably useful.
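
[Illustration only: a minimal Python 3 sketch of the kind of userspace page-table scanning discussed above, not the actual thpmaps implementation. It assumes a 4 KiB base page size and root privileges (pagemap hides PFNs from unprivileged readers); the helper names and the simple "count THP pages per VMA" metric are hypothetical, and the bit numbers come from include/uapi/linux/kernel-page-flags.h.]

#!/usr/bin/env python3
# Hypothetical sketch: count how many present pages of one VMA belong to a
# THP, by walking /proc/<pid>/pagemap (virtual -> PFN) and /proc/kpageflags
# (per-PFN flags). Requires root; assumes a 4 KiB base page size.
import struct
import sys

PAGE_SIZE = 4096
PM_PRESENT = 1 << 63            # pagemap bit 63: page present in RAM
PM_PFN_MASK = (1 << 55) - 1     # pagemap bits 0-54: page frame number
KPF_THP = 1 << 22               # mask for bit 22 (KPF_THP)
KPF_COMPOUND_HEAD = 1 << 15     # mask for bit 15 (KPF_COMPOUND_HEAD)

def read_u64(f, index):
    """Read the 64-bit entry at 'index' from a /proc pseudo-file."""
    f.seek(index * 8)
    (value,) = struct.unpack('<Q', f.read(8))
    return value

def vma_thp_stats(pid, vstart, vend):
    """Return (present_pages, thp_pages, thp_heads) for [vstart, vend)."""
    present = thp = heads = 0
    with open(f'/proc/{pid}/pagemap', 'rb') as pagemap, \
         open('/proc/kpageflags', 'rb') as kpageflags:
        for va in range(vstart, vend, PAGE_SIZE):
            entry = read_u64(pagemap, va // PAGE_SIZE)
            if not entry & PM_PRESENT:
                continue
            present += 1
            flags = read_u64(kpageflags, entry & PM_PFN_MASK)
            if flags & KPF_THP:
                thp += 1
                if flags & KPF_COMPOUND_HEAD:
                    heads += 1
    return present, thp, heads

if __name__ == '__main__':
    # Hypothetical usage: vma_thp.py <pid> <vma_start_hex> <vma_end_hex>
    pid, start, end = int(sys.argv[1]), int(sys.argv[2], 16), int(sys.argv[3], 16)
    print(vma_thp_stats(pid, start, end))
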
>>> My point is that we split large folios into two simple categories,
>>> 1. large folios which have never been partially unmapped
>>> 2. large folios which have ever been partially unmapped.
>>>
>>
>> With the rmap batching stuff I am working on, you get the complete thing unmapped in most cases (as long as they are in one VMA) -- for example during munmap()/exit()/etc.
>
> IIUC, there are two cases:
>
> 1. munmap() a range within a VMA: the rmap batching can avoid temporarily partially unmapped folios, since it does the range operations as a whole.
>
> 2. Barry has a case where userspace, e.g., the heap management, releases memory page by page, which rmap batching cannot help, unless either userspace batches memory releases or the kernel delays and aggregates these memory-releasing syscalls.

Exactly. And for 2. you immediately know that someone is partially unmapping a large folio. At least temporarily. Compared to doing a MADV_DONTNEED that covers a whole large folio (e.g., THP).

-- 
Cheers,

David / dhildenb
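
[Illustration only: a minimal sketch, in the same spirit as the sketch above, of the two release patterns contrasted in this exchange - page-by-page release versus one MADV_DONTNEED over the whole range. It assumes Python 3.8+ (mmap.madvise) on Linux; whether the region is actually backed by an mTHP-sized folio depends on the system's THP settings, so this only shows the syscall pattern, not the kernel behaviour.]

#!/usr/bin/env python3
# Release an anonymous region page by page vs. with one madvise(MADV_DONTNEED)
# covering the whole range. With mTHP enabled, the page-by-page pattern
# partially unmaps a large folio on the very first call.
import mmap

PAGE = 4096
SIZE = 64 * 1024                                  # e.g. one 64 KiB region

m = mmap.mmap(-1, SIZE, flags=mmap.MAP_PRIVATE)   # anonymous private mapping
m.write(b'x' * SIZE)                              # fault the whole range in

# Pattern 1: page-by-page release (like a heap trimming one page at a time).
for off in range(0, SIZE, PAGE):
    m.madvise(mmap.MADV_DONTNEED, off, PAGE)

# Pattern 2: repopulate, then drop the whole (folio-sized) range in one call.
m.seek(0)
m.write(b'y' * SIZE)
m.madvise(mmap.MADV_DONTNEED, 0, SIZE)

m.close()
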