From mboxrd@z Thu Jan 1 00:00:00 1970
From: David Hildenbrand <david@redhat.com>
Date: Wed, 10 Jan 2024 12:24:14 +0100
Subject: Re: [RFC PATCH v1] tools/mm: Add thpmaps script to dump THP usage info
To: Ryan Roberts, Barry Song <21cnbao@gmail.com>
Cc: John Hubbard, Andrew Morton, Zenghui Yu, Matthew Wilcox, Kefeng Wang,
 Zi Yan, Alistair Popple, linux-mm@kvack.org
In-Reply-To: <6c77f143-9c2c-4d17-9a2a-d69d9adf2eea@arm.com>
References: <20240102153828.1002295-1-ryan.roberts@arm.com>
 <4e7445a0-acc9-487f-999f-a2b6d03d265e@nvidia.com>
 <3bd5e4a3-9f67-4483-9a0e-9abb5eb783cd@arm.com>
 <94ebe62b-5f55-4be9-b464-4105b4692496@arm.com>
 <68d5ce7e-6587-47c6-bd0f-988adf5d92a4@arm.com>
 <974a2670-7fa9-425e-921e-8d54a596e6cf@arm.com>
 <6c77f143-9c2c-4d17-9a2a-d69d9adf2eea@arm.com>
Organization: Red Hat
Content-Type: text/plain; charset=UTF-8; format=flowed

On 10.01.24 12:20, Ryan Roberts wrote:
> On 10/01/2024 11:00, David Hildenbrand wrote:
>> On 10.01.24 11:55, Ryan Roberts wrote:
>>> On 10/01/2024 10:42, David Hildenbrand wrote:
>>>> On 10.01.24 11:38, Ryan Roberts wrote:
>>>>> On 10/01/2024 10:30, Barry Song wrote:
>>>>>> On Wed, Jan 10, 2024 at 6:23 PM Ryan Roberts wrote:
>>>>>>>
>>>>>>> On 10/01/2024 09:09, Barry Song wrote:
>>>>>>>> On Wed, Jan 10, 2024 at 4:58 PM Ryan Roberts wrote:
>>>>>>>>>
>>>>>>>>> On 10/01/2024 08:02, Barry Song wrote:
>>>>>>>>>> On Wed, Jan 10, 2024 at 12:16 PM John Hubbard wrote:
>>>>>>>>>>>
>>>>>>>>>>> On 1/9/24 19:51, Barry Song wrote:
>>>>>>>>>>>> On Wed, Jan 10, 2024 at 11:35 AM John Hubbard wrote:
>>>>>>>>>>> ...
>>>>>>>>>>>>> Hi Ryan,
>>>>>>>>>>>>>
>>>>>>>>>>>>> One thing that immediately came up during some recent testing of
>>>>>>>>>>>>> mTHP on arm64: the pid requirement is sometimes a little awkward.
>>>>>>>>>>>>> I'm running tests on one machine at a time for now, inside various
>>>>>>>>>>>>> containers and such, and it would be nice if there were an easy way
>>>>>>>>>>>>> to get some numbers for the mTHPs across the whole machine.
>>>>>>>>>
>>>>>>>>> Just to confirm, you're expecting these "global" stats to be truly
>>>>>>>>> global and not per-container? (Asking because you explicitly mentioned
>>>>>>>>> being in a container.) If you want per-container, then you can probably
>>>>>>>>> just create the container in a cgroup?
>>>>>>>>>
>>>>>>>>>>>>> I'm not sure if that changes anything about thpmaps here. Probably
>>>>>>>>>>>>> this is fine as-is. But I wanted to give some initial reactions
>>>>>>>>>>>>> from just some quick runs: the global state would be convenient.
>>>>>>>>>
>>>>>>>>> Thanks for taking this for a spin! Appreciate the feedback.
>>>>>>>>>>>>
>>>>>>>>>>>> +1. But this seems to be impossible by scanning pagemap?
>>>>>>>>>>>> So may we add this statistics information in the kernel, just like
>>>>>>>>>>>> /proc/meminfo, or a separate /proc/mthp_info?
>>>>>>>>>>>
>>>>>>>>>>> Yes. From my perspective, it looks like the global stats are more
>>>>>>>>>>> useful initially, and the more detailed per-pid or per-cgroup stats
>>>>>>>>>>> are the next level of investigation. So it feels odd to start with
>>>>>>>>>>> the more detailed stats.
>>>>>>>>>>
>>>>>>>>>> Probably because this can be done without modification of the kernel.
>>>>>>>>>
>>>>>>>>> Yes indeed; as John said in an earlier thread, my previous attempts to
>>>>>>>>> add stats directly in the kernel got pushback; DavidH was concerned
>>>>>>>>> that we don't really know exactly how to account mTHPs yet
>>>>>>>>> (whole/partial/aligned/unaligned/per-size/etc), so he didn't want to
>>>>>>>>> end up adding the wrong ABI and having to maintain it forever. There
>>>>>>>>> has also been some pushback regarding adding more values to
>>>>>>>>> multi-value files in sysfs, so David was suggesting coming up with a
>>>>>>>>> whole new scheme at some point (I know /proc/meminfo isn't sysfs, but
>>>>>>>>> the equivalent files for NUMA nodes and cgroups do live in sysfs).
>>>>>>>>>
>>>>>>>>> Anyway, this script was my attempt to 1) provide a short-term solution
>>>>>>>>> to the "we need some stats" request and 2) provide a context in which
>>>>>>>>> to explore what the right stats are - this script can evolve without
>>>>>>>>> the ABI problem.
>>>>>>>>>
>>>>>>>>>> The detailed per-pid or per-cgroup info is still quite useful for my
>>>>>>>>>> case, in which we set mTHP enabled/disabled and allowed sizes
>>>>>>>>>> according to VMA types, e.g. libc_malloc, Java heaps etc.
>>>>>>>>>>
>>>>>>>>>> Different VMA types can have different anon_name. So I can use the
>>>>>>>>>> detailed info to find out whether specific VMAs have gotten mTHP
>>>>>>>>>> properly and how many they have gotten.
>>>>>>>>>>
>>>>>>>>>>> However, Ryan did clearly say, above, "In future we may wish to
>>>>>>>>>>> introduce stats directly into the kernel (e.g. smaps or similar)".
>>>>>>>>>>> And earlier he ran into some pushback on trying to set up /proc or
>>>>>>>>>>> /sys values because this is still such an early feature.
>>>>>>>>>>>
>>>>>>>>>>> I wonder if we could put the global stats in debugfs for now? That's
>>>>>>>>>>> specifically supposed to be a "we promise *not* to keep this ABI
>>>>>>>>>>> stable" location.
>>>>>>>>>
>>>>>>>>> Now that I think about it, I wonder if we can add a --global mode to
>>>>>>>>> the script (or just infer global when neither --pid nor --cgroup are
>>>>>>>>> provided). I think I should be able to determine all the physical
>>>>>>>>> memory ranges from /proc/iomem, then grab all the info we need from
>>>>>>>>> /proc/kpageflags. We should then be able to process it all in much the
>>>>>>>>> same way as for --pid/--cgroup and provide the same stats, but it will
>>>>>>>>> apply globally. What do you think?
>>>>>>>
>>>>>>> Having now thought about this for a few mins (in the shower, if anyone
>>>>>>> wants the complete picture :) ), this won't quite work. This approach
>>>>>>> doesn't have the virtual mapping information, so the best it can do is
>>>>>>> tell us "how many of each size of THP are allocated?"
>>>>>>> - it doesn't tell us anything about whether they are fully or partially
>>>>>>> mapped or what their alignment is (all necessary if we want to know
>>>>>>> whether they are contpte-mapped). So I don't think this approach is
>>>>>>> going to be particularly useful.
>>>>>>>
>>>>>>> And this is also the big problem if we want to gather stats inside the
>>>>>>> kernel; if we want something equivalent to /proc/meminfo's
>>>>>>> AnonHugePages/ShmemPmdMapped/FilePmdMapped, we need to consider not just
>>>>>>> the allocation of the THP but also whether it is mapped. That's easy for
>>>>>>> PMD-mappings, because there is only one entry to consider - when you set
>>>>>>> it, you increment the number of PMD-mapped THPs, and when you clear it,
>>>>>>> you decrement. But for PTE-mappings it's harder; you know the size when
>>>>>>> you are mapping, so it's easy to increment, but you can do a partial
>>>>>>> unmap, so you would need to scan the PTEs to figure out whether we are
>>>>>>> unmapping the first page of a previously fully-PTE-mapped THP, which is
>>>>>>> expensive. We would need a cheap mechanism to determine "is this folio
>>>>>>> fully and contiguously mapped in at least one process?".
>>>>>>
>>>>>> OPPO's approach, which I shared with you before, is to maintain two
>>>>>> mapcounts:
>>>>>> 1. entire map
>>>>>> 2. subpage map
>>>>>> 3. if both 1 and 2 exist, it is DoubleMapped.
>>>>>>
>>>>>> This isn't a problem for us, and every time we do a partial unmap, we
>>>>>> have an explicit cont_pte split which decreases the entire mapcount and
>>>>>> increases the subpage mapcount.
>>>>>>
>>>>>> But its downside is that we expose this info to mm-core.
>>>>>
>>>>> OK, but I think we have a slightly more generic situation going on with
>>>>> the upstream; if I've understood correctly, you are using the PTE_CONT
>>>>> bit in the PTE to determine whether it's fully mapped? That works for
>>>>> your case, where you only have one size of THP that you care about
>>>>> (contpte-size). But for the upstream, we have multi-size THP, so we can't
>>>>> use the PTE_CONT bit to determine whether it's fully mapped, because we
>>>>> can only use that bit if the THP is at least 64K and aligned, and only on
>>>>> arm64. We would need a SW bit for this purpose, and the mm would need to
>>>>> update that SW bit for every PTE on the full -> partial map transition.
>>>>
>>>> Oh no. Let's not make everything more complicated for the purpose of some
>>>> stats.
>>>
>>> Indeed, I was intending to argue *against* doing it this way.
>>> Fundamentally, if we want to know what's fully mapped and what's not, then
>>> I don't see any way other than by scanning the page tables, and we might
>>> as well do that in user space with this script.
>>>
>>> Although, I expect you will shortly make a proposal that is simple to
>>> implement and prove me wrong ;-)
>>
>> Unlikely :) As you said, once you have multiple folio sizes, it stops
>> really making sense.
>>
>> Assume you have a 128 KiB pagecache folio, and half of that is mapped. You
>> can set cont-pte bits on that half and all is fine. Or AMD can benefit from
>> its optimizations without the cont-pte bit and everything is fine.
>
> Yes, but for debug and optimization, it's useful to know when THPs are
> fully/partially mapped, when they are unaligned, etc. Anyway, the script
> does that for us, and I think we are tending towards agreement that there
> are unlikely to be any cost benefits from moving it into the kernel.

Agreed.
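
As a rough illustration of the --global idea discussed above (and of why it
can only answer "how many THPs of each size are allocated", not how they are
mapped), a minimal, untested Python sketch could walk the "System RAM" ranges
in /proc/iomem and count THP head pages in /proc/kpageflags. The flag bit
numbers follow Documentation/admin-guide/mm/pagemap.rst; the helper names are
made up for illustration, and a real implementation would need root
privileges, error handling and per-order accounting:

    import re
    import struct

    PAGE_SIZE = 4096
    KPF_COMPOUND_HEAD = 1 << 15    # bit numbers per admin-guide/mm/pagemap.rst
    KPF_THP = 1 << 22
    CHUNK_PAGES = 1 << 16          # read 64Ki entries (512 KiB) per syscall

    def system_ram_pfn_ranges():
        # Yield (first_pfn, end_pfn) for every "System RAM" range in
        # /proc/iomem. Note: the addresses read back as zero unless root.
        with open('/proc/iomem') as f:
            for line in f:
                m = re.match(r'\s*([0-9a-f]+)-([0-9a-f]+) : System RAM', line)
                if m:
                    start, end = int(m.group(1), 16), int(m.group(2), 16)
                    yield start // PAGE_SIZE, (end + 1) // PAGE_SIZE

    def count_allocated_thps():
        # /proc/kpageflags holds one native-endian u64 flags word per PFN.
        # Counting pages that are both compound heads and THP gives the number
        # of allocated THPs -- but, as discussed above, it says nothing about
        # how (or whether) each folio is actually mapped.
        nr_thps = 0
        with open('/proc/kpageflags', 'rb') as f:
            for pfn, pfn_end in system_ram_pfn_ranges():
                while pfn < pfn_end:
                    n = min(CHUNK_PAGES, pfn_end - pfn)
                    f.seek(pfn * 8)
                    buf = f.read(n * 8)
                    for (flags,) in struct.iter_unpack('=Q', buf):
                        if flags & KPF_THP and flags & KPF_COMPOUND_HEAD:
                            nr_thps += 1
                    pfn += n
        return nr_thps

    if __name__ == '__main__':
        print('allocated THPs (all orders):', count_allocated_thps())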

And just adding: while one process might map a folio unaligned/partially/...,
another one might map it aligned/fully. So this per-process scanning is really
required (because per-process stats per folio are pretty much out of scope :) ).

>> We want simple stats that tell us which folio sizes are actually allocated.
>> For everything else, just scan the process to figure out what exactly is
>> going on.
>
> Certainly that's much easier to do. But is it valuable? It might be if we
> also keep stats for the number of failures to allocate the various sizes -
> then we can see what percentage of high-order allocation attempts are
> successful, which is probably useful.

Agreed.

-- 
Cheers,

David / dhildenb
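
For completeness, the per-process side of this - the question a scan like the
script's ultimately has to answer per VMA - boils down to something like the
toy check below: is a given virtual range fully present, physically
contiguous and naturally aligned (i.e. contpte-mappable on arm64)? This is a
hedged sketch only, not how thpmaps is actually implemented; the entry layout
is taken from Documentation/admin-guide/mm/pagemap.rst, the function name is
made up, and reading PFNs from /proc/<pid>/pagemap needs CAP_SYS_ADMIN:

    import struct

    PAGE_SIZE = 4096
    PM_PFN_MASK = (1 << 55) - 1    # pagemap bits 0-54: PFN, when present
    PM_PRESENT = 1 << 63           # pagemap bit 63: page present

    def range_is_contpte_mappable(pid, vaddr, size=64 * 1024):
        # Read one 8-byte pagemap entry per page of [vaddr, vaddr + size).
        npages = size // PAGE_SIZE
        with open(f'/proc/{pid}/pagemap', 'rb') as f:
            f.seek((vaddr // PAGE_SIZE) * 8)
            buf = f.read(npages * 8)
        pfns = []
        for (entry,) in struct.iter_unpack('=Q', buf):
            if not entry & PM_PRESENT:
                return False                 # hole: not fully mapped
            pfns.append(entry & PM_PFN_MASK)
        # Fully mapped is not enough: the pages must also be physically
        # contiguous and naturally aligned to be contpte-mappable.
        contiguous = all(a + 1 == b for a, b in zip(pfns, pfns[1:]))
        aligned = vaddr % size == 0 and pfns[0] % npages == 0
        return contiguous and aligned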