Message-ID:
Date: Mon, 24 Feb 2025 22:53:30 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 16/20] fs/proc/page: remove per-page mapcount
 dependency for /proc/kpagecount (CONFIG_NO_PAGE_MAPCOUNT)
To: Zi Yan
Cc: linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
 cgroups@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
 linux-api@vger.kernel.org, Andrew Morton, "Matthew Wilcox (Oracle)",
 Tejun Heo, Zefan Li, Johannes Weiner, Michal Koutný, Jonathan Corbet,
 Andy Lutomirski, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
 Dave Hansen, Muchun Song, "Liam R. Howlett", Lorenzo Stoakes,
 Vlastimil Babka, Jann Horn
References: <20250224165603.1434404-1-david@redhat.com>
 <20250224165603.1434404-17-david@redhat.com>
 <8a5e94a2-8cd7-45f5-a2be-525242c0cd16@redhat.com>
 <9010E213-9FC5-4900-B971-D032CB879F2E@nvidia.com>
 <567b02b0-3e39-4e3c-ba41-1bc59217a421@redhat.com>
 <30C2A030-7438-4298-87D8-287BED1EA473@nvidia.com>
 <3f6b7e66-3412-4af2-97d9-6d31d6373079@redhat.com>
 <1FAD9E31-3D11-4759-9363-4B76BE96002A@nvidia.com>
From: David Hildenbrand
Organization: Red Hat
In-Reply-To: <1FAD9E31-3D11-4759-9363-4B76BE96002A@nvidia.com>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 24.02.25 22:44, Zi Yan wrote:
> On 24 Feb 2025, at 16:42, David Hildenbrand wrote:
>
>> On 24.02.25 22:23, Zi Yan wrote:
>>> On 24 Feb 2025, at 16:15, David Hildenbrand wrote:
>>>
>>>> On 24.02.25 22:10, Zi Yan wrote:
>>>>> On 24 Feb 2025, at 16:02, David Hildenbrand wrote:
>>>>>
>>>>>> On 24.02.25 21:40, Zi Yan wrote:
>>>>>>> On Mon Feb 24, 2025 at 11:55 AM EST, David Hildenbrand wrote:
>>>>>>>> Let's implement an alternative when per-page mapcounts in large folios
>>>>>>>> are no longer maintained -- soon with CONFIG_NO_PAGE_MAPCOUNT.
>>>>>>>>
>>>>>>>> For large folios, we'll return the per-page average mapcount within the
>>>>>>>> folio, except when the average is 0 but the folio is mapped: then we
>>>>>>>> return 1.
>>>>>>>>
>>>>>>>> For hugetlb folios and for large folios that are fully mapped
>>>>>>>> into all address spaces, there is no change.
>>>>>>>>
>>>>>>>> As an alternative, we could simply return 0 for non-hugetlb large folios,
>>>>>>>> or disable this legacy interface with CONFIG_NO_PAGE_MAPCOUNT.
>>>>>>>>
>>>>>>>> But the information exposed by this interface can still be valuable, and
>>>>>>>> frequently we deal with fully-mapped large folios where the average
>>>>>>>> corresponds to the actual page mapcount.
>>>>>>>> So we'll leave it like this for now and document the new behavior.
>>>>>>>>
>>>>>>>> Note: this interface is likely not very relevant for performance. If
>>>>>>>> ever required, we could try doing a rather expensive rmap walk to collect
>>>>>>>> precisely how often this folio page is mapped.
>>>>>>>>
>>>>>>>> Signed-off-by: David Hildenbrand
>>>>>>>> ---
>>>>>>>>  Documentation/admin-guide/mm/pagemap.rst |  7 +++++-
>>>>>>>>  fs/proc/internal.h                       | 31 ++++++++++++++++++++++++
>>>>>>>>  fs/proc/page.c                           | 19 ++++++++++++---
>>>>>>>>  3 files changed, 53 insertions(+), 4 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/Documentation/admin-guide/mm/pagemap.rst b/Documentation/admin-guide/mm/pagemap.rst
>>>>>>>> index caba0f52dd36c..49590306c61a0 100644
>>>>>>>> --- a/Documentation/admin-guide/mm/pagemap.rst
>>>>>>>> +++ b/Documentation/admin-guide/mm/pagemap.rst
>>>>>>>> @@ -42,7 +42,12 @@ There are four components to pagemap:
>>>>>>>>     skip over unmapped regions.
>>>>>>>>  * ``/proc/kpagecount``.  This file contains a 64-bit count of the number of
>>>>>>>> -   times each page is mapped, indexed by PFN.
>>>>>>>> +   times each page is mapped, indexed by PFN. Some kernel configurations do
>>>>>>>> +   not track the precise number of times a page part of a larger allocation
>>>>>>>> +   (e.g., THP) is mapped. In these configurations, the average number of
>>>>>>>> +   mappings per page in this larger allocation is returned instead. However,
>>>>>>>> +   if any page of the large allocation is mapped, the returned value will
>>>>>>>> +   be at least 1.
>>>>>>>>  The page-types tool in the tools/mm directory can be used to query the
>>>>>>>>  number of times a page is mapped.
>>>>>>>> diff --git a/fs/proc/internal.h b/fs/proc/internal.h
>>>>>>>> index 1695509370b88..16aa1fd260771 100644
>>>>>>>> --- a/fs/proc/internal.h
>>>>>>>> +++ b/fs/proc/internal.h
>>>>>>>> @@ -174,6 +174,37 @@ static inline int folio_precise_page_mapcount(struct folio *folio,
>>>>>>>>  	return mapcount;
>>>>>>>>  }
>>>>>>>> +/**
>>>>>>>> + * folio_average_page_mapcount() - Average number of mappings per page in this
>>>>>>>> + *				   folio
>>>>>>>> + * @folio: The folio.
>>>>>>>> + *
>>>>>>>> + * The average number of present user page table entries that reference each
>>>>>>>> + * page in this folio as tracked via the RMAP: either referenced directly
>>>>>>>> + * (PTE) or as part of a larger area that covers this page (e.g., PMD).
>>>>>>>> + *
>>>>>>>> + * Returns: The average number of mappings per page in this folio. 0 for
>>>>>>>> + * folios that are not mapped to user space or are not tracked via the RMAP
>>>>>>>> + * (e.g., shared zeropage).
>>>>>>>> + */
>>>>>>>> +static inline int folio_average_page_mapcount(struct folio *folio)
>>>>>>>> +{
>>>>>>>> +	int mapcount, entire_mapcount;
>>>>>>>> +	unsigned int adjust;
>>>>>>>> +
>>>>>>>> +	if (!folio_test_large(folio))
>>>>>>>> +		return atomic_read(&folio->_mapcount) + 1;
>>>>>>>> +
>>>>>>>> +	mapcount = folio_large_mapcount(folio);
>>>>>>>> +	entire_mapcount = folio_entire_mapcount(folio);
>>>>>>>> +	if (mapcount <= entire_mapcount)
>>>>>>>> +		return entire_mapcount;
>>>>>>>> +	mapcount -= entire_mapcount;
>>>>>>>> +
>>>>>>>> +	adjust = folio_large_nr_pages(folio) / 2;
>>>>>>
>>>>>> Thanks for the review!
>>>>>>
>>>>>>>
>>>>>>> Is there any reason for choosing this adjust number? A comment might be
>>>>>>> helpful in case people want to change it later, either with some reasoning
>>>>>>> or just saying it is chosen empirically.
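To make the quoted "adjust" term concrete, here is a minimal userspace
sketch of the same arithmetic, assuming an order-9 (512-page) folio and
plain integers in place of the folio helpers (avg_mapcount() is an
illustrative stand-in, not a kernel function):

#include <stdio.h>

/* Round-to-nearest average: adding half of the divisor (nr_pages / 2,
 * i.e. the "adjust" above) before the shift turns truncating division
 * into rounding to the nearest integer. */
static int avg_mapcount(int mapcount, unsigned int order)
{
	unsigned int nr_pages = 1u << order;	/* 512 for order 9 */
	unsigned int adjust = nr_pages / 2;	/* 256 */

	return (mapcount + adjust) >> order;
}

int main(void)
{
	printf("%d\n", avg_mapcount(1, 9));	/* 1 of 512 pages mapped once -> 0 */
	printf("%d\n", avg_mapcount(256, 9));	/* half of the pages mapped   -> 1 */
	printf("%d\n", avg_mapcount(512, 9));	/* every page mapped once     -> 1 */
	return 0;
}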
>>>>>>
>>>>>> We're dividing by folio_large_nr_pages(folio) (shifting by folio_large_order(folio)), so this is not a magic number at all.
>>>>>>
>>>>>> So this should be "ordinary" rounding.
>>>>>
>>>>> I thought the rounding would be (mapcount + 511) / 512.
>>>>
>>>> Yes, that's "rounding up".
>>>>
>>>>> But
>>>>> that means if one subpage is mapped, the average will be 1.
>>>>> Your rounding means if at least half of the subpages is mapped,
>>>>> the average will be 1. Others might think 1/3 is mapped,
>>>>> the average will be 1. That is why I think adjust looks like
>>>>> a magic number.
>>>>
>>>> I think all callers could tolerate (or benefit) from folio_average_page_mapcount() returning at least 1 in case any page is mapped.
>>>>
>>>> There was a reason why I decided to round to the nearest integer instead.
>>>>
>>>> Let me think about this once more, I went back and forth a couple of times on this.
>>>
>>> Sure. Your current choice might be good enough for now. My intent of
>>> adding a comment here is just to let people know the adjust can be
>>> changed in the future. :)
>>
>> The following will make the callers easier to read, while keeping
>> the rounding to the nearest integer for the other cases untouched.
>>
>> +/**
>> + * folio_average_page_mapcount() - Average number of mappings per page in this
>> + *				   folio
>> + * @folio: The folio.
>> + *
>> + * The average number of present user page table entries that reference each
>> + * page in this folio as tracked via the RMAP: either referenced directly
>> + * (PTE) or as part of a larger area that covers this page (e.g., PMD).
>> + *
>> + * The average is calculated by rounding to the nearest integer; however,
>> + * if at least a single page is mapped, the average will be at least 1.
>> + *
>> + * Returns: The average number of mappings per page in this folio.
>> + */
>> +static inline int folio_average_page_mapcount(struct folio *folio)
>> +{
>> +	int mapcount, entire_mapcount, avg;
>> +
>> +	if (!folio_test_large(folio))
>> +		return atomic_read(&folio->_mapcount) + 1;
>> +
>> +	mapcount = folio_large_mapcount(folio);
>> +	if (unlikely(mapcount <= 0))
>> +		return 0;
>> +	entire_mapcount = folio_entire_mapcount(folio);
>> +	if (mapcount <= entire_mapcount)
>> +		return entire_mapcount;
>> +	mapcount -= entire_mapcount;
>> +
>> +	/* Round to closest integer ... */
>> +	avg = (mapcount + folio_large_nr_pages(folio) / 2) >> folio_large_order(folio);
>> +	avg += entire_mapcount;
>> +	/* ... but return at least 1. */
>> +	return max_t(int, avg, 1);
>> +}
>
> LGTM. Thanks.

Thanks!

BTW, I think I chose the "round to closest integer" primarily to make
the PSS estimate a bit better. But that is indeed something that can be
adjusted easily later.

BTW, as commented in the cover letter, being able to calculate the avg
without the entire_mapcount could clean this function up quite a bit
(and make it completely atomic), but that will require more work.

-- 
Cheers,

David / dhildenb
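As a usage note on the interface under discussion: /proc/kpagecount is
read by seeking to pfn * 8 and reading one 64-bit value, with the PFN
for a virtual address typically looked up via /proc/pid/pagemap. Below
is a minimal, illustrative reader sketch (not from the thread; it needs
CAP_SYS_ADMIN to see non-zero PFNs, and error handling is omitted):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	long page_size = sysconf(_SC_PAGESIZE);

	/* Touch one anonymous page so it is present and mapped. */
	unsigned char *buf = malloc(page_size);
	memset(buf, 0xaa, page_size);

	/* /proc/self/pagemap: one 64-bit entry per virtual page,
	 * PFN in bits 0-54, "present" in bit 63 (see pagemap.rst). */
	int pagemap = open("/proc/self/pagemap", O_RDONLY);
	uint64_t entry = 0;
	off_t off = ((uintptr_t)buf / page_size) * sizeof(entry);
	pread(pagemap, &entry, sizeof(entry), off);

	if (!(entry >> 63)) {
		fprintf(stderr, "page not present\n");
		return 1;
	}
	uint64_t pfn = entry & ((1ULL << 55) - 1);

	/* /proc/kpagecount: one 64-bit mapcount per PFN, indexed by PFN.
	 * With CONFIG_NO_PAGE_MAPCOUNT this is the per-page average for
	 * large folios (at least 1 if any page is mapped), as described
	 * in the quoted patch. */
	int kpagecount = open("/proc/kpagecount", O_RDONLY);
	uint64_t count = 0;
	pread(kpagecount, &count, sizeof(count), pfn * sizeof(count));

	printf("pfn 0x%llx mapped %llu time(s)\n",
	       (unsigned long long)pfn, (unsigned long long)count);
	return 0;
}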