From mboxrd@z Thu Jan 1 00:00:00 1970
From: Barry Song <21cnbao@gmail.com>
Date: Wed, 10 Jan 2024 18:30:12 +0800
Subject: Re: [RFC PATCH v1] tools/mm: Add thpmaps script to dump THP usage info
To: Ryan Roberts
Cc: John Hubbard, Andrew Morton, Zenghui Yu, Matthew Wilcox,
 David Hildenbrand, Kefeng Wang, Zi Yan, Alistair Popple,
 linux-mm@kvack.org
In-Reply-To: <94ebe62b-5f55-4be9-b464-4105b4692496@arm.com>
References: <20240102153828.1002295-1-ryan.roberts@arm.com>
 <4e7445a0-acc9-487f-999f-a2b6d03d265e@nvidia.com>
 <3bd5e4a3-9f67-4483-9a0e-9abb5eb783cd@arm.com>
 <94ebe62b-5f55-4be9-b464-4105b4692496@arm.com>

On Wed, Jan 10, 2024 at 6:23 PM Ryan Roberts wrote:
>
> On 10/01/2024 09:09, Barry Song wrote:
> > On Wed, Jan 10, 2024 at 4:58 PM Ryan Roberts wrote:
> >>
> >> On 10/01/2024 08:02, Barry Song wrote:
> >>> On Wed, Jan 10, 2024 at 12:16 PM John Hubbard wrote:
> >>>>
> >>>> On 1/9/24 19:51, Barry Song wrote:
> >>>>> On Wed, Jan 10, 2024 at 11:35 AM John Hubbard wrote:
> >>>> ...
> >>>>>> Hi Ryan,
> >>>>>>
> >>>>>> One thing that immediately came up during some recent testing of mTHP
> >>>>>> on arm64: the pid requirement is sometimes a little awkward. I'm running
> >>>>>> tests on a machine at a time for now, inside various containers and
> >>>>>> such, and it would be nice if there were an easy way to get some numbers
> >>>>>> for the mTHPs across the whole machine.
> >>
> >> Just to confirm, you're expecting these "global" stats to be truly global and
> >> not per-container? (asking because you explicitly mentioned being in a
> >> container). If you want per-container, then you can probably just create the
> >> container in a cgroup?
> >>
> >>>>>>
> >>>>>> I'm not sure if that changes anything about thpmaps here. Probably
> >>>>>> this is fine as-is. But I wanted to give some initial reactions from
> >>>>>> just some quick runs: the global state would be convenient.
> >>
> >> Thanks for taking this for a spin! Appreciate the feedback.
> >>
> >>>>>
> >>>>> +1. But this seems to be impossible by scanning pagemap?
> >>>>> So could we add this statistics information in the kernel, just like
> >>>>> /proc/meminfo, or in a separate /proc/mthp_info?
> >>>>>
> >>>>
> >>>> Yes. From my perspective, it looks like the global stats are more useful
> >>>> initially, and the more detailed per-pid or per-cgroup stats are the
> >>>> next level of investigation. So it feels odd to start with the more
> >>>> detailed stats.
> >>>>
> >>>
> >>> Probably because this can be done without modifying the kernel.
> >>
> >> Yes indeed; as John said in an earlier thread, my previous attempts to add stats
> >> directly in the kernel got pushback; DavidH was concerned that we don't really
> >> know exactly how to account mTHPs yet
> >> (whole/partial/aligned/unaligned/per-size/etc) so didn't want to end up adding
> >> the wrong ABI and having to maintain it forever. There has also been some
> >> pushback regarding adding more values to multi-value files in sysfs, so David
> >> was suggesting coming up with a whole new scheme at some point (I know
> >> /proc/meminfo isn't sysfs, but the equivalent files for NUMA nodes and cgroups
> >> do live in sysfs).
> >>
> >> Anyway, this script was my attempt to 1) provide a short term solution to the
> >> "we need some stats" request and 2) provide a context in which to explore what
> >> the right stats are - this script can evolve without the ABI problem.
> >>
> >>> The detailed per-pid or per-cgroup stats are still quite useful for my case,
> >>> in which we set mTHP enabled/disabled and allowed sizes according to vma
> >>> types, e.g. libc_malloc, java heaps, etc.
> >>>
> >>> Different vma types can have different anon_name. So I can use the detailed
> >>> info to find out if specific VMAs have gotten mTHP properly and how many
> >>> they have gotten.
> >>>
> >>>> However, Ryan did clearly say, above, "In future we may wish to
> >>>> introduce stats directly into the kernel (e.g. smaps or similar)". And
> >>>> earlier he ran into some pushback on trying to set up /proc or /sys
> >>>> values because this is still such an early feature.
> >>>>
> >>>> I wonder if we could put the global stats in debugfs for now? That's
> >>>> specifically supposed to be a "we promise *not* to keep this ABI stable"
> >>>> location.
> >>
> >> Now that I think about it, I wonder if we can add a --global mode to the script
> >> (or just infer global when neither --pid nor --cgroup are provided). I think I
> >> should be able to determine all the physical memory ranges from /proc/iomem,
> >> then grab all the info we need from /proc/kpageflags. We should then be able to
> >> process it all in much the same way as for --pid/--cgroup and provide the same
> >> stats, but it will apply globally. What do you think?
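
Just to check I understand the idea - I guess the scan would look roughly
like the untested sketch below (not the real thpmaps code; the KPF_* bit
numbers are from include/uapi/linux/kernel-page-flags.h, and it needs root
since /proc/kpageflags wants CAP_SYS_ADMIN and /proc/iomem shows zeroed
addresses otherwise):

#!/usr/bin/env python3
# Untested sketch of a --global scan: count allocated THPs of each size,
# system-wide, by walking System RAM ranges and reading /proc/kpageflags.
import os
import struct

KPF_COMPOUND_HEAD = 15
KPF_COMPOUND_TAIL = 16
KPF_THP = 22
PAGE_SIZE = os.sysconf('SC_PAGE_SIZE')

def system_ram_pfns():
    # Yield (start_pfn, end_pfn) for every "System RAM" range in /proc/iomem.
    with open('/proc/iomem') as f:
        for line in f:
            if line.strip().endswith('System RAM'):
                span = line.split(':')[0].strip()
                start, end = (int(x, 16) for x in span.split('-'))
                yield start // PAGE_SIZE, (end + 1) // PAGE_SIZE

counts = {}  # folio order -> number of allocated THPs of that order
with open('/proc/kpageflags', 'rb') as kpf:
    for start_pfn, end_pfn in system_ram_pfns():
        # One u64 of flags per pfn; read the whole range in one go for
        # simplicity (a real implementation would chunk this).
        kpf.seek(start_pfn * 8)
        nr = end_pfn - start_pfn
        flags = struct.unpack(f'{nr}Q', kpf.read(nr * 8))
        pfn = 0
        while pfn < len(flags):
            if flags[pfn] & (1 << KPF_THP) and flags[pfn] & (1 << KPF_COMPOUND_HEAD):
                # Folio size = the head page plus the run of tails after it.
                tail = pfn + 1
                while tail < len(flags) and flags[tail] & (1 << KPF_COMPOUND_TAIL):
                    tail += 1
                order = (tail - pfn).bit_length() - 1
                counts[order] = counts.get(order, 0) + 1
                pfn = tail
            else:
                pfn += 1

for order in sorted(counts):
    print(f'order-{order}: {counts[order]} THPs')

Though I see your point below that this only tells us about allocation,
not mapping.
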
>
> Having now thought about this for a few mins (in the shower, if anyone wants the
> complete picture :) ), this won't quite work. This approach doesn't have the
> virtual mapping information, so the best it can do is tell us "how many of each
> size of THP are allocated?" - it doesn't tell us anything about whether they are
> fully or partially mapped or what their alignment is (all necessary if we want
> to know if they are contpte-mapped). So I don't think this approach is going to
> be particularly useful.
>
> And this is also the big problem if we want to gather stats inside the kernel;
> if we want something equivalent to /proc/meminfo's
> AnonHugePages/ShmemPmdMapped/FilePmdMapped, we need to consider not just the
> allocation of the THP but also whether it is mapped. That's easy for
> PMD-mappings, because there is only one entry to consider - when you set it, you
> increment the number of PMD-mapped THPs; when you clear it, you decrement. But
> for PTE-mappings it's harder; you know the size when you are mapping, so it's
> easy to increment, but you can do a partial unmap, so you would need to scan the
> PTEs to figure out if we are unmapping the first page of a previously
> fully-PTE-mapped THP, which is expensive. We would need a cheap mechanism to
> determine "is this folio fully and contiguously mapped in at least one process?".

The approach OPPO uses, which I shared with you before, is to maintain two
mapcounts:
1. an entire mapcount;
2. a subpage mapcount;
3. if both 1 and 2 exist, the folio is DoubleMapped.
So this isn't a problem for us: every time we do a partial unmap, we have an
explicit cont_pte split, which decreases the entire mapcount and increases the
subpage mapcount. But its downside is that we expose this info to mm-core.
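
To make the bookkeeping concrete, here is a toy model (my simplification for
illustration only - not the actual OPPO kernel code):

# Toy model of the two-mapcount idea. "Is this folio fully mapped
# somewhere?" becomes an O(1) counter check instead of a PTE scan.
class Folio:
    def __init__(self, nr_pages):
        self.nr_pages = nr_pages
        self.entire_mapcount = 0   # processes mapping the whole folio (cont_pte)
        self.subpage_mapcount = 0  # individual subpage mappings

    def map_entire(self):
        # A full, contiguous map of the folio in one process.
        self.entire_mapcount += 1

    def partial_unmap(self, nr_still_mapped):
        # A partial unmap forces an explicit cont_pte split: one entire
        # mapping goes away; its surviving pages become subpage mappings.
        self.entire_mapcount -= 1
        self.subpage_mapcount += nr_still_mapped

    @property
    def double_mapped(self):
        # Both counts live: fully mapped somewhere, partially elsewhere.
        return self.entire_mapcount > 0 and self.subpage_mapcount > 0

folio = Folio(nr_pages=16)               # e.g. a 64KiB folio of 4KiB pages
folio.map_entire()                       # process A maps it fully
folio.map_entire()                       # process B maps it fully
folio.partial_unmap(nr_still_mapped=15)  # B unmaps one page -> split
assert folio.entire_mapcount == 1 and folio.double_mapped

So the "fully and contiguously mapped in at least one process?" check is just
entire_mapcount > 0.
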
>
> So depending on what global stats you actually need, the route to getting them
> cheaply may not be easy. (My previous attempt to add stats cheated and didn't
> try to track "fully mapped" vs "partially mapped" - instead it just counted the
> number of pages belonging to a THP (of any size) that were mapped.)
>
> If you need the global mapping state, then the short term way to do this would
> be to provide the root cgroup, then have the script recurse through all child
> cgroups; that would pick up all the processes and iterate through them:
>
> $ thpmaps --cgroup /sys/fs/cgroup --summary ...
>
> This won't quite work with the current version because it doesn't recurse
> through the cgroup children currently, but that would be easy to add.
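
Right, the recursion itself should be easy - an untested sketch, walking the
cgroup2 hierarchy and reading every cgroup.procs (pids_below is a hypothetical
helper, not an existing thpmaps function):

# Untested sketch: collect all pids at or below a cgroup (v2) by walking
# the hierarchy, which is roughly what `thpmaps --cgroup /sys/fs/cgroup`
# would need in order to emulate a global view from the root cgroup.
import os

def pids_below(cgroup='/sys/fs/cgroup'):
    pids = set()
    for dirpath, _dirnames, filenames in os.walk(cgroup):
        if 'cgroup.procs' not in filenames:
            continue
        try:
            with open(os.path.join(dirpath, 'cgroup.procs')) as f:
                pids.update(int(pid) for pid in f.read().split())
        except OSError:
            pass  # the cgroup may have vanished mid-walk
    return sorted(pids)

print(pids_below())
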
>
> >
> > For debug purposes, it should be good. But imagine there is a health
> > monitor which needs
> > to sample the stats of large folios online and periodically; this
> > might be too expensive.
> >
> >>
> >> If we can possibly avoid sysfs/debugfs I would prefer to keep it all in a
> >> script for now.
> >>
> >>>
> >>> +1.
> >>>>
> >>>>
> >>>> thanks,
> >>>> --
> >>>> John Hubbard
> >>>> NVIDIA
> >>>>
> >>>
> >

Thanks
Barry