From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 13 Jun 2023 12:44:54 -0600
From: Yu Zhao <yuzhao@google.com>
To: Ryan Roberts
Cc: Jonathan Corbet, Andrew Morton, "Matthew Wilcox (Oracle)",
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v1 0/2] Report on physically contiguous memory in smaps
References: <20230613160950.3554675-1-ryan.roberts@arm.com>
In-Reply-To: <20230613160950.3554675-1-ryan.roberts@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

On Tue, Jun 13, 2023 at 05:09:48PM +0100, Ryan Roberts wrote:
> Hi All,
>
> I thought I would try my luck with this pair of patches...

Ack on the idea. Actually I have a script to do just this, but it's
based on pagemap (attaching the script at the end).

> This series adds new entries to /proc/pid/smaps[_rollup] to report on
> physically contiguous runs of memory.
> The first patch reports on the sizes of the runs by binning into
> power-of-2 blocks and reporting how much memory is in which bin. The
> second patch reports on how much of the memory is contpte-mapped in
> the page table (this is a hint that arm64 supports to tell the HW
> that a range of ptes map physically contiguous memory).
>
> With filesystems now supporting large folios in the page cache, this
> provides a useful way to see what sizes are actually getting mapped.
> And with the prospect of large folios for anonymous memory and
> contpte mapping for conformant large folios on the horizon, this
> reporting will become useful to aid application performance
> optimization.
>
> Perhaps I should really be submitting these patches as part of my
> large anon folios and contpte sets (which I plan to post soon), but
> given this touches the user ABI, I thought it was sensible to post it
> early and separately to get feedback.
>
> It would specifically be good to get feedback on:
>
>  - The exact set of new fields depends on the system that it's being
>    run on. Does this cause problems for compat? (Specifically, the
>    bins are determined based on PAGE_SIZE and PMD_SIZE.)
>  - The ContPTEMapped field is effectively arm64-specific. What is
>    the preferred way to handle arch-specific values, if not here?

No strong opinions here.

===

$ cat memory-histogram/mem_hist.py
"""Script that scans VMAs, outputting histograms regarding memory allocations.

Example usage:
  python3 mem_hist.py --omit-file-backed --omit-unfaulted-vmas

For every process on the system, this script scans each VMA, counting the
number of order n allocations for 0 <= n <= MAX_ORDER. An order n
allocation is a region of memory aligned to a PAGE_SIZE * (2 ** n) sized
region, consisting of 2 ** n pages in which every page is present
(according to the data in /proc/<pid>/pagemap).

VMA information as in /proc/<pid>/maps is output for all scanned VMAs along
with a histogram of allocation orders.
For example, this histogram states that there are 12 order 0 allocations,
4 order 1 allocations, 5 order 2 allocations, and so on:

  [12, 4, 5, 9, 5, 10, 6, 2, 2, 4, 3, 4]

In addition to per-VMA histograms, per-process histograms are printed.
Per-process histograms are the sum of the histograms of all VMAs contained
within the process, allowing for an overview of the memory allocation
patterns of the process as a whole.

Processes, and VMAs under each process, are printed sorted in
reverse-lexicographic order of histograms. That is, VMAs containing more
high order allocations will be printed after ones containing more low
order allocations. The output can thus be easily visually scanned to find
VMAs in which hugepage use shows the most potential benefit.

To reduce output clutter, the option --omit-file-backed exists to omit
VMAs that are file backed (which, outside of tmpfs, don't support
transparent hugepages on Linux). Additionally, the option
--omit-unfaulted-vmas exists to omit VMAs containing zero resident pages.
"""

import argparse
import functools
import re
import struct
import subprocess
import sys

ALL_PIDS_CMD = "ps --no-headers -e | awk '{ print $1 }'"

# Maximum order the script creates histograms up to. This is by default 9
# since the usual hugepage size on x86 is 2MB, which is 2**9 4KB pages.
MAX_ORDER = 9
PAGE_SIZE = 2**12
BLANK_HIST = [0] * (MAX_ORDER + 1)


class Vma:
  """Represents a virtual memory area.

  Attributes:
    proc: Process object in which this VMA is contained
    start_vaddr: Start virtual address of VMA
    end_vaddr: End virtual address of VMA
    perms: Permission string of VMA as in /proc/<pid>/maps (eg. rw-p)
    mapped_file: Path to file backing this VMA from /proc/<pid>/maps, empty
      string if not file backed. Note there are some cases in Linux where
      this may be nonempty and the VMA not file backed (eg. memfds)
    hist: This VMA's histogram as a list of integers
  """

  def __init__(self, proc, start_vaddr, end_vaddr, perms, mapped_file):
    self.proc = proc
    self.start_vaddr = start_vaddr
    self.end_vaddr = end_vaddr
    self.perms = perms
    self.mapped_file = mapped_file

  def is_file_backed(self):
    """Returns true if this VMA is file backed, false otherwise."""
    # The output printed for memfds (eg. /memfd:crosvm) also happens to be
    # a valid file path on *nix, so special case them.
    return (bool(re.match("(?:/[^/]+)+", self.mapped_file)) and
            not bool(re.match("^/memfd:", self.mapped_file)))

  @staticmethod
  def bitmask(hi, lo):
    """Returns a bitmask with bits lo through hi-1 set."""
    return ((1 << (hi - lo)) - 1) << lo

  @property
  @functools.lru_cache(maxsize=50000)
  def hist(self):
    """Returns this VMA's histogram as a list."""
    hist = BLANK_HIST[:]
    pagemap_file = safe_open_procfile(self.proc.pid, "pagemap", "rb")
    if not pagemap_file:
      err_print(
          "Cannot open /proc/{0}/pagemap, not generating histogram".format(
              self.proc.pid))
      return hist
    # Page index of start/end VMA virtual addresses
    vma_start_page_i = self.start_vaddr // PAGE_SIZE
    vma_end_page_i = self.end_vaddr // PAGE_SIZE
    for order in range(0, MAX_ORDER + 1):
      # If there are fewer than two pages of the previous order, there can
      # be no pages of a higher order, so just break out to save time.
      if order > 0 and hist[order - 1] < 2:
        break
      # First and last pages aligned to 2**order pages in this VMA
      first_aligned_page = (vma_start_page_i & self.bitmask(64, order)) + 2**order
      last_aligned_page = vma_end_page_i & self.bitmask(64, order)
      # Iterate over all order-sized and order-aligned chunks in this VMA
      for start_page_i in range(first_aligned_page, last_aligned_page,
                                2**order):
        if self._is_region_present(pagemap_file, start_page_i,
                                   start_page_i + 2**order):
          hist[order] += 1
          # Subtract two lower order regions so that we don't double-count
          # an order n region as two order n-1 regions as well.
          if order > 0:
            hist[order - 1] -= 2
    pagemap_file.close()
    return hist

  def _is_region_present(self, pagemap_file, start_page_i, end_page_i):
    """Returns True if all pages in the given range are resident.

    Args:
      pagemap_file: Opened /proc/<pid>/pagemap file for this process
      start_page_i: Start page index for range
      end_page_i: End page index for range

    Returns:
      True if all pages from page index start_page_i to end_page_i are
      present according to the pagemap file, False otherwise.
    """
    pagemap_file.seek(start_page_i * 8)
    for _ in range(start_page_i, end_page_i):
      # /proc/<pid>/pagemap contains an 8 byte value for every page
      page_info, = struct.unpack("Q", pagemap_file.read(8))
      # Bit 63 is set if the page is present
      if not page_info & (1 << 63):
        return False
    return True

  def __str__(self):
    return ("{start:016x}-{end:016x} {size:<8} {perms:<4} {hist:<50} "
            "{mapped_file:<40}").format(
                start=self.start_vaddr,
                end=self.end_vaddr,
                size="%dk" % ((self.end_vaddr - self.start_vaddr) // 1024),
                perms=self.perms,
                hist=str(self.hist),
                mapped_file=str(self.mapped_file))


class Process:
  """Represents a running process.

  Attributes:
    vmas: List of Vma objects representing this process's VMAs
    pid: Process PID
    name: Name of process (read from /proc/<pid>/status)
  """

  _MAPS_LINE_REGEX = ("([0-9a-f]+)-([0-9a-f]+) ([r-][w-][x-][ps-]) "
                      "[0-9a-f]+ [0-9a-f]+:[0-9a-f]+ [0-9]+[ ]*(.*)")

  def __init__(self, pid):
    self.vmas = []
    self.pid = pid
    self.name = None
    self._read_name()
    self._read_vma_info()

  def _read_name(self):
    """Reads this Process's name from /proc/<pid>/status."""
    get_name_sp = subprocess.Popen(
        "grep Name: /proc/%d/status | awk '{ print $2 }'" % self.pid,
        shell=True,
        stdout=subprocess.PIPE)
    self.name = get_name_sp.communicate()[0].decode("ascii").strip()

  def _read_vma_info(self):
    """Populates this Process's VMA list."""
    f = safe_open_procfile(self.pid, "maps", "r")
    if not f:
      err_print("Could not read maps for process {0}".format(self.pid))
      return
    for line in f:
      match = re.match(Process._MAPS_LINE_REGEX, line)
      start_vaddr = int(match.group(1), 16)
      end_vaddr = int(match.group(2), 16)
      perms = match.group(3)
      mapped_file = match.group(4) if match.lastindex == 4 else None
      self.vmas.append(Vma(self, start_vaddr, end_vaddr, perms, mapped_file))
    f.close()

  @property
  @functools.lru_cache(maxsize=50000)
  def hist(self):
    """The process-level memory allocation histogram.

    This is the sum of all VMA histograms for every VMA in this process.
    For example, if a process had two VMAs with the following histograms:

      [1, 2, 3, 0, 0, 0, 0, 0, 0, 0, 0]
      [0, 1, 2, 3, 0, 0, 0, 0, 0, 0, 0]

    This would return:

      [1, 3, 5, 3, 0, 0, 0, 0, 0, 0, 0]
    """
    return [sum(x) for x in zip(*[vma.hist for vma in self.vmas])]

  def __str__(self):
    return "process {pid:<18} {name:<25} {hist:<50}".format(
        pid=self.pid, name=str(self.name), hist=str(self.hist))


def safe_open_procfile(pid, file_name, mode):
  """Safely open the given file under /proc/<pid>.

  This catches a variety of common errors bound to happen when using this
  script (eg. permission denied, process already exited).

  Args:
    pid: Pid of process (used to construct /proc/<pid>/)
    file_name: File directly under /proc/<pid>/ to open
    mode: Mode to pass to open (eg. "w", "r")

  Returns:
    File object corresponding to the file requested, or None if there was
    an error
  """
  full_path = "/proc/{0}/{1}".format(pid, file_name)
  try:
    return open(full_path, mode)
  except PermissionError:
    err_print("Not accessing {0} (permission denied)".format(full_path))
  except FileNotFoundError:
    err_print(
        "Not opening {0} (does not exist, process {1} likely exited)".format(
            full_path, pid))


def err_print(*args, **kwargs):
  print(*args, file=sys.stderr, **kwargs)


def print_hists(args):
  """Prints all process and VMA histograms as per the module documentation."""
  pid_list_sp = subprocess.Popen(
      ALL_PIDS_CMD, shell=True, stdout=subprocess.PIPE)
  pid_list = map(int, pid_list_sp.communicate()[0].splitlines())
  procs = []
  for pid in pid_list:
    procs.append(Process(pid))
  for proc in sorted(procs, key=lambda p: p.hist[::-1]):
    # Don't print info on kernel threads or processes we couldn't collect
    # info on due to insufficient permissions.
    if not proc.vmas:
      continue
    print(proc)
    for vma in sorted(proc.vmas, key=lambda v: v.hist[::-1]):
      if args.no_unfaulted_vmas and vma.hist == BLANK_HIST:
        continue
      elif args.omit_file_backed and vma.is_file_backed():
        continue
      print(" ", vma)


if __name__ == "__main__":
  parser = argparse.ArgumentParser(
      description=("Create per-process and per-VMA "
                   "histograms of contiguous virtual "
                   "memory allocations"))
  parser.add_argument(
      "--omit-unfaulted-vmas",
      dest="no_unfaulted_vmas",
      action="store_true",
      help="Omit VMAs containing 0 present pages from output")
  parser.add_argument(
      "--omit-file-backed",
      dest="omit_file_backed",
      action="store_true",
      help="Omit VMAs corresponding to mmaped files")
  print_hists(parser.parse_args())
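
P.S. The order-counting loop in Vma.hist is the subtle part: it counts
aligned, fully-present 2**n-page runs and then subtracts the two order
n-1 halves of each so nothing is double-counted. A standalone sketch of
that idea on a synthetic presence bitmap (count_orders is a hypothetical
helper, not part of the script above, and it scans every order rather
than using the script's early-exit and pagemap plumbing):

```python
def count_orders(present, max_order=3):
    """Histogram of aligned, fully-present 2**n-page runs, largest run wins.

    present: list of booleans, one per page; page 0 is assumed aligned.
    Returns hist where hist[n] counts order-n allocations.
    """
    hist = [0] * (max_order + 1)
    for order in range(max_order + 1):
        step = 2 ** order
        # Walk every aligned 2**order-page chunk of the bitmap
        for start in range(0, len(present) - step + 1, step):
            if all(present[start:start + step]):
                hist[order] += 1
                if order > 0:
                    # The two order n-1 halves were already counted; undo that
                    hist[order - 1] -= 2
    return hist

# 4 present pages at an aligned boundary collapse into one order-2 run
print(count_orders([True] * 4 + [False] * 4))  # -> [0, 0, 1, 0]
```

Same invariant as the script: summing hist[n] * 2**n over all orders
gives back the number of present pages.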