Message-ID: <55854a94-681e-4142-9160-98b22fa64d61@kernel.org>
Date: Mon, 6 May 2024 14:03:47 +0200
Subject: Re: [PATCH v1] cgroup/rstat: add cgroup_rstat_cpu_lock helpers and tracepoints
From: Jesper Dangaard Brouer <hawk@kernel.org>
To: Shakeel Butt
Cc: Waiman Long, tj@kernel.org, hannes@cmpxchg.org, lizefan.x@bytedance.com,
 cgroups@vger.kernel.org, yosryahmed@google.com, netdev@vger.kernel.org,
 linux-mm@kvack.org, kernel-team@cloudflare.com, Arnaldo Carvalho de Melo,
 Sebastian Andrzej Siewior, Daniel Dao, Ivan Babrou, jr@cloudflare.com
References: <171457225108.4159924.12821205549807669839.stgit@firesoul>
 <30d64e25-561a-41c6-ab95-f0820248e9b6@redhat.com>
 <4a680b80-b296-4466-895a-13239b982c85@kernel.org>
 <203fdb35-f4cf-4754-9709-3c024eecade9@redhat.com>
 <42a6d218-206b-4f87-a8fa-ef42d107fb23@kernel.org>
 <4gdfgo3njmej7a42x6x6x4b6tm267xmrfwedis4mq7f4mypfc7@4egtwzrfqkhp>
In-Reply-To: <4gdfgo3njmej7a42x6x6x4b6tm267xmrfwedis4mq7f4mypfc7@4egtwzrfqkhp>
Content-Type: text/plain; charset=UTF-8; format=flowed
On 03/05/2024 21.18, Shakeel Butt wrote:
> On Fri, May 03, 2024 at 04:00:20PM +0200, Jesper Dangaard Brouer wrote:
>>
> [...]
>>>
>>> I may have mistakenly been thinking the lock hold time refers to just
>>> the cpu_lock. Your reported times here are about the cgroup_rstat_lock.
>>> Right? If so, the numbers make sense to me.
>>>
>>
>> True, my reported numbers here are about the cgroup_rstat_lock.
>> Glad to hear we are more aligned then :-)
>>
>> Given I just got some prod machines online with this patch's
>> cgroup_rstat_cpu_lock tracepoints, I can give you some early results
>> about hold-time for the cgroup_rstat_cpu_lock.
>
> Oh you have already shared the preliminary data.
>
>> From this oneliner bpftrace command:
>>
>> sudo bpftrace -e '
>>   tracepoint:cgroup:cgroup_rstat_cpu_lock_contended {
>>     @start[tid]=nsecs; @cnt[probe]=count()}
>>   tracepoint:cgroup:cgroup_rstat_cpu_locked {
>>     $now=nsecs;
>>     if (args->contended) {
>>       @wait_per_cpu_ns=hist($now-@start[tid]); delete(@start[tid]);}
>>     @cnt[probe]=count(); @locked[tid]=$now}
>>   tracepoint:cgroup:cgroup_rstat_cpu_unlock {
>>     $now=nsecs;
>>     @locked_per_cpu_ns=hist($now-@locked[tid]); delete(@locked[tid]);
>>     @cnt[probe]=count()}
>>   interval:s:1 {time("%H:%M:%S "); print(@wait_per_cpu_ns);
>>     print(@locked_per_cpu_ns); print(@cnt); clear(@cnt);}'
>>
>> Results from one 1 sec period:
>>
>> 13:39:55 @wait_per_cpu_ns:
>> [512, 1K)            3 |                                                    |
>> [1K, 2K)            12 |@                                                   |
>> [2K, 4K)           390 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
>> [4K, 8K)            70 |@@@@@@@@@                                           |
>> [8K, 16K)           24 |@@@                                                 |
>> [16K, 32K)         183 |@@@@@@@@@@@@@@@@@@@@@@@@                            |
>> [32K, 64K)          11 |@                                                   |
>>
>> @locked_per_cpu_ns:
>> [256, 512)       75592 |@                                                   |
>> [512, 1K)      2537357 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
>> [1K, 2K)       528615 |@@@@@@@@@@                                          |
>> [2K, 4K)       168519 |@@@                                                 |
>> [4K, 8K)       162039 |@@@                                                 |
>> [8K, 16K)      100730 |@@                                                  |
>> [16K, 32K)      42276 |                                                    |
>> [32K, 64K)       1423 |                                                    |
>> [64K, 128K)        89 |                                                    |
>>
>> @cnt[tracepoint:cgroup:cgroup_rstat_cpu_lock_contended]: 3    /sec
>> @cnt[tracepoint:cgroup:cgroup_rstat_cpu_unlock]: 3200  /sec
>> @cnt[tracepoint:cgroup:cgroup_rstat_cpu_locked]: 3200  /sec
>>
>> So, we see the "flush-code-path" per-CPU hold time @locked_per_cpu_ns
>> isn't exceeding 128 usec.
>
> Hmm 128 usec is actually unexpectedly high.
> How does the cgroup hierarchy on your system look like?

I didn't design this, so hopefully my co-workers (@Daniel or @Jon) can
help me out here?

My low-level view is that there are 17 top-level directories in
/sys/fs/cgroup/. There are 649 cgroups (counting occurrences of
memory.stat). Two directories contain the major part:
 - /sys/fs/cgroup/system.slice = 379
 - /sys/fs/cgroup/production.slice = 233
   - (production.slice has directories two levels deep)
 - remaining 37

We are open to changing this if you have any advice?
(@Daniel and @Jon are actually working on restructuring this)

> How many cgroups have actual workloads running?

Do you have a command-line trick to determine this?

> Can the network softirqs run on any cpus or smaller set of cpus? I am
> assuming these softirqs are processing packets from any or all cgroups
> and thus have larger cgroup update tree.

Softirq, and specifically NET_RX, is running on half of the cores
(e.g. 64). (I'm looking at restructuring this allocation.)

> I wonder if you comment out MEMCG_SOCK stat update and still see the
> same holding time.

It doesn't look like MEMCG_SOCK is used.

I deduce you are asking:
 - What is the update count for the different types of mod_memcg_state()
   calls?

// Dumped via BTF info
enum memcg_stat_item {
	MEMCG_SWAP = 43,
	MEMCG_SOCK = 44,
	MEMCG_PERCPU_B = 45,
	MEMCG_VMALLOC = 46,
	MEMCG_KMEM = 47,
	MEMCG_ZSWAP_B = 48,
	MEMCG_ZSWAPPED = 49,
	MEMCG_NR_STAT = 50,
};

sudo bpftrace -e '
  kfunc:vmlinux:__mod_memcg_state {@[args->idx]=count()}
  END {printf("\nEND time elapsed: %d sec\n", elapsed / 1000000000);}'
Attaching 2 probes...
^C
END time elapsed: 99 sec

@[45]: 17996
@[46]: 18603
@[43]: 61858
@[47]: 21398919

It seems clear that MEMCG_KMEM = 47 is the main "user":
 - 21398919/99 = 216150 calls per sec

Could someone explain to me what this MEMCG_KMEM is used for?

>> My latency requirements, to avoid RX-queue overflow, with 1024 slots,
>> running at 25 Gbit/s, are 27.6 usec with small packets, and 500 usec
>> (0.5 ms) with MTU size packets. This is very close to my latency
>> requirements.
>>
>> --Jesper
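As an aside for readers following the numbers: the MEMCG_KMEM call rate and
the RX-queue overflow budget quoted above can both be sanity-checked with
quick arithmetic. The sketch below is a back-of-the-envelope check, not from
the thread itself; it assumes standard Ethernet wire overhead (8-byte
preamble plus 12-byte inter-frame gap) on top of 64 B minimum frames and
1518 B MTU-sized frames:

```python
# Sanity-check two figures from the mail above.

# 1) __mod_memcg_state() call rate for MEMCG_KMEM (idx 47):
kmem_calls, elapsed_sec = 21_398_919, 99
rate = kmem_calls // elapsed_sec
print(f"MEMCG_KMEM updates: {rate} calls/sec")  # 216150

# 2) Time until a 1024-slot RX ring overflows at 25 Gbit/s line rate.
# Assumption (mine): each frame carries 20 B of extra wire overhead,
# i.e. 8 B preamble + 12 B inter-frame gap.
LINK_BPS = 25e9
RING_SLOTS = 1024
WIRE_OVERHEAD = 8 + 12  # bytes per frame on the wire

def ring_fill_time_us(frame_bytes: int) -> float:
    wire_bits = (frame_bytes + WIRE_OVERHEAD) * 8
    return RING_SLOTS * wire_bits / LINK_BPS * 1e6

print(f"64B frames:   {ring_fill_time_us(64):.1f} usec")    # ~27.5 usec
print(f"1518B frames: {ring_fill_time_us(1518):.1f} usec")  # ~504 usec
```

The 64 B result lands within a fraction of a microsecond of the 27.6 usec
figure in the mail (the small gap suggests a slightly different per-frame
overhead assumption), and the MTU result matches the quoted ~0.5 ms.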