From: Yosry Ahmed <yosryahmed@google.com>
Date: Fri, 12 Apr 2024 12:51:09 -0700
Subject: Re: Advice on cgroup rstat lock
In-Reply-To: <75d837cc-4d33-44f6-bb0c-7558f0488d4e@kernel.org>
To: Jesper Dangaard Brouer
Cc: Waiman Long, Johannes Weiner, Tejun Heo, Jesper Dangaard Brouer,
Miller" , Sebastian Andrzej Siewior , Shakeel Butt , Arnaldo Carvalho de Melo , Daniel Bristot de Oliveira , kernel-team , cgroups@vger.kernel.org, Linux-MM , Netdev , bpf , LKML , Ivan Babrou Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable X-Rspam-User: X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: 17A4EC0007 X-Stat-Signature: tjwadboi8itcu7sg8nedtutefks3adas X-HE-Tag: 1712951509-776505 X-HE-Meta: U2FsdGVkX18N8z3RMNZohEy2rdFA8AUAERrKah/dxixW4P+FQxwxuw9IKK7ngFYzRDfXyuiS743B+C7+cFuZQHQpWYJ9Lqrpb+mNsLXXo6eyS9TByuWGUp3mGoA9+s62KdjAOwdpEJHUlJddn7QwpMIOayl4wftW0J2tUdU20/xQoEjmIUH2xSPkc0xvYFMXALG2veGvZnvvuJ/PWViJrQvC+wH9KgZYL6eS0NkmagS7eTkeKeW6zccYn1qjzRiXTe/eaWVj+xZDaR/5yGl+WNiJLAa42MtLm0fxSiu8Rgv7X8uGJ7oayj0RKKGgOiD/V6du653gIEqdg0722keQtSw/K0pIX5V7o6LmDE5UkblA1cFjtFl6pbbpiAJ75mPYxQeI5XQkBEBPxdMr0GB5B7KvHHSVUC41WOLefdATE9483EyQp+D9YUjpBuDRA0vQFUJxpdqWSlSKJsMJcFn/xneyU6/mYD9oAPdoZyRGkz9LJXTJBT85j/ICe4PT8q0/Sb8rPu5paFa21jvay58ZtS9oA+i/2doQUSQqWnFDQPiLxkzNQyXjjpJ3tZGdU5RGJfDOTgQNWTbF1ZGjjBhgCAzzgfvn45f+M3PxU/Skw6UU5uh9C/LvjOCnYv85/xoC826ZRrsqfBJ0UKuz62qyuRuQMaQC2WRlVmD73NlvRpxYNeNUpUS68evtg+x6SAKS6TNjgITALUqlA9QhiuD7ltRX3jz3d50jnADHAiNGMlqWtu2HOZ8RWgVecs5W1b6kqI7N+EqzeDHweLqPOACpmnBlOKYiFefEMWzBVwdjmr/Mx1N8xQZgM/eTMUc1OaJrIKbboNFFA6hM7JpPg/vWSpiWnCaZ0ENS31WcvtNpd89Whz6rhYAe9lFDcPWuCpWNTRUgoJld2ykwWAA/U0P0rJD7580Y7o/41DyhlrjfvCrmJbY82BGUaMsQP0QTR5uAn7Fhfe3c6dtm/Q/fAlT acr5+YHe wTPqtlq/lCBxhSLObCDhp4ozRWfglhw7gTpqEifH/lCBEhjdgpihNznEAS2Wc/02ABZwrr+FeXAI3gCzBDcZm471JHLeE3Ug9H7XSr7vUsj8N+OJK79OQGUl/MDkF8TvDbkrR90AqJ4st21cv7ZIH9GyMDT94+4QlYeTWj+2k2mOpGx+bJ7XZFexkhOj5bI+YR4GW2qjpOxsd05hN5eWc3u4Kq/fd3PERz3Pvf9G811hC//x1+I1ipor7KdS2oxpfTxYDpnmBdRtjToosvs6+pFoQ9jTIzjkvf1cSS+5O3dp+9hkdpYqp3PaW5+zniMw9tJsajiCiPafQ9/w8mBodyYRzyGpgOKhvhr+UPTUra/1ONf5MgEVjL5iqPn5baupfuR06bNojf4BBJfd4rnDz53FDfgyDbIwMG/yWaTt4x689r9sLTuaovR8n8upeNCv1ANTitoU0HbsIHzyJcNPcu6b2Pebuhozq66Z0uQxQKebfT9XJy+ESBdjVWNwn80WgrHuP8hYpPmss4QoeRJRv/cH4+RD52/XNm45i X-Bogosity: Ham, tests=bogofilter, spamicity=0.001299, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: On Fri, Apr 12, 2024 at 12:26=E2=80=AFPM Jesper Dangaard Brouer wrote: > > > > On 11/04/2024 19.22, Yosry Ahmed wrote: > > [..] > >>>>>> > >>>>>> How far can we go... could cgroup_rstat_lock be converted to a mut= ex? > >> >>> > >>>>> The cgroup_rstat_lock was originally a mutex. It was converted to a > >>>>> spinlock in commit 0fa294fb1985 ("group: Replace cgroup_rstat_mutex= with > >>>>> a spinlock"). Irq was disabled to enable calling from atomic contex= t. > >>>>> Since commit 0a2dc6ac3329 ("cgroup: remove > >>>>> cgroup_rstat_flush_atomic()"), the rstat API hadn't been called fro= m > >>>>> atomic context anymore. Theoretically, we could change it back to a > >>>>> mutex or not disabling interrupt. That will require that the API ca= nnot > >>>>> be called from atomic context going forward. > >> >>> > >>>> I think we should avoid flushing from atomic contexts going forward > >>>> anyway tbh. It's just too much work to do with IRQs disabled, and we > >>>> observed hard lockups before in worst case scenarios. > >>>> > >> > >> Appreciate the historic commits as documentation for how the code > >> evolved. Sounds like we agree that the IRQ-disable can be lifted, > >> at-least between the three of us. > > > > It can be lifted, but whether it should be or not is a different > > story. 
> > I tried keeping it as a spinlock without disabling IRQs before
> > and Tejun pointed out possible problems, see below.
> >
>
> IMHO it *MUST* be lifted, as disabling IRQs here is hurting other parts
> of the system and actual production systems.
>
> The "offending" IRQ-spin_lock commit (0fa294fb1985) is from 2018, and
> GitHub noticed in 2019 (via blog[1]), and at Red Hat I backported[2]
> patches which (I now understand) only mitigate the issues. Our prod
> systems are on 6.1 and 6.6, where we still clearly see the issue
> occurring. Also Daniel's "rtla timerlat" tool for catching system
> latency issues has "cgroup_rstat_flush_locked" as the poster child [3][4].

We have been bitten by the IRQ-spinlock before, so I cannot disagree,
although for us removing atomic flushes and allowing the lock to be
dropped between CPU flushes seems to be good enough (for now).

>
> [1] https://github.blog/2019-11-21-debugging-network-stalls-on-kubernetes/
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1795049
> [3] https://bristot.me/linux-scheduling-latency-debug-and-analysis/
> [4] Documentation/tools/rtla/rtla-timerlat-top.rst
>
> >>
> >>>> I think one problem that was discussed before is that flushing is
> >>>> exercised from multiple contexts and could have very high concurrency
> >>>> (e.g. from reclaim when the system is under memory pressure). With a
> >>>> mutex, the flusher could sleep with the mutex held and block other
> >>>> threads for a while.
> >>>>
> >>
> >> Fair point, so in a first iteration we keep the spin_lock but don't do
> >> the IRQ disable.
> >
> > I tried doing that before, and Tejun had some objections:
> > https://lore.kernel.org/lkml/ZBz%2FV5a7%2F6PZeM7S@slm.duckdns.org/
> >
> > My read of that thread is that Tejun would prefer we look into
> > converting cgroup_rstat_lock into a mutex again, or more aggressively
> > drop the lock on CPU boundaries. Perhaps we can unconditionally drop
> > the lock on each CPU boundary, but I am worried that contending the
> > lock too often may be an issue, which is why I suggested dropping the
> > lock if there are pending IRQs instead -- but I am not sure how to do
> > that :)
>
> Like Tejun, I share the concern that keeping this a spinlock can
> increase the chance of several CPUs contending on this lock (which is
> also a production issue we see). This is why I suggested to "exit" if
> (1) we see the lock has been taken by somebody else, or if (2) stats
> were flushed recently.

When you say "exit", do you mean abort the whole thing, or just don't
spin for the lock but wait for the ongoing flush?

> For (2), memcg has a mem_cgroup_flush_stats_ratelimited() system
> combined with memcg_vmstats_needs_flush(), which limits the pressure on
> the global lock (cgroup_rstat_lock).
> *BUT* other users of cgroup_rstat_flush(), like reading io.stat
> (blk-cgroup.c) and cpu.stat, don't have such a system to limit pressure
> on the global lock. Furthermore, userspace can easily trigger this by
> reading those stat files. And normal userspace stats tools (like
> cadvisor, nomad, systemd) spawn threads reading io.stat, cpu.stat and
> memory.stat, likely without realizing that on the kernel side they
> share the same global lock...
>
> I'm working on a code solution/proposal for "ratelimiting" global lock
> access when reading io.stat and cpu.stat.

I personally don't like mem_cgroup_flush_stats_ratelimited() very much,
because it is time-based (unlike memcg_vmstats_needs_flush()), and a lot
of changes can happen in a very short amount of time.
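To illustrate the difference, the two checks are roughly of the
following shape. I am paraphrasing from memory, so names and details
below are approximate rather than the exact mm/memcontrol.c code:

/*
 * Rough sketch, not the exact mm/memcontrol.c code; struct and
 * constant names are approximations.
 */

/* Magnitude-based: flush only once enough updates have accumulated. */
static bool memcg_vmstats_needs_flush(struct memcg_vmstats *vmstats)
{
	return atomic64_read(&vmstats->stats_updates) >
	       MEMCG_CHARGE_BATCH * num_online_cpus();
}

/* Time-based: flush only if the periodic flusher is sufficiently late. */
void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg)
{
	/* flush_last_time/FLUSH_TIME track the periodic async flusher. */
	if (time_after64(get_jiffies_64(),
			 READ_ONCE(flush_last_time) + 2UL * FLUSH_TIME))
		mem_cgroup_flush_stats(memcg);
}

The time-based check says nothing about how much actually changed since
the last flush, which is exactly what bothers me about it.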
However, it seems like for some workloads it's a necessary evil :/

I briefly looked into a global scheme similar to
memcg_vmstats_needs_flush() in core cgroups code, but I gave up quickly.
Different subsystems have different incomparable stats, so we cannot
have a simple magnitude of pending updates on a cgroup level that
represents all subsystems fairly.

I tried to have per-subsystem callbacks to update the pending stats and
check if flushing is required -- but it got complicated quickly and
performance was bad.

At some point, having different rstat trees for different subsystems was
brought up. I never looked into actually implementing it, but I suppose
if we do that we can have a generic scheme similar to
memcg_vmstats_needs_flush() that can be customized by each subsystem in
a clean performant way? I am not sure.

[..]
> >>
> >>>> I vaguely recall experimenting locally with changing that lock into a
> >>>> mutex and not liking the results, but I can't remember much more. I
> >>>> could be misremembering though.
> >>>>
> >>>> Currently, the lock is dropped in cgroup_rstat_flush_locked() between
> >>>> CPU iterations if rescheduling is needed or the lock is being
> >>>> contended (i.e. spin_needbreak() returns true). I had always wondered
> >>>> if it's possible to introduce a similar primitive for IRQs? We could
> >>>> also drop the lock (and re-enable IRQs) if IRQs are pending then.
> >>>
> >>> I am not sure if there is a way to check if a hardirq is pending, but
> >>> we do have a local_softirq_pending() helper.
> >>
> >> The local_softirq_pending() might work well for me, as this is our prod
> >> problem: CPU-local pending softirqs are getting starved.
> >
> > If my understanding is correct, softirqs are usually scheduled by
> > IRQs, which means that local_softirq_pending() may return false if
> > there are pending IRQs (that will schedule softirqs). Is this correct?
>
> Yes, networking hard IRQ will raise softirq, but software often also
> raises softirqs.
> I see where you are going with this... the cgroup_rstat_flush_locked()
> loop "play nice" check happens with the IRQ-disabled lock held, so you
> speculate that the IRQ handler will not be able to raise the softirq,
> thus local_softirq_pending() will not work inside the IRQ lock.

Exactly.

I wonder if it would be okay to just unconditionally drop the lock at
each CPU boundary. Would be interesting to experiment with this.

One disadvantage of the mutex in this case (imo) is that outside of the
percpu spinlock critical section, we don't really need to be holding the
global lock/mutex. So sleeping while holding it is not needed and only
introduces problems. Dropping the spinlock at each boundary seems like a
way to circumvent that.

If the problems you are observing are mainly on CPUs that are holding
the lock and flushing, I suspect this should help greatly. If the
problems are mainly on CPUs spinning for the lock, I suspect it will
still help redistribute the lock (and IRQ disabling) more often, but not
as much.

> >
> >> In production another problematic (but rarely occurring) issue is when
> >> several CPUs contend on this lock. Yosry's recent work/patches have
> >> already reduced the chances of this happening (thanks), BUT it still
> >> can and does happen.
> >> A simple solution to this would be to do a spin_trylock() in
> >> cgroup_rstat_flush(), and exit if we cannot get the lock, because we
> >> know someone else will do the work.
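Just to make sure I read the suggestion right, I am picturing something
along these lines -- a hypothetical, untested sketch (the function name
is made up), not your actual code:

/*
 * Hypothetical sketch of the trylock idea as I understand it: skip the
 * flush entirely when someone else already holds cgroup_rstat_lock,
 * assuming they will do the flushing for us.
 */
static void cgroup_rstat_flush_trylock_sketch(struct cgroup *cgrp)
{
	if (!spin_trylock_irq(&cgroup_rstat_lock))
		return;	/* somebody else is flushing */

	cgroup_rstat_flush_locked(cgrp);
	spin_unlock_irq(&cgroup_rstat_lock);
}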
> >
> > I am not sure I understand what you mean specifically with the checks
> > below, but I generally don't like this (as you predicted :) ).
> >
> > On the memcg side, we used to have similar logic when we used to
> > always flush the entire tree. This led to flushing being
> > nondeterministic. You would occasionally get stale stats because of
> > the contention, which resulted in some inconsistencies (e.g.
> > performing proactive reclaim successfully, then reading the stats
> > that do not reflect that).
> >
> > Now that we dropped the logic to always flush the entire tree, it is
> > even more difficult because concurrent flushes could be in completely
> > irrelevant subtrees.
> >
> > If we were to introduce some smart logic to figure out that the
> > subtree we are trying to flush is already being flushed, I think we
> > would need to wait for that ongoing flush to complete instead of just
> > returning (e.g. using completions). But I think such implementations
> > to find overlapping flushes and wait for them may be too complicated.
>
> We will see if you hate my current code approach ;-)

Just to be clear, if the spinlock were to be converted to a mutex, or to
be dropped at each CPU boundary, do you think such ratelimiting would
still be needed to mitigate lock contention -- even if the IRQ latency
problem is fixed?
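For concreteness, the "unconditionally drop the lock at each CPU
boundary" experiment I keep referring to would look roughly like the
sketch below. This is hand-wavy and untested, with a made-up function
name; the real cgroup_rstat_flush_locked() does the actual per-CPU
flushing where the comment is:

/*
 * Hand-wavy sketch (not the real code): release the global lock and
 * re-enable IRQs after every CPU, instead of only when need_resched()
 * or spin_needbreak() indicate pressure.
 */
static void cgroup_rstat_flush_locked_sketch(struct cgroup *cgrp)
{
	int cpu;

	lockdep_assert_held(&cgroup_rstat_lock);

	for_each_possible_cpu(cpu) {
		/* ... flush cgrp's updated subtree for this CPU ... */

		/* Always give waiters and IRQs a chance between CPUs. */
		spin_unlock_irq(&cgroup_rstat_lock);
		if (!cond_resched())
			cpu_relax();
		spin_lock_irq(&cgroup_rstat_lock);
	}
}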