From: Yosry Ahmed <yosryahmed@google.com>
Date: Mon, 4 Dec 2023 13:37:42 -0800
Subject: Re: [mm-unstable v4 5/5] mm: memcg: restore subtree stats flushing
To: Shakeel Butt
Cc: Andrew Morton, Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song, Ivan Babrou, Tejun Heo, Michal Koutný, Waiman Long, kernel-team@cloudflare.com, Wei Xu, Greg Thelen, Domenico Cerasuolo, linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org
References: <20231129032154.3710765-1-yosryahmed@google.com> <20231129032154.3710765-6-yosryahmed@google.com> <20231202083129.3pmds2cddy765szr@google.com>

On Mon, Dec 4, 2023 at 12:12 PM Yosry Ahmed wrote:
>
> On Sat, Dec 2, 2023 at 12:31 AM Shakeel Butt wrote:
> >
> > On Wed, Nov 29, 2023 at 03:21:53AM +0000, Yosry Ahmed wrote:
> > [...]
> > > +void mem_cgroup_flush_stats(struct mem_cgroup *memcg)
> > >  {
> > > -	if (memcg_should_flush_stats(root_mem_cgroup))
> > > -		do_flush_stats();
> > > +	static DEFINE_MUTEX(memcg_stats_flush_mutex);
> > > +
> > > +	if (mem_cgroup_disabled())
> > > +		return;
> > > +
> > > +	if (!memcg)
> > > +		memcg = root_mem_cgroup;
> > > +
> > > +	if (memcg_should_flush_stats(memcg)) {
> > > +		mutex_lock(&memcg_stats_flush_mutex);
> >
> > What's the point of this mutex now? What is it providing? I understand
> > we can not try_lock here due to targeted flushing. Why not just let the
> > global rstat serialize the flushes? Actually this mutex can cause
> > latency hiccups as the mutex owner can get resched during flush and then
> > no one can flush for a potentially long time.
>
> I was hoping this was clear from the commit message and code comments,
> but apparently I was wrong, sorry. Let me give more context.
>
> In previous versions and/or series, the mutex was only used with
> flushes from userspace to guard in-kernel flushers against high
> contention from userspace. Later on, I kept the mutex for all memcg
> flushers for the following reasons:
>
> (a) Allow waiters to sleep:
> Unlike other flushers, the memcg flushing path can see a lot of
> concurrency. The mutex avoids having a lot of CPUs spinning (e.g.
> concurrent reclaimers) by allowing waiters to sleep.
>
> (b) Check the threshold under lock but before calling cgroup_rstat_flush():
> The calls to cgroup_rstat_flush() are not very cheap even if there's
> nothing to flush, as we still need to iterate all CPUs. If flushers
> contend directly on the rstat lock, overlapping flushes will
> unnecessarily do the percpu iteration once they hold the lock. With
> the mutex, they will check the threshold again once they hold the
> mutex.
>
> (c) Protect non-memcg flushers from contention from memcg flushers.
> This is not as strong of an argument as protecting in-kernel flushers
> from userspace flushers.
>
> There have been discussions before about changing the rstat lock itself
> to be a mutex, which would resolve (a), but there are concerns about
> priority inversions if a low priority task holds the mutex and gets
> preempted, as well as the amount of time the rstat lock holder keeps
> the lock for:
> https://lore.kernel.org/lkml/ZO48h7c9qwQxEPPA@slm.duckdns.org/
>
> I agree about possible hiccups due to the inner lock being dropped
> while the mutex is held. Running a synthetic test with high
> concurrency between reclaimers (in-kernel flushers) and stats readers
> shows no material performance difference with or without the mutex.
> Maybe things cancel out, or don't really matter in practice.
>
> I would prefer to keep the current code as I think (a) and (b) could
> cause problems in the future, and the current form of the code (with
> the mutex) has already seen mileage with production workloads.

Correction: The priority inversion is possible on the memcg side due to
the mutex in this patch.

Also, for point (a), the spinners will eventually sleep once they hold
the lock and hit the first CPU boundary -- because of the lock dropping
and cond_resched(). So eventually, all spinners should be able to
sleep, although it will be a while until they do.
With the mutex, they all sleep from the beginning. Point (b) still holds though. I am slightly inclined to keep the mutex but I can send a small fixlet to remove it if others think otherwise. Shakeel, Wei, any preferences?
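
For anyone skimming the thread, here is a minimal userspace sketch of the
pattern being discussed (pthreads; the names are illustrative, this is not
the kernel code from the patch). Each flusher does a cheap threshold check
outside the lock, then re-checks under the mutex so that overlapping
flushers skip the expensive walk entirely -- that's point (b) above:

/*
 * Illustrative userspace analogue only, not the patch itself: many
 * threads may request a flush, but only one does the expensive work at
 * a time, and waiters re-check the threshold under the mutex so
 * overlapping requests become cheap no-ops.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define FLUSH_THRESHOLD 64

static atomic_long pending_updates;	/* stand-in for memcg stats_updates */
static pthread_mutex_t flush_mutex = PTHREAD_MUTEX_INITIALIZER;

static bool should_flush(void)
{
	return atomic_load(&pending_updates) >= FLUSH_THRESHOLD;
}

static void do_flush(void)
{
	/* The expensive work (the per-CPU walk in the real code). */
	atomic_store(&pending_updates, 0);
}

static void flush_stats(void)
{
	/* Cheap check outside the lock, like memcg_should_flush_stats(). */
	if (!should_flush())
		return;

	pthread_mutex_lock(&flush_mutex);
	/*
	 * Re-check under the mutex: a concurrent flusher may have already
	 * done the work while we waited, so skip the expensive walk.
	 */
	if (should_flush())
		do_flush();
	pthread_mutex_unlock(&flush_mutex);
}

int main(void)
{
	atomic_store(&pending_updates, 100);
	flush_stats();
	printf("pending after flush: %ld\n", atomic_load(&pending_updates));
	return 0;
}

The real code uses memcg_should_flush_stats() and cgroup_rstat_flush() of
course; the sketch only shows why the re-check under the mutex saves the
per-CPU iteration for overlapping flushers.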