References: <20230323040037.2389095-1-yosryahmed@google.com> <20230323040037.2389095-5-yosryahmed@google.com> <20230323155613.GC739026@cmpxchg.org> <20230323172732.GE739026@cmpxchg.org>
From: Shakeel Butt
Date: Thu, 23 Mar 2023 12:35:20 -0700
Subject: Re: [RFC PATCH 4/7] memcg: sleep during flushing stats in safe contexts
To: Yosry Ahmed
Cc: Johannes Weiner, Tejun Heo, Josef Bacik, Jens Axboe, Zefan Li,
    Michal Hocko, Roman Gushchin, Muchun Song, Andrew Morton, Vasily Averin,
    cgroups@vger.kernel.org, linux-block@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, bpf@vger.kernel.org

On Thu, Mar 23, 2023 at 11:08 AM Yosry Ahmed wrote:
>
> On Thu, Mar 23, 2023 at 10:27 AM Johannes Weiner wrote:
> >
> > On Thu, Mar 23, 2023 at 09:01:12AM -0700, Yosry Ahmed wrote:
> > > On Thu, Mar 23, 2023 at 8:56 AM Johannes Weiner wrote:
> > > >
> > > > On Thu, Mar 23, 2023 at 04:00:34AM +0000, Yosry Ahmed wrote:
> > > > > @@ -644,26 +644,26 @@ static void __mem_cgroup_flush_stats(void)
> > > > >                 return;
> > > > >
> > > > >         flush_next_time = jiffies_64 + 2*FLUSH_TIME;
> > > > > -       cgroup_rstat_flush(root_mem_cgroup->css.cgroup, false);
> > > > > +       cgroup_rstat_flush(root_mem_cgroup->css.cgroup, may_sleep);
> > > >
> > > > How is it safe to call this with may_sleep=true when it's holding the
> > > > stats_flush_lock?
> > >
> > > stats_flush_lock is always called with trylock, it is only used today
> > > so that we can skip flushing if another cpu is already doing a flush
> > > (which is not 100% correct as they may have not finished flushing yet,
> > > but that's orthogonal here). So I think it should be safe to sleep as
> > > no one can be blocked waiting for this spinlock.
> >
> > I see. It still cannot sleep while the lock is held, though, because
> > preemption is disabled. Make sure you have all lock debugging on while
> > testing this.
>
> Thanks for pointing this out, will do.
>
> >
> > > Perhaps it would be better semantically to replace the spinlock with
> > > an atomic test and set, instead of having a lock that can only be used
> > > with trylock?
> >
> > It could be helpful to clarify what stats_flush_lock is protecting
> > first. Keep in mind that locks should protect data, not code paths.
> >
> > Right now it's doing multiple things:
> >
> > 1. It protects updates to stats_flush_threshold
> > 2. It protects updates to flush_next_time
> > 3. It serializes calls to cgroup_rstat_flush() based on those ratelimits
> >
> > However,
> >
> > 1. stats_flush_threshold is already an atomic
> >
> > 2. flush_next_time is not atomic. The writer is locked, but the reader
> >    is lockless. If the reader races with a flush, you could see this:
> >
> >                          if (time_after(jiffies, flush_next_time))
> >         spin_trylock()
> >         flush_next_time = now + delay
> >         flush()
> >         spin_unlock()
> >                          spin_trylock()
> >                          flush_next_time = now + delay
> >                          flush()
> >                          spin_unlock()
> >
> > which means we already can get flushes at a higher frequency than
> > FLUSH_TIME during races. But it isn't really a problem.
> >
> > The reader could also see garbled partial updates, so it needs at
> > least READ_ONCE and WRITE_ONCE protection.
> >
> > 3. Serializing cgroup_rstat_flush() calls against the ratelimit
> >    factors is currently broken because of the race in 2. But the race
> >    is actually harmless, all we might get is the occasional earlier
> >    flush. If there is no delta, the flush won't do much. And if there
> >    is, the flush is justified.
> >
> > In summary, it seems to me the lock can be ditched altogether. All the
> > code needs is READ_ONCE/WRITE_ONCE around flush_next_time.
>
> Thanks a lot for this analysis. I agree that the lock can be removed
> with proper READ_ONCE/WRITE_ONCE, but I think there is another purpose
> of the lock that we are missing here.
>
> I think one other purpose of the lock is avoiding a thundering herd
> problem on cgroup_rstat_lock, particularly from reclaim context, as
> mentioned by the log of commit aa48e47e3906 ("memcg: infrastructure
> to flush memcg stats").
>
> While testing, I did notice that removing this lock indeed causes a
> thundering herd problem if we have a lot of concurrent reclaimers. The
> trylock makes sure we abort immediately if someone else is flushing --
> which is not ideal because that flusher might have just started, and
> we may end up reading stale data anyway.
>
> This is why I suggested replacing the lock by an atomic, and doing
> something like this if we want to maintain the current behavior:
>
> static void __mem_cgroup_flush_stats(void)
> {
>     ...
>     if (atomic_xchg(&ongoing_flush, 1))
>         return;
>     ...
>     atomic_set(&ongoing_flush, 0);
> }
>
> Alternatively, if we want to change the behavior and wait for the
> concurrent flusher to finish flushing, we can maybe spin until
> ongoing_flush goes back to 0 and then return:
>
> static void __mem_cgroup_flush_stats(void)
> {
>     ...
>     if (atomic_xchg(&ongoing_flush, 1)) {
>         /* wait until the ongoing flusher finishes to get updated stats */
>         while (atomic_read(&ongoing_flush)) {};
>         return;
>     }
>     /* flush the stats ourselves */
>     ...
>     atomic_set(&ongoing_flush, 0);
> }
>
> WDYT?

I would go with your first approach, i.e. no spinning.
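
For reference, a rough sketch of what that first (no-spinning) approach
could look like, folding in the READ_ONCE/WRITE_ONCE suggestion for
flush_next_time. This is illustrative only, not the actual patch: the
atomic_t name ongoing_flush follows the pseudocode above, and the
flush_if_due() helper is a made-up name for the ratelimited call site.

/* Illustrative sketch, not the real memcontrol.c code. */
static atomic_t ongoing_flush = ATOMIC_INIT(0);
static u64 flush_next_time;

static void __mem_cgroup_flush_stats(bool may_sleep)
{
	/*
	 * Non-blocking gate: if another CPU is already flushing, bail
	 * out instead of waiting, mirroring the old trylock behavior
	 * but without holding a spinlock across a flush that may sleep.
	 */
	if (atomic_xchg(&ongoing_flush, 1))
		return;

	/* Lockless readers exist, so pair this with READ_ONCE below. */
	WRITE_ONCE(flush_next_time, jiffies_64 + 2 * FLUSH_TIME);
	cgroup_rstat_flush(root_mem_cgroup->css.cgroup, may_sleep);

	atomic_set(&ongoing_flush, 0);
}

/* Hypothetical ratelimited call site. */
static void flush_if_due(bool may_sleep)
{
	if (time_after64(get_jiffies_64(), READ_ONCE(flush_next_time)))
		__mem_cgroup_flush_stats(may_sleep);
}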