Date: Thu, 4 Feb 2021 11:15:06 -0500
From: Johannes Weiner
To: Michal Hocko
Cc: Andrew Morton, Tejun Heo, Roman Gushchin, linux-mm@kvack.org,
	cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH 6/7] mm: memcontrol: switch to rstat
References: <20210202184746.119084-1-hannes@cmpxchg.org>
	<20210202184746.119084-7-hannes@cmpxchg.org>

Hello Michal,

On Thu, Feb 04, 2021 at 03:19:17PM +0100, Michal Hocko wrote:
> On Tue 02-02-21 13:47:45, Johannes Weiner wrote:
> > Replace the memory controller's custom hierarchical stats code with
> > the generic rstat infrastructure provided by the cgroup core.
> >
> > The current implementation does batched upward propagation from the
> > write side (i.e. as stats change). The per-cpu batches introduce an
> > error, which is multiplied by the number of subgroups in a tree. In
> > systems with many CPUs and sizable cgroup trees, the error can be
> > large enough to confuse users (e.g. 32 batch pages * 32 CPUs * 32
> > subgroups results in an error of up to 128M per stat item). This can
> > entirely swallow allocation bursts inside a workload that the user is
> > expecting to see reflected in the statistics.
> >
> > In the past, we've done read-side aggregation, where a memory.stat
> > read would have to walk the entire subtree and add up per-cpu
> > counts. This became problematic with lazily-freed cgroups: we could
> > have large subtrees where most cgroups were entirely idle. Hence the
> > switch to change-driven upward propagation. Unfortunately, it needed
> > to trade accuracy for speed due to the write side being so hot.
> >
> > Rstat combines the best of both worlds: from the write side, it
> > cheaply maintains a queue of cgroups that have pending changes, so
> > that the read side can do selective tree aggregation. This way the
> > reported stats will always be precise and recent as can be, while the
> > aggregation can skip over potentially large numbers of idle cgroups.
> >
> > This adds a second vmstats to struct mem_cgroup (MEMCG_NR_STAT +
> > NR_VM_EVENT_ITEMS) to track pending subtree deltas during upward
> > aggregation. It removes 3 words from the per-cpu data. It eliminates
> > memcg_exact_page_state(), since memcg_page_state() is now exact.
>
> I am still digesting details and need to look deeper into how rstat
> works but removing our own stats is definitely a good plan. Especially
> when there are existing limitations and problems that would need fixing.
>
> Just to check that my high level understanding is correct. The
> transition is effectively removing the need to manually sync counters up
> the hierarchy and partially outsources that decision to the rstat core.
> The controller is responsible just for telling the core how that
> syncing is done (e.g. which specific counters etc).

Yes, exactly.

rstat implements a tree of cgroups that have local changes pending, and
a flush walk on that tree. But it's all driven by the controller. memcg
needs to tell rstat 1) when stats in a local cgroup change, e.g. when we
do mod_memcg_state() (cgroup_rstat_updated), 2) when to flush, e.g.
before a memory.stat read (cgroup_rstat_flush), and 3) how to flush one
cgroup's per-cpu state and propagate it upward to the parent during
rstat's flush walk (.css_rstat_flush). A rough sketch of these three
hooks follows at the end of this mail.

> Explicit flushes are needed when you want an exact value (e.g. when
> values are presented to userspace). I do not see any flushes to
> be done by the core proactively except for cleanup on a release.
>
> Is the above correct understanding?

Yes, that's correct.
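
To make the division of labor a bit more concrete, here is the sketch
mentioned above of what the three hooks look like from the controller's
side. This is illustrative only, not the literal code from this series:
everything prefixed my_ (the counter struct, the extra mem_cgroup
fields, the helpers, the item count) is made up for the example, and
locking/annotations are simplified. cgroup_rstat_updated(),
cgroup_rstat_flush() and the .css_rstat_flush callback are the real
cgroup core entry points discussed above; the rstat core serializes the
flush walk, which is why the non-percpu counters below are updated
without further locking.

#include <linux/cgroup.h>
#include <linux/memcontrol.h>
#include <linux/percpu.h>
#include <linux/smp.h>

#define MY_NR_ITEMS 4	/* illustrative; memcg tracks MEMCG_NR_STAT + NR_VM_EVENT_ITEMS */

/* made-up per-cpu batch, one per cgroup */
struct my_counters {
	long state[MY_NR_ITEMS];	/* counts as seen by this CPU */
	long state_prev[MY_NR_ITEMS];	/* snapshot taken at the last flush */
};

/*
 * Assumed (made-up) additions to struct mem_cgroup:
 *	struct my_counters __percpu *my_percpu;
 *	long my_state[MY_NR_ITEMS];	aggregated local + subtree counts
 *	long my_pending[MY_NR_ITEMS];	subtree deltas queued by child flushes
 */

/* 1) write side: record the change on this CPU and mark the cgroup as
 *    having pending changes so the next flush walk will visit it */
static void my_mod_state(struct mem_cgroup *memcg, int idx, long val)
{
	this_cpu_add(memcg->my_percpu->state[idx], val);
	cgroup_rstat_updated(memcg->css.cgroup, smp_processor_id());
}

/* 2) read side: flush pending changes first, then report exact values */
static long my_read_state(struct mem_cgroup *memcg, int idx)
{
	cgroup_rstat_flush(memcg->css.cgroup);
	return READ_ONCE(memcg->my_state[idx]);
}

/* 3) flush callback: the rstat walk calls this for each (cgroup, cpu)
 *    pair with pending changes; fold this CPU's delta plus whatever the
 *    children have queued into the local counters, and queue the sum in
 *    the parent for the next level up */
static void my_css_rstat_flush(struct cgroup_subsys_state *css, int cpu)
{
	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
	struct mem_cgroup *parent = parent_mem_cgroup(memcg);
	struct my_counters *pc = per_cpu_ptr(memcg->my_percpu, cpu);
	int i;

	for (i = 0; i < MY_NR_ITEMS; i++) {
		long delta, v;

		/* subtree deltas our children queued up (cpu-independent) */
		delta = memcg->my_pending[i];
		if (delta)
			memcg->my_pending[i] = 0;

		/* changes on this CPU since the last flush */
		v = READ_ONCE(pc->state[i]);
		if (v != pc->state_prev[i]) {
			delta += v - pc->state_prev[i];
			pc->state_prev[i] = v;
		}

		if (!delta)
			continue;

		/* aggregate on this level, propagate to the parent */
		memcg->my_state[i] += delta;
		if (parent)
			parent->my_pending[i] += delta;
	}
}

The net effect is the split described above: the hot write path only
does a per-cpu add plus marking the cgroup dirty, and all the tree
walking happens on the comparatively rare flush.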