From: Yosry Ahmed <yosryahmed@google.com>
Date: Thu, 6 Oct 2022 16:07:02 -0700
Subject: Re: [PATCH v2] mm/vmscan: check references from all memcgs for swapbacked memory
To: Yu Zhao
Cc: Johannes Weiner, Andrew Morton, Michal Hocko, Roman Gushchin, Shakeel Butt, Muchun Song, Greg Thelen, David Rientjes, Cgroups, Linux-MM
References: <20221005173713.1308832-1-yosryahmed@google.com>
On Thu, Oct 6, 2022 at 2:57 PM Yu Zhao wrote:
>
> On Thu, Oct 6, 2022 at 12:30 PM Yosry Ahmed wrote:
> >
> > On Thu, Oct 6, 2022 at 8:32 AM Johannes Weiner wrote:
> > >
> > > On Thu, Oct 06, 2022 at 12:30:45AM -0700, Yosry Ahmed wrote:
> > > > On Wed, Oct 5, 2022 at 9:19 PM Johannes Weiner wrote:
> > > > >
> > > > > On Wed, Oct 05, 2022 at 03:13:38PM -0600, Yu Zhao wrote:
> > > > > > On Wed, Oct 5, 2022 at 3:02 PM Yosry Ahmed wrote:
> > > > > > >
> > > > > > > On Wed, Oct 5, 2022 at 1:48 PM Yu Zhao wrote:
> > > > > > > >
> > > > > > > > On Wed, Oct 5, 2022 at 11:37 AM Yosry Ahmed wrote:
> > > > > > > > >
> > > > > > > > > During page/folio reclaim, we check if a folio is referenced using
> > > > > > > > > folio_referenced() to avoid reclaiming folios that have been recently
> > > > > > > > > accessed (hot memory). The rationale is that this memory is likely to be
> > > > > > > > > accessed soon, and hence reclaiming it will cause a refault.
> > > > > > > > >
> > > > > > > > > For memcg reclaim, we currently only check accesses to the folio from
> > > > > > > > > processes in the subtree of the target memcg. This behavior was
> > > > > > > > > originally introduced by commit bed7161a519a ("Memory controller: make
> > > > > > > > > page_referenced() cgroup aware") a long time ago. Back then, refaulted
> > > > > > > > > pages would get charged to the memcg of the process that was faulting
> > > > > > > > > them in. It made sense to only consider accesses coming from processes
> > > > > > > > > in the subtree of target_mem_cgroup. If a page was charged to memcg A
> > > > > > > > > but only being accessed by a sibling memcg B, we would reclaim it if
> > > > > > > > > memcg A is the reclaim target. memcg B can then fault it back in and
> > > > > > > > > get charged for it appropriately.
> > > > > > > > >
> > > > > > > > > Today, this behavior still makes sense for file pages. However, unlike
> > > > > > > > > file pages, when swapbacked pages are refaulted they are charged to the
> > > > > > > > > memcg that was originally charged for them during swapping out. This
> > > > > > > > > means that if a swapbacked page is charged to memcg A but only used by
> > > > > > > > > memcg B, and we reclaim it from memcg A, it would simply be faulted back
> > > > > > > > > in and charged again to memcg A once memcg B accesses it. In that sense,
> > > > > > > > > accesses from all memcgs matter equally when considering if a swapbacked
> > > > > > > > > page/folio is a viable reclaim target.
> > > > > > > > >
> > > > > > > > > Modify folio_referenced() to always consider accesses from all memcgs if
> > > > > > > > > the folio is swapbacked.
> > > > > > > >
> > > > > > > > It seems to me this change can potentially increase the number of
> > > > > > > > zombie memcgs. Has any risk assessment been done on this?
> > > > > > >
> > > > > > > Do you mind elaborating on the case(s) where this could happen? Is this
> > > > > > > the cgroup v1 case in mem_cgroup_swapout() where we are reclaiming
> > > > > > > from a zombie memcg and swapping out would let us move the charge to
> > > > > > > the parent?
> > > > > >
> > > > > > The scenario is quite straightforward: for a page charged to memcg A
> > > > > > and also actively used by memcg B, if we don't ignore the access from
> > > > > > memcg B, we won't be able to reclaim it after memcg A is deleted.
> > > > >
> > > > > This patch changes the behavior of limit-induced reclaim. There is no
> > > > > limit reclaim on A after it's been deleted. And parental/global
> > > > > reclaim has always recognized outside references.
> > > >
> > > > Do you mind elaborating on the parental reclaim part?
> > > >
> > > > I am looking at the code, and it looks like memcg reclaim of a parent
> > > > (limit-induced or proactive) will only consider references coming from
> > > > its subtree, even when reclaiming from its dead children. It looks
> > > > like as long as sc->target_mem_cgroup is set, we ignore outside
> > > > references (relative to sc->target_mem_cgroup).
> > >
> > > Yes, I was referring to outside of A.
> > >
> > > As of today, any siblings of A can already pin its memory after it's
> > > dead. I suppose your patch would add cousins to that list. It doesn't
> > > seem like a categorical difference to me.
> > >
> > > > If that is true, maybe we want to keep ignoring outside references for
> > > > swap-backed pages if the folio is charged to a dead memcg? My
> > > > understanding is that in this case we will uncharge the page from the
> > > > dead memcg and charge the swapped entry to the parent, reducing the
> > > > number of refs on the dead memcg. Without this check, this patch might
> > > > prevent the charge from being moved to the parent in this case. WDYT?
> > >
> > > I don't think it's worth it. Keeping the semantics simple and the
> > > behavior predictable is IMO more valuable.
> > >
> > > It also wouldn't fix the scrape-before-rmdir issue Yu points out,
> > > which I think is the more practical concern. In light of that, it
> > > might be best to table the patch for now. (Until we have
> > > reparent-on-delete for anon and file pages...)
> >
> > If we add a mem_cgroup_online() check, we partially solve the problem.
> > Maybe scrape-before-rmdir will not reclaim those pages at once, but
> > the next time we try to reclaim from the dead memcg (global, limit,
> > proactive, ...) we will reclaim the pages. So we will only be delaying
> > the freeing of those zombie memcgs.
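
For concreteness, the combined condition could look something like the
sketch below, next to the existing invalid_vma filtering in
folio_referenced() in mm/rmap.c. This is only an illustration of the
idea, not the literal patch: references from other cgroups are skipped
unless the folio is swapbacked *and* charged to an online memcg, so
zombie memcgs keep the old behavior and can still be reclaimed.

    /*
     * If we are reclaiming on behalf of a cgroup, skip counting
     * references from other cgroups, unless the folio is swapbacked
     * and charged to an online memcg: a swapbacked folio refaults
     * into its original memcg, so outside references matter, but a
     * dead memcg should still be allowed to shed its charges.
     */
    if (memcg && (!folio_test_swapbacked(folio) ||
                  !mem_cgroup_online(folio_memcg(folio))))
            rwc.invalid_vma = invalid_folio_referenced_vma;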
>
> As an observer, this seems to be the death by a thousand cuts of the
> existing mechanism that Google has been using to virtually eliminate
> zombie memcgs for the last decade.
>
> I understand the desire to fix a specific problem with this patch. But
> it's methodologically wrong to focus on specific problems without
> considering the larger picture and how it's evolving.
>
> Our internal memory.reclaim, which is being superseded, is a superset
> of the mainline version. It has two flags relevant to this discussion:
> 1. hierarchical walk of a parent
> 2. target dead memcgs only
> With these, our job scheduler (Borg) doesn't need to scrape before
> rmdir at all. It does something called "applying root pressure",
> which, as one might imagine, is to write to the root memory.reclaim
> with the above flags. We have metrics on the efficiency of this
> mechanism, and they are closely monitored.
>
> Why is this important? Because Murphy's law is generally true for a
> fleet once its scale and diversity are large enough. *We used to run
> out of memcg IDs.* And we are still carrying a patch that enlarges
> swap_cgroup->id from unsigned short to unsigned int.
>
> Compared with the recharging proposal we have been discussing, there
> are two cases that the above solution can't help with:
> 1. kernel long pins
> 2. foreign mlocks
> But it's still *a lot* more reliable than the scrape-before-rmdir
> approach (or scrape-after-rmdir, if we can hold the FD open before
> rmdir), because it offers unlimited retries, and no dead memcgs, e.g.,
> those created and deleted by jobs (not the job scheduler), can escape.
>
> Unless you can provide data, my past experience tells me that this
> patch will make scrape-before-rmdir unacceptable (in terms of
> effectiveness) for our fleet. Of course you can add additional code,
> i.e., those two flags or the offline check, which I don't object to.
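
(For reference, upstream proactive reclaim, merged in v5.19, is driven
by writing a size to a cgroup's memory.reclaim file. The
hierarchical-walk and dead-memcgs-only flags described above are
internal extensions and not part of the mainline interface. A minimal,
hedged illustration, with a hypothetical cgroup path:)

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
            /* Example path; any cgroup v2 directory has this file. */
            const char *path = "/sys/fs/cgroup/example/memory.reclaim";
            const char *amount = "1G";  /* try to reclaim up to 1G */
            int fd = open(path, O_WRONLY);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            /* The write triggers reclaim across the cgroup's subtree;
             * it fails with EAGAIN if the full amount can't be met. */
            if (write(fd, amount, strlen(amount)) < 0)
                    perror("write");
            close(fd);
            return 0;
    }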
I agree that the zombie memcgs problem is a serious problem that needs
to be dealt with, and recharging memory when a memcg is dying seems
like a promising direction. However, this patch's goal is to improve
reclaim of shared swapbacked memory in general, regardless of the
zombie memcgs problem. I understand that the current version affects
the zombie memcgs problem, but I believe this is an oversight that
needs to be fixed, not something that should make us leave the reclaim
problem unsolved.

I think this patch + an offline check should be sufficient to fix the
reclaim problem while not regressing the zombie memcgs problem, for
multiple reasons; see below. If we implement recharging memory of dying
memcgs in the future, we can always come back and remove the offline
check.

> Frankly, let me ask the real question: are you really sure this is the
> best for us and the rest of the community?

Yes. This patch should improve reclaim of shared memory, as I
elaborated in my previous email, and with an offline check I believe we
wouldn't be regressing the zombie memcgs problem, whether for Google or
for the community, for the following reasons.

For Google:
a) With an offline check, our root memcg pressure method should remain
unaffected, as this patch + offline check should be a no-op for dead
memcgs.
b) The current version of our internal memory.reclaim actually
considers accesses from outside memcgs, for exactly the reason I am
sending this patch. In retrospect, maybe this should *not* be the case
for the root zombie cleanup scenario; maybe this was an oversight, or
it wasn't the case originally. Anyway, as I have mentioned before, I
think that for the case of shared memory, our userspace has become
smart enough to stop referencing the shared memory charged to dead
memcgs.

For the community:
- In the scrape-before-rmdir scenario where the memcg has shared memory
charged to it (which shouldn't be very likely), this patch would only
delay freeing the dead memcgs until the next reclaim cycle. Given that
memory.reclaim was only recently upstreamed, and memory.force_empty is
not in cgroup v2, I am not really sure how many people are depending on
this scrape-before-rmdir behavior.

For both Google and the community, we get a better reclaim policy for
shared memory.

Again, this conversation has gotten quite derailed. There is a clear
problem with reclaiming shared memory that we are trying to fix. We
should not regress the zombie cleanup behavior (thanks, Yu, for
catching that), but hopefully this can be accomplished with a simple
offline check.

Does this make sense to you, Yu and Johannes? If yes, I can send a v3
with an added offline check.

>
> (I think the recharging proposal is.)