From: Yosry Ahmed <yosryahmed@google.com>
Date: Thu, 6 Apr 2023 07:07:42 -0700
Subject: Re: [PATCH v5 1/2] mm: vmscan: ignore non-LRU-based reclaim in memcg reclaim
To: David Hildenbrand
Cc: Andrew Morton, Alexander Viro, "Darrick J. Wong", Christoph Lameter,
 David Rientjes, Joonsoo Kim, Vlastimil Babka, Roman Gushchin,
 Hyeonggon Yoo <42.hyeyoo@gmail.com>, "Matthew Wilcox (Oracle)",
 Miaohe Lin, Johannes Weiner, Peter Xu, NeilBrown, Shakeel Butt,
 Michal Hocko, Yu Zhao, Dave Chinner, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-xfs@vger.kernel.org,
 linux-mm@kvack.org, stable@vger.kernel.org
Thanks for taking a look, David!

On Thu, Apr 6, 2023 at 3:31 AM David Hildenbrand wrote:
>
> On 05.04.23 20:54, Yosry Ahmed wrote:
> > We keep track of different types of reclaimed pages through
> > reclaim_state->reclaimed_slab, and we add them to the reported number
> > of reclaimed pages. For non-memcg reclaim, this makes sense. For memcg
> > reclaim, we have no clue if those pages are charged to the memcg under
> > reclaim.
> >
> > Slab pages are shared by different memcgs, so a freed slab page may have
> > only been partially charged to the memcg under reclaim. The same goes for
> > clean file pages from pruned inodes (on highmem systems) or xfs buffer
> > pages; there is currently no simple way to link them to the memcg under
> > reclaim.
> >
> > Stop reporting those freed pages as reclaimed pages during memcg reclaim.
> > This should make the return value of writing to memory.reclaim more
> > accurate, and may help reduce unnecessary reclaim retries during memcg
> > charging. Writing to memory.reclaim on the root memcg is considered as
> > cgroup_reclaim(), but for this case we want to include any freed pages,
> > so use the global_reclaim() check instead of !cgroup_reclaim().
> >
> > Generally, this should make the return value of
> > try_to_free_mem_cgroup_pages() more accurate. In some limited cases
> > (e.g. we freed a slab page that was mostly charged to the memcg under
> > reclaim), the return value of try_to_free_mem_cgroup_pages() can be
> > underestimated, but this should be fine. The freed pages will be
> > uncharged anyway, and we
>
> Can't we end up in extreme situations where
> try_to_free_mem_cgroup_pages() returns close to 0, although a huge
> amount of memory for that cgroup was freed up?
>
> Can you expand on why "this should be fine"?
>
> I suspect that overestimation might be worse than underestimation. (see
> my comment proposal below)

In such extreme scenarios, even though try_to_free_mem_cgroup_pages()
would return an underestimated value, the freed memory for the cgroup
will still be uncharged. try_charge() (like most callers of
try_to_free_mem_cgroup_pages()) calls it in a retry loop, so even if
try_to_free_mem_cgroup_pages() returns an underestimated value,
charging will succeed the next time around. The only case where this
might be a problem is if it happens in the final retry, but I guess we
need to be *really* unlucky for this extreme scenario to happen. One
could argue that if we reach such a situation the cgroup will probably
OOM soon anyway.
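To spell out the pattern I am referring to, here is a simplified sketch
of a charge path with a reclaim retry loop (not the actual try_charge()
code; page_counter_over_limit() and MAX_RECLAIM_RETRIES are made-up
stand-ins for illustration):

static int charge_with_retries(struct mem_cgroup *memcg,
			       unsigned long nr_pages, gfp_t gfp_mask)
{
	int retries = MAX_RECLAIM_RETRIES;

	while (retries--) {
		/* Succeeds once enough pages have been uncharged. */
		if (!page_counter_over_limit(memcg, nr_pages))
			return 0;

		/*
		 * Reclaim uncharges the freed pages as a side effect, even
		 * when its return value underestimates them, so the counter
		 * still drops and a later iteration can succeed. The return
		 * value only influences how long we keep retrying.
		 */
		try_to_free_mem_cgroup_pages(memcg, nr_pages, gfp_mask,
					     MEMCG_RECLAIM_MAY_SWAP);
	}

	/* All retries failed; the memcg is likely heading for OOM. */
	return -ENOMEM;
}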
> > can charge the memcg the next time around as we usually do memcg
> > reclaim in a retry loop.
> >
> > The next patch performs some cleanups around reclaim_state and adds an
> > elaborate comment explaining this in the code. This patch is kept
> > minimal for easy backporting.
> >
> > Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
> > Cc: stable@vger.kernel.org
>
> Fixes: ?
>
> Otherwise it's hard to judge how far to backport this.

It's hard to judge. The issue has been there for a while, but
memory.reclaim just made it more user visible. I think we can attribute
it to per-object slab accounting, because before that any freed slab
pages in cgroup reclaim would be entirely charged to that cgroup.
Although, in all fairness, other types of freed pages that use
reclaim_state->reclaimed_slab and cannot be attributed to the cgroup
under reclaim have been there before that.

I guess slab is the most significant among them tho, so for the
purposes of backporting:

Fixes: f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")

> > ---
> >
> > global_reclaim(sc) does not exist in kernels before 6.3. It can be
> > replaced with:
> > !cgroup_reclaim(sc) || mem_cgroup_is_root(sc->target_mem_cgroup)
> >
> > ---
> >  mm/vmscan.c | 8 +++++---
> >  1 file changed, 5 insertions(+), 3 deletions(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 9c1c5e8b24b8f..c82bd89f90364 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -5346,8 +5346,10 @@ static int shrink_one(struct lruvec *lruvec, struct scan_control *sc)
> >  		vmpressure(sc->gfp_mask, memcg, false, sc->nr_scanned - scanned,
> >  			   sc->nr_reclaimed - reclaimed);
> >
> > -	sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
> > -	current->reclaim_state->reclaimed_slab = 0;
>
> Worth adding a comment like
>
> /*
>  * Slab pages cannot universally be linked to a single memcg. So only
>  * account them as reclaimed during global reclaim. Note that we might
>  * underestimate the amount of memory reclaimed (but won't overestimate
>  * it).
>  */
>
> but ...
>
> > +	if (global_reclaim(sc)) {
> > +		sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
> > +		current->reclaim_state->reclaimed_slab = 0;
> > +	}
> >
> >  	return success ? MEMCG_LRU_YOUNG : 0;
> >  }
> >
> > @@ -6472,7 +6474,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
> >
> >  	shrink_node_memcgs(pgdat, sc);
> >
>
> ... do we want to factor the add+clear into a simple helper, such that
> we can have the above comment there?
>
> static void cond_account_reclaimed_slab(reclaim_state, sc)
> {
> 	/*
> 	 * Slab pages cannot universally be linked to a single memcg. So
> 	 * only account them as reclaimed during global reclaim. Note
> 	 * that we might underestimate the amount of memory reclaimed
> 	 * (but won't overestimate it).
> 	 */
> 	if (global_reclaim(sc)) {
> 		sc->nr_reclaimed += reclaim_state->reclaimed_slab;
> 		reclaim_state->reclaimed_slab = 0;
> 	}
> }
>
> Yes, effectively a couple LOC more, but still straightforward for a
> stable backport.

The next patch in the series performs some refactoring and cleanups,
among which we add a helper called flush_reclaim_state() that does
exactly that and contains a sizable comment. I left this out of this
patch in v5 to make the effective change as small as possible for
backporting.

It does look like it can be confusing without the comment, though. How
about I pull this part into this patch as well for v6?

> >
> > -	if (reclaim_state) {
> > +	if (reclaim_state && global_reclaim(sc)) {
> >  		sc->nr_reclaimed += reclaim_state->reclaimed_slab;
> >  		reclaim_state->reclaimed_slab = 0;
> >  	}

> --
> Thanks,
>
> David / dhildenb
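One more note for stable: on kernels before 6.3, where global_reclaim()
does not exist, the shrink_node() hunk above would presumably end up
open-coding the check as described in the note after the changelog
(untested sketch):

	if (reclaim_state &&
	    (!cgroup_reclaim(sc) || mem_cgroup_is_root(sc->target_mem_cgroup))) {
		sc->nr_reclaimed += reclaim_state->reclaimed_slab;
		reclaim_state->reclaimed_slab = 0;
	}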