From mboxrd@z Thu Jan  1 00:00:00 1970
From: Yosry Ahmed <yosryahmed@google.com>
Date: Mon, 1 May 2023 03:12:36 -0700
Subject: Re: [PATCH v6 1/3] mm: vmscan: ignore non-LRU-based reclaim in memcg reclaim
To: Andrew Morton, Alexander Viro, "Darrick J. Wong", Christoph Lameter,
	David Rientjes, Joonsoo Kim, Vlastimil Babka, Roman Gushchin,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>, "Matthew Wilcox (Oracle)",
	Miaohe Lin, David Hildenbrand, Johannes Weiner, Peter Xu, NeilBrown,
	Shakeel Butt, Michal Hocko, Yu Zhao, Dave Chinner, Tim Chen
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-xfs@vger.kernel.org, linux-mm@kvack.org, Sergey Senozhatsky
In-Reply-To: <20230413104034.1086717-2-yosryahmed@google.com>
References: <20230413104034.1086717-1-yosryahmed@google.com>
	<20230413104034.1086717-2-yosryahmed@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
On Thu, Apr 13, 2023 at 3:40 AM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> We keep track of different types of reclaimed pages through
> reclaim_state->reclaimed_slab, and we add them to the reported number
> of reclaimed pages. For non-memcg reclaim, this makes sense. For memcg
> reclaim, we have no clue whether those pages are charged to the memcg
> under reclaim.
>
> Slab pages are shared by different memcgs, so a freed slab page may have
> only been partially charged to the memcg under reclaim. The same goes for
> clean file pages from pruned inodes (on highmem systems) or xfs buffer
> pages; there is currently no simple way to link them to the memcg under
> reclaim.
>
> Stop reporting those freed pages as reclaimed pages during memcg reclaim.
> This should make the return value of writing to memory.reclaim more
> accurate, and may help reduce unnecessary reclaim retries during memcg
> charging. Writing to memory.reclaim on the root memcg is considered
> cgroup_reclaim(), but for this case we want to include any freed pages,
> so use the global_reclaim() check instead of !cgroup_reclaim().
>
> Generally, this should make the return value of
> try_to_free_mem_cgroup_pages() more accurate. In some limited cases
> (e.g. freeing a slab page that was mostly charged to the memcg under
> reclaim), the return value of try_to_free_mem_cgroup_pages() can be
> underestimated, but this should be fine. The freed pages will be
> uncharged anyway, and we can charge the memcg the next time around, as
> we usually do memcg reclaim in a retry loop.
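As an aside for anyone skimming the thread: the accounting rule described above (credit non-LRU frees to nr_reclaimed only for global reclaim) can be sketched as a tiny userspace C model. This is a toy illustration, not kernel code; `scan_control`, `reclaim_state`, and `global_reclaim()` below are simplified stand-ins for the real kernel structures.

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel structures (illustration only). */
struct reclaim_state {
	unsigned long reclaimed_slab;	/* pages freed outside LRU-based reclaim */
};

struct scan_control {
	unsigned long nr_reclaimed;
	bool is_global;			/* stand-in for global_reclaim(sc) */
	struct reclaim_state *rs;	/* stand-in for current->reclaim_state */
};

static bool global_reclaim(struct scan_control *sc)
{
	return sc->is_global;
}

/*
 * Fold pages freed outside of LRU-based reclaim into the reclaimed
 * total, but only for global reclaim: for memcg reclaim we cannot
 * tell whether those pages were charged to the target memcg.
 */
static void flush_reclaim_state(struct scan_control *sc)
{
	if (sc->rs && global_reclaim(sc)) {
		sc->nr_reclaimed += sc->rs->reclaimed_slab;
		sc->rs->reclaimed_slab = 0;
	}
}
```

With 8 slab pages freed during a scan that reclaimed 2 LRU pages, this model reports 10 for global reclaim but only the attributable 2 for memcg reclaim, leaving the slab count untouched.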
>
> Fixes: f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
>
> Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
> ---
>  mm/vmscan.c | 49 ++++++++++++++++++++++++++++++++++++++++++-------
>  1 file changed, 42 insertions(+), 7 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 9c1c5e8b24b8..be657832be48 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -511,6 +511,46 @@ static bool writeback_throttling_sane(struct scan_control *sc)
>  }
>  #endif
>
> +/*
> + * flush_reclaim_state(): add pages reclaimed outside of LRU-based reclaim to
> + * scan_control->nr_reclaimed.
> + */
> +static void flush_reclaim_state(struct scan_control *sc)
> +{
> +	/*
> +	 * Currently, reclaim_state->reclaimed includes three types of pages
> +	 * freed outside of vmscan:
> +	 * (1) Slab pages.
> +	 * (2) Clean file pages from pruned inodes (on highmem systems).
> +	 * (3) XFS freed buffer pages.
> +	 *
> +	 * For all of these cases, we cannot universally link the pages to a
> +	 * single memcg. For example, a memcg-aware shrinker can free one object
> +	 * charged to the target memcg, causing an entire page to be freed.
> +	 * If we count the entire page as reclaimed from the memcg, we end up
> +	 * overestimating the reclaimed amount (potentially under-reclaiming).
> +	 *
> +	 * Only count such pages for global reclaim to prevent under-reclaiming
> +	 * from the target memcg; preventing unnecessary retries during memcg
> +	 * charging and false positives from proactive reclaim.
> +	 *
> +	 * For uncommon cases where the freed pages were actually mostly
> +	 * charged to the target memcg, we end up underestimating the reclaimed
> +	 * amount. This should be fine. The freed pages will be uncharged
> +	 * anyway, even if they are not counted here properly, and we will be
> +	 * able to make forward progress in charging (which is usually in a
> +	 * retry loop).
> +	 *
> +	 * We can go one step further, and report the uncharged objcg pages in
> +	 * memcg reclaim, to make reporting more accurate and reduce
> +	 * underestimation, but it's probably not worth the complexity for now.
> +	 */
> +	if (current->reclaim_state && global_reclaim(sc)) {
> +		sc->nr_reclaimed += current->reclaim_state->reclaimed;
> +		current->reclaim_state->reclaimed = 0;

Ugh.. this breaks the build. This should have been
current->reclaim_state->reclaimed_slab. It doesn't get renamed from
"reclaimed_slab" to "reclaimed" until the next patch. When I moved
flush_reclaim_state() from patch 2 to patch 1, I forgot to adjust it.
My bad.

The break is fixed by the very next patch, and the patches have already
landed in Linus's tree, so there isn't much that can be done at this
point. Sorry about that.

Just wondering, why wouldn't this breakage be caught by any of the
build bots?

> +	}
> +}
> +
>  static long xchg_nr_deferred(struct shrinker *shrinker,
>  			     struct shrink_control *sc)
>  {
> @@ -5346,8 +5386,7 @@ static int shrink_one(struct lruvec *lruvec, struct scan_control *sc)
>  		vmpressure(sc->gfp_mask, memcg, false, sc->nr_scanned - scanned,
>  			   sc->nr_reclaimed - reclaimed);
>
> -	sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
> -	current->reclaim_state->reclaimed_slab = 0;
> +	flush_reclaim_state(sc);
>
>  	return success ? MEMCG_LRU_YOUNG : 0;
>  }
> @@ -6450,7 +6489,6 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
>
>  static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
>  {
> -	struct reclaim_state *reclaim_state = current->reclaim_state;
>  	unsigned long nr_reclaimed, nr_scanned;
>  	struct lruvec *target_lruvec;
>  	bool reclaimable = false;
> @@ -6472,10 +6510,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
>
>  	shrink_node_memcgs(pgdat, sc);
>
> -	if (reclaim_state) {
> -		sc->nr_reclaimed += reclaim_state->reclaimed_slab;
> -		reclaim_state->reclaimed_slab = 0;
> -	}
> +	flush_reclaim_state(sc);
>
>  	/* Record the subtree's reclaim efficiency */
>  	if (!sc->proactive)
> --
> 2.40.0.577.gac1e443424-goog
>
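For completeness, here is a reconstruction (based on the note above, not on any posted follow-up) of how the new hunk in this patch would have needed to read to keep the series bisectable, using the old field name that still exists at this point in the series; the rename to `reclaimed` only lands in patch 2:

```diff
+	if (current->reclaim_state && global_reclaim(sc)) {
+		sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
+		current->reclaim_state->reclaimed_slab = 0;
+	}
```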