From: Suren Baghdasaryan
Date: Tue, 23 Aug 2022 09:21:16 -0700
Subject: Re: [RFC PATCH] memcg: use root_mem_cgroup when css is inherited
To: Michal Hocko
Cc: Zhaoyang Huang, Tejun Heo, Shakeel Butt, "zhaoyang.huang", Johannes Weiner,
    Linux MM, LKML, Cgroups, Ke Wang, Zefan Li, Roman Gushchin, Muchun Song
References: <1660908562-17409-1-git-send-email-zhaoyang.huang@unisoc.com>
On Tue, Aug 23, 2022 at 4:51 AM Michal Hocko wrote:
>
> On Tue 23-08-22 17:20:59, Zhaoyang Huang wrote:
> > On Tue, Aug 23, 2022 at 4:33 PM Michal Hocko wrote:
> > >
> > > On Tue 23-08-22 14:03:04, Zhaoyang Huang wrote:
> > > > On Tue, Aug 23, 2022 at 1:21 PM Michal Hocko wrote:
> > > > >
> > > > > On Tue 23-08-22 10:31:57, Zhaoyang Huang wrote:
> > > [...]
> > > > > > I would like to quote the comments from the Google side for more
> > > > > > details, which can also be observed from different vendors.
> > > > > > "Also be advised that when you enable memcg v2 you will be using
> > > > > > per-app memcg configuration which implies noticeable overhead because
> > > > > > every app will have its own group. For example pagefault path will
> > > > > > regress by about 15%. And obviously there will be some memory overhead
> > > > > > as well. That's the reason we don't enable them in Android by
> > > > > > default."
> > > > >
> > > > > This should be reported and investigated. Because per-application memcg
> > > > > vs. memcg in general shouldn't make much of a difference from the
> > > > > performance side. I can see a potential performance impact for no-memcg
> > > > > vs. memcg case but even then 15% is quite a lot.
> > > > Less efficiency in memory reclaim caused by the multiple LRUs should be
> > > > one of the reasons, which has been confirmed by comparing per-app memcg
> > > > on/off. Besides, the workingset mechanism could theoretically also be
> > > > broken, as each LRU is too short to compose a workingset.
> > >
> > > Do you have any data to back these claims? Is this something that could
> > > be handled on the configuration level? E.g. by applying low limit
> > > protection to keep the workingset in memory?
> > I don't think so. IMO, workingset detection works when there are pages
> > evicted from the LRU that later refault, which provides a refault
> > distance for those pages. Applying memcg protection keeps all LRUs from
> > evicting, which makes the mechanism fail.
>
> It is really hard to help you out without any actual data. The idea was
> though to use the low limit protection to adaptively configure
> respective memcgs to reduce refaults. You already have data about
> refaults ready, so increasing the limit for often-refaulting memcgs
> would reduce the thrashing.

Sorry for joining late. A couple years ago I tested root-memcg vs
per-app memcg configurations on an Android phone.
Here is a snapshot from my findings:

Problem
=======
We see tangible increase in major faults and workingset refaults when
transitioning from root-only memory cgroup to per-application cgroups
on Android.

Test results
============
Results while running memory-demanding workload:

                          root memcg    per-app memcg    delta
workingset_refault        1771228       3874281          +118.73%
workingset_nodereclaim    4543          13928            +206.58%
pgpgin                    13319208      20618944         +54.81%
pgpgout                   1739552       3080664          +77.1%
pgpgoutclean              2616571       4805755          +83.67%
pswpin                    359211        3918716          +990.92%
pswpout                   1082238       5697463          +426.45%
pgfree                    28978393      32531010         +12.26%
pgactivate                2586562       8731113          +237.56%
pgdeactivate              3811074       11670051         +206.21%
pgfault                   38692510      46096963         +19.14%
pgmajfault                441288        4100020          +829.1%
pgrefill                  4590451       12768165         +178.15%

Results while running application cycle test (20 apps, 20 cycles):

                          root memcg    per-app memcg    delta
workingset_refault        10634691      11429223         +7.47%
workingset_nodereclaim    37477         59033            +57.52%
pgpgin                    70662840      69569516         -1.55%
pgpgout                   2605968       2695596          +3.44%
pgpgoutclean              13514955      14980610         +10.84%
pswpin                    1489851       3780868          +153.77%
pswpout                   4125547       8050819          +95.15%
pgfree                    99823083      105104637        +5.29%
pgactivate                7685275       11647913         +51.56%
pgdeactivate              14193660      21459784         +51.19%
pgfault                   89173166      100598528        +12.81%
pgmajfault                1856172       4227190          +127.74%
pgrefill                  16643554      23203927         +39.42%

Tests were conducted on an Android phone with 4GB RAM. Similar
regression was reported a couple years ago here:
https://www.spinics.net/lists/linux-mm/msg121665.html

I plan on checking the difference again on newer kernels (likely 5.15)
after LPC this September.

> > [...]
> > > A.cgroup.controllers = memory
> > > A.cgroup.subtree_control = memory
> > >
> > > A/B.cgroup.controllers = memory
> > > A/B.cgroup.subtree_control = memory
> > > A/B/B1.cgroup.controllers = memory
> > >
> > > A/C.cgroup.controllers = memory
> > > A/C.cgroup.subtree_control = ""
> > > A/C/C1.cgroup.controllers = ""
> > Yes for above hierarchy and configuration.
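As an aside, the configuration just quoted illustrates the cgroup v2 charging rule the rest of the thread turns on: a group whose cgroup.controllers lists "memory" gets its own counters and LRU lists, while a group without it is charged to the nearest ancestor that does have the controller. A toy model of that lookup (hypothetical helper functions modeling the quoted A/B/C hierarchy, not a kernel interface):

```shell
# Model of the quoted hierarchy: groups whose cgroup.controllers
# contains "memory" per the configuration above.
has_memory() {
  case $1 in
    A|A/B|A/B/B1|A/C) return 0 ;;  # memory controller enabled
    *) return 1 ;;                 # controller absent (e.g. A/C/C1)
  esac
}

# charged_to <group>: walk up the hierarchy until we hit a group with
# the memory controller enabled; fall back to the root memcg.
charged_to() {
  g=$1
  while [ -n "$g" ]; do
    if has_memory "$g"; then
      echo "$g"
      return
    fi
    [ "$g" = "${g%/*}" ] && break  # no '/' left: top of the hierarchy
    g=${g%/*}
  done
  echo root
}

charged_to A/B/B1  # -> A/B/B1 (its own group: own LRUs, own min/low)
charged_to A/C/C1  # -> A/C   (no controller of its own)
```

So with these settings C1's pages land on A/C's LRUs, which is exactly the "charged to A/C" situation discussed next.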
> > >
> > > Is your concern that C1 is charged to A/C or that you cannot actually make
> > > A/C.cgroup.controllers = "" because you want to maintain memory in A?
> > > Because that would be breaking the internal node constraint rule AFAICS.
> > No. I just want to keep memory on B.
>
> That would require A to be without controllers, which is not possible
> due to the hierarchical constraint.
>
> > > Or maybe you just really want a different hierarchy where
> > > A == root_cgroup and want the memory accounted in B
> > > (root/B.cgroup.controllers = memory) but not in C (root/C.cgroup.controllers = "")?
> > Yes.
> >
> > > That would mean that C memory would be maintained on the global (root
> > > memcg) LRUs which is the only internal node which is allowed to have
> > > resources because it is special.
> > Exactly. I would like to have all groups like C, whose parent does not
> > have subtree_control = memory, charge memory to root. Under this
> > implementation, memory under an enabled group will be protected by
> > min/low, while the other groups' memory shares the same LRU so that the
> > workingset mechanism takes effect.
>
> One way to achieve that would be shaping the hierarchy the following way:
>
>             root
>            /    \
>   no_memcg[1]   memcg[2]
>    ||||||||      |||||
>  app_cgroups   app_cgroups
>
> with
> no_memcg.subtree_control = ""
> memcg.subtree_control = memory
>
> no?
>
> You haven't really described why you need a per-application freezer
> cgroup, but I suspect you want to selectively freeze applications. Is
> there any obstacle to having a dedicated frozen cgroup and migrating
> tasks to be frozen there?

We intend for Android to gradually migrate to v2 cgroups for all
controllers, and given that v2 has to use a unified hierarchy, a
per-application hierarchy provides the highest flexibility. That way we
can control every aspect of every app without affecting others. Of
course that comes with its overhead.

Thanks,
Suren.

> --
> Michal Hocko
> SUSE Labs
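For reference, Michal's proposed split could be set up roughly as follows. This is only a sketch: it assumes cgroup2 is mounted at /sys/fs/cgroup, the commands run as root, and the "app1"/"app2" group names are hypothetical; "no_memcg" and "memcg" are taken from the diagram above.

```shell
CG=/sys/fs/cgroup   # assumed cgroup2 mount point

# Let children of the root use the memory controller.
echo "+memory" > "$CG/cgroup.subtree_control"

mkdir -p "$CG/no_memcg" "$CG/memcg"

# memcg/: each app gets its own memory group (own LRUs, own min/low
# protection).
echo "+memory" > "$CG/memcg/cgroup.subtree_control"
mkdir "$CG/memcg/app1"

# no_memcg/: subtree_control stays empty, so every app group under it
# shares no_memcg's accounting and a single set of LRUs.
mkdir "$CG/no_memcg/app2"

# Non-memory control such as the v2 freezer still works per-app on both
# sides of the split.
echo 1 > "$CG/no_memcg/app2/cgroup.freeze"   # freeze one app
```

The point of the split is that apps needing memcg protection pay the per-group LRU cost under memcg/, while everything else shares one large LRU under no_memcg/, keeping workingset detection effective there.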