From: Zhaoyang Huang
Date: Mon, 4 Apr 2022 19:23:03 +0800
Subject: Re: [RFC PATCH] cgroup: introduce dynamic protection for memcg
To: Michal Hocko
Cc: Suren Baghdasaryan, "zhaoyang.huang", Andrew Morton, Johannes Weiner,
 Vladimir Davydov, "open list:MEMORY MANAGEMENT", LKML,
 cgroups mailinglist, Ke Wang
On Mon, Apr 4, 2022 at 5:32 PM Michal Hocko wrote:
>
> On Mon 04-04-22 17:23:43, Zhaoyang Huang wrote:
> > On Mon, Apr 4, 2022 at 5:07 PM Zhaoyang Huang wrote:
> > >
> > > On Mon, Apr 4, 2022 at 4:51 PM Michal Hocko wrote:
> > > >
> > > > On Mon 04-04-22 10:33:58, Zhaoyang Huang wrote:
> > > > [...]
> > > > > > One thing that I don't understand in this approach is: why memory.low
> > > > > > should depend on the system's memory pressure. It seems you want to
> > > > > > allow a process to allocate more when memory pressure is high. That is
> > > > > > very counter-intuitive to me. Could you please explain the underlying
> > > > > > logic of why this is the right thing to do, without going into
> > > > > > technical details?
> > > > > What I want to achieve is to make memory.low positively correlated with
> > > > > time and negatively correlated with memory pressure, which means the
> > > > > protected memcg should lower its protection (via a lower memory.low) to
> > > > > help relieve the system's memory pressure when it is high.
> > > >
> > > > I have to say this is still very confusing to me. The low limit is a
> > > > protection against external (e.g. global) memory pressure. Decreasing
> > > > the protection based on the external pressure sounds like it goes right
> > > > against the purpose of the knob. I can see reasons to update protection
> > > > based on refaults or other metrics from the userspace but I still do not
> > > > see how this is a good auto-magic tuning done by the kernel.
> > > >
> > > > > The concept behind this is that a memcg faulting back its dropped memory
> > > > > is less important than the system's latency under high memory pressure.
> > > >
> > > > Can you give some specific examples?
> > > For both of the above two comments, please refer to the latest test
> > > result in the Patch v2 I have sent. I prefer to describe my change as a
> > > focus transfer under pressure: the protected memcg is the focus while the
> > > system's memory pressure is low, and reclaim is taken from root instead,
> > > which is not against the current design. However, when global memory
> > > pressure is high, the focus has to shift to the whole system, because it
> > > does not make sense to exempt the protected memcg from everybody else; it
> > > cannot do anything useful anyway while the system is trapped in the kernel
> > > doing reclaim work.
> > Does it make more sense if I describe the change as: the memcg will be
> > protected as long as system pressure is under the threshold (partially
> > coherent with the current design) and will be sacrificed if pressure is
> > over the threshold (the added change)?
>
> No, not really. For one, it is still really unclear why there should be any
> difference in the semantics between global and external memory pressure
> in general. The low limit is always a protection from the external
> pressure. And what should be the actual threshold? Amount of the reclaim
> performed, effectiveness of the reclaim, or what?

Please find the test result below, which shows that the current design keeps
protecting the memcg more strongly when system memory pressure is high.
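Before the numbers, to restate what I mean by "dynamic protection", here is a
small stand-alone illustration. The names, the PSI-style pressure percentage
and the scaling formula are simplifications for this discussion only, not code
from the patch:

#include <stdio.h>

/* Illustration only: the effective protection tracks a decayed watermark of
 * the memcg's own usage and is scaled down once global memory pressure
 * (think of a PSI "some" percentage) goes above a threshold. */
static unsigned long dynamic_low(unsigned long decayed_watermark,
                                 unsigned int pressure_pct,
                                 unsigned int threshold_pct)
{
        /* Low pressure: keep protecting up to the decayed watermark,
         * which matches the behaviour of the current design. */
        if (pressure_pct <= threshold_pct)
                return decayed_watermark;

        /* High pressure: hand the protection back to the system so the
         * memcg also takes its share of global reclaim. */
        return decayed_watermark * threshold_pct / pressure_pct;
}

int main(void)
{
        /* 500MB watermark, threshold at 10% pressure */
        for (unsigned int p = 0; p <= 50; p += 10)
                printf("pressure=%2u%%  effective low=%lu MB\n",
                       p, dynamic_low(500, p, 10));
        return 0;
}

The only point here is the shape of the relationship: below the threshold the
memcg keeps its protection as it does today; above it, the protection shrinks
so the memcg also takes part in global reclaim.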
It could be argued that the protected memcg loses its protection under the
patch, since its usage drops too much. I would say that this is exactly the
goal of the change: is it reasonable to let the whole system stay trapped in
memory pressure while one memcg holds on to its memory?

With regard to the threshold, it is a dynamically decayed watermark value
which reflects both the historic usage (the watermark) and the present usage
(it is updated to the new usage if the memcg expands again); a rough numeric
sketch is appended at the end of this mail. I have also updated the code to
make this behaviour opt-in per memcg. The patch stays coherent with the
original design if the user wants to keep a fixed value by default, and it
additionally provides a way to have a dynamically protected memcg without an
external monitor and user-space interaction.

We tested the change by comparing it with the current design on a v5.4-based
system with 3GB of RAM, using the steps below, which show that a fixed
memory.low makes the system experience high memory pressure while the memcg
keeps holding too much memory.

1. Set up the topology separately as in [1].
2. Place a memory-consuming process into B and have it consume 1GB of memory
   from userspace.
3. Generate global memory pressure by mlocking 1GB of memory.
4. Watch B's memory.current and PSI_MEM.
5. Repeat steps 3 and 4 twice.

[1] Configurations tested: fixed low=500MB; fixed low=600MB; patch with
    wm_decay_factor=36 (68s to decay by 1/2).
    Topology: A(low=500MB) / B(low=500MB)

What we observed (s/f are PSI_MEM some/full, u is B's usage):

             PSI_MEM, usage           PSI_MEM, usage        PSI_MEM, usage
             (Mlock 1GB)              (Mlock 2GB)           (stable)
low=600MB    s=23 f=17 u=720/600MB    s=91 f=48 u=202MB     s=68 f=32 u=106MB
low=500MB    s=22 f=13 u=660/530MB    s=88 f=50 u=156MB     s=30 f=20 u=120MB
patch        s=23 f=12 u=692/470MB    s=40 f=23 u=67MB      s=21 f=18 u=45MB
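As for the decayed watermark mentioned above, here is roughly how I think of
it, using the ~68s half-life that wm_decay_factor=36 is stated to give in [1].
The update formula and the numbers are only an illustration, not the patch
code:

#include <math.h>
#include <stdio.h>

/* The watermark follows usage upward immediately, and decays back towards
 * the current usage when the memcg has shrunk. */
static double decayed_watermark(double wm, double usage,
                                double dt_sec, double half_life_sec)
{
        if (usage >= wm)
                return usage;   /* expanding again: jump to the new usage */

        double decay = exp(-(dt_sec / half_life_sec) * log(2.0));
        return usage + (wm - usage) * decay;
}

int main(void)
{
        double wm = 1024;       /* MB: the memcg once grew to ~1GB */
        double usage = 200;     /* MB: usage left after global reclaim */

        for (int t = 0; t <= 272; t += 68) {
                printf("t=%3ds  watermark=%4.0f MB\n", t, wm);
                wm = decayed_watermark(wm, usage, 68, 68);
        }
        return 0;
}

So the watermark remembers that the memcg once used about 1GB, but that memory
of the peak fades over a couple of minutes once the system has reclaimed it and
the memcg does not grow back.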