From: Yang Shi
Date: Tue, 10 May 2022 12:34:20 -0700
Subject: Re: [PATCH] mm/memcg: support control THP behaviour in cgroup
To: CGEL
Cc: Michal Hocko, Andrew Morton, Johannes Weiner, Matthew Wilcox,
    Roman Gushchin, Shakeel Butt, Miaohe Lin, William Kucharski,
    Peter Xu, Hugh Dickins, Vlastimil Babka,
    Muchun Song, Suren Baghdasaryan, Linux Kernel Mailing List,
    Linux MM, Cgroups, Yang Yang

On Mon, May 9, 2022 at 6:43 PM CGEL wrote:
>
> On Mon, May 09, 2022 at 01:48:39PM +0200, Michal Hocko wrote:
> > On Mon 09-05-22 11:26:43, CGEL wrote:
> > > On Mon, May 09, 2022 at 12:00:28PM +0200, Michal Hocko wrote:
> > > > On Sat 07-05-22 02:05:25, CGEL wrote:
> > > > [...]
> > > > > If there are many containers to run on one host, and some of them have high
> > > > > performance requirements, the administrator could turn on THP for them:
> > > > >     # docker run -it --thp-enabled=always
> > > > > Then all the processes in those containers will always use THP,
> > > > > while other containers turn off THP with:
> > > > >     # docker run -it --thp-enabled=never
> > > >
> > > > I do not know. The THP config space is already too confusing and complex,
> > > > and this just adds on top of it. E.g. is the behavior of the knob
> > > > hierarchical? What is the policy if the parent memcg says madvise while
> > > > the child says always? How does the per-application configuration align
> > > > with all that (e.g. the memcg policy is madvise but the application says never
> > > > via prctl while still using some madvised regions - e.g. via a library)?
> > > >
> > >
> > > The cgroup THP behavior is aligned with the host and totally independent, just
> > > like /sys/fs/cgroup/memory.swappiness. That means if one cgroup configures
> > > 'always' for THP, it does not affect the host or other cgroups. This makes it
> > > simple for users to understand and control.
> >
> > All controls in cgroup v2 should be hierarchical. This is really
> > required for a proper delegation semantic.
> >
>
> Could we align with the semantics of /sys/fs/cgroup/memory.swappiness?
> Some distributions like Ubuntu are still using cgroup v1.

Other than the enabled flag, how would you handle the defrag flag
hierarchically? It is much more complicated.

> > > If the memcg policy is madvise but the application says never then, just like
> > > on the host, the result is no THP for that application.
> > >
> > > > > By doing this we could improve important containers' performance with a
> > > > > smaller THP footprint.
> > > >
> > > > Do we really want to provide something like THP-based QoS? To me it
> > > > sounds like a bad idea, and if the justification is "it might be useful"
> > > > then I would say no. So you really need to come up with a very good use
> > > > case to promote this further.
> > >
> > > At least on some 5G (communication technology) machines, it's useful to provide
> > > THP-based QoS. Those 5G machines use a micro-service software architecture;
> > > in other words, one service application runs in one container.
> >
> > I am not really sure I understand. If this is one application per
> > container (cgroup), then why do you really need a per-group setting?
> > Is the application a set of different processes which are only very
> > loosely coupled?
>
> For a micro-service architecture, the application in one container is not a
> set of loosely coupled processes; it aims to provide one particular service,
> so different containers mean different services, and different services
> have different QoS demands.
>
> The reason we need a per-group (per-container) setting is that most
> containers are managed by compose software, and the compose software provides
> a UI to decide how to run a container (like setting the swappiness value).
> For example, docker compose:
> https://docs.docker.com/compose/#compose-v2-and-the-new-docker-compose-command
>
> To make it clearer, here is a summary of why containers need this patch:
> 1. One machine can run different containers;
> 2. In some scenarios, a container runs only one service inside (which can be
> a single application);
> 3. Different containers provide different services, and different services
> have different QoS demands;
> 4. THP has a big influence on QoS: it is faster for memory access, but eats
> more memory;

I have been involved in this kind of topic discussion offline a couple
of times. But TBH I don't see how you could achieve QoS with this flag.
THP allocation is *NOT* guaranteed. And the overhead may be quite
high. It depends on how fragmented the system is.

> 5. Containers are usually managed by compose software, which treats the
> container as the base management unit;
> 6. This patch provides a cgroup THP controller, which can be a method to
> adjust container memory QoS.
>
> > > The container becomes the suitable management unit, not the whole host.
> > > And some performance-sensitive containers want THP to provide low-latency
> > > communication. But if we use THP with 'always', it will consume more memory
> > > (on our machine that is about 10% of total memory). And unnecessary huge
> > > pages will increase memory pressure, add latency for minor page faults,
> > > and add overhead when splitting huge pages or collapsing normal-sized
> > > pages into huge pages.
> >
> > It is still not really clear to me how you ensure that the whole
> > workload in the said container has the same THP requirements.
> > --
> > Michal Hocko
> > SUSE Labs
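
For context on the "per-application configuration" layer discussed above, the
following is a minimal, illustrative C sketch (not part of the patch under
discussion) of the two application-level THP controls that already exist
upstream and that any per-memcg knob would have to compose with:
madvise(MADV_HUGEPAGE) as a per-mapping opt-in, honored when the global
/sys/kernel/mm/transparent_hugepage/enabled policy is "always" or "madvise",
and prctl(PR_SET_THP_DISABLE) as a per-process opt-out (Linux >= 3.15). The
mapping size and the minimal error handling are arbitrary choices; even with
the hint, huge page allocation is never guaranteed, as noted above.

/*
 * Illustrative only: existing application-level THP controls.
 * Assumes a Linux kernel built with CONFIG_TRANSPARENT_HUGEPAGE.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/prctl.h>          /* prctl(), PR_SET_THP_DISABLE */

#define REGION_SIZE (64UL << 20)   /* 64 MB; arbitrary example size */

int main(void)
{
	void *buf = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Per-mapping opt-in: hint that this region should be backed by
	 * huge pages even when the global "enabled" policy is "madvise".
	 * It is only a hint; allocation is not guaranteed. */
	if (madvise(buf, REGION_SIZE, MADV_HUGEPAGE))
		perror("madvise(MADV_HUGEPAGE)");

	/* Touch the region so the fault path (or khugepaged later) has a
	 * chance to install huge pages. */
	memset(buf, 0, REGION_SIZE);

	/* Per-process opt-out: disable THP for this process for future
	 * faults, regardless of madvise hints or the global setting. */
	if (prctl(PR_SET_THP_DISABLE, 1, 0, 0, 0))
		perror("prctl(PR_SET_THP_DISABLE)");

	munmap(buf, REGION_SIZE);
	return 0;
}

Whether the mapping actually ends up backed by huge pages can be observed in
the AnonHugePages field of /proc/<pid>/smaps; it depends on the global
enabled/defrag settings and on how fragmented memory is at the time.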