Date: Thu, 27 Feb 2020 07:50:11 -0500
From: Johannes Weiner
To: Yang Shi
Cc: Shakeel Butt, Andrew Morton, Michal Hocko, Tejun Heo, Roman Gushchin,
 Linux MM, Cgroups, LKML, Kernel Team
Subject: Re: [PATCH] mm: memcontrol: asynchronous reclaim for memory.high
Message-ID: <20200227125011.GB39625@cmpxchg.org>
References: <20200219181219.54356-1-hannes@cmpxchg.org>
 <20200226222642.GB30206@cmpxchg.org>
 <2be6ac8d-e290-0a85-5cfa-084968a7fe36@linux.alibaba.com>
In-Reply-To: <2be6ac8d-e290-0a85-5cfa-084968a7fe36@linux.alibaba.com>

On Wed, Feb 26, 2020 at 04:12:23PM -0800, Yang Shi wrote:
> On 2/26/20 2:26 PM, Johannes Weiner wrote:
> > So we should be able to fully resolve this problem inside the kernel,
> > without going through userspace, by accounting CPU cycles used by the
> > background reclaim worker to the cgroup that is being reclaimed.
>
> Actually I'm wondering if we really need to account the CPU cycles used
> by the background reclaimer or not. For our usecase (this may not be
> general), the purpose of the background reclaimer is to keep latency
> sensitive workloads out of direct reclaim (avoid the stall from direct
> reclaim). In fact it just "steals" CPU cycles from lower priority or
> best-effort workloads to guarantee that latency sensitive workloads
> behave well. If the "stolen" CPU cycles are accounted, it means the
> latency sensitive workloads would get throttled from somewhere else
> later, i.e. by CPU share.

That doesn't sound right.

"Not accounting" isn't an option. If we don't annotate the reclaim
work, the cycles will go to the root cgroup. That means the
latency-sensitive workload can steal cycles from the low-pri job, yes,
but also that the low-pri job can steal from the high-pri one.

Say the two workloads on the system are a web server and a compile job,
and the CPU shares are allocated 80:20. The compile job will cause most
of the reclaim. If the reclaim cycles can escape to the root cgroup, the
compile job will effectively consume more than 20 shares and the web
server will get less than 80.

But let's say we executed all background reclaim in the low-pri group
instead, to allow the high-pri group to steal cycles from the low-pri
group, but not the other way around. Again an 80:20 CPU distribution.
Now the reclaim work competes with the compile job over a very small
share of CPU, and the reclaim work that the high priority job is relying
on runs at low priority. That means the compile job can cause the web
server to go into direct reclaim. That's a priority inversion.
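To put rough numbers on the escape-to-root case, here is a toy model in
plain userspace C (not kernel code; the 10% reclaim figure and the 80:20
weights are assumptions for illustration only):

#include <stdio.h>

int main(void)
{
        double w_high = 80.0, w_low = 20.0;  /* configured CPU weights */
        double reclaim = 0.10;               /* fraction of total CPU burned by
                                                unaccounted background reclaim
                                                triggered by the low-pri job */

        /* The weights only divide up whatever CPU is left after the
           unaccounted reclaim work has taken its cut in the root cgroup. */
        double rest = 1.0 - reclaim;
        double high = rest * w_high / (w_high + w_low);
        double low  = rest * w_low  / (w_high + w_low) + reclaim;

        printf("high-pri web server: %.0f%% of CPU (configured for 80%%)\n",
               high * 100);
        printf("low-pri compile job: %.0f%% of CPU (configured for 20%%)\n",
               low * 100);
        return 0;
}

With those made-up numbers, the web server ends up with 72% of the
machine instead of 80%, and the compile job with an effective 28%
instead of 20%.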
> We definitely don't want the background reclaimer to eat all CPU
> cycles. So the whole background reclaimer is opt-in. The higher level
> cluster management and administration components make sure the cgroups
> are set up correctly, i.e. enable it for specific cgroups, set up the
> watermarks properly, etc.
>
> Of course, this may not be universal and may just be fine for some
> specific configurations or usecases.

Yes, I suspect it works for you because you set up watermarks on the
high-pri job but not on the background jobs, thus making sure only
high-pri jobs can steal cycles from the rest of the system.

However, we do want low-pri jobs to have background reclaim as well. A
compile job may not be latency-sensitive, but it still benefits from a
throughput POV when the reclaim work runs concurrently. And if there are
idle CPU cycles available that the high-pri work isn't using right now,
it would be wasteful not to make use of them.

So yes, I can see how such an accounting loophole can be handy. By
letting reclaim CPU cycles sneak out of containment, you can kind of use
it for high-pri jobs. Or rather *one* high-pri job, because more than
one becomes unsafe again, where one can steal a large number of cycles
from others at the same priority.

But it's more universally useful to properly account the CPU cycles that
are actually consumed by a cgroup to that cgroup, and then reflect the
additional CPU explicitly in the CPU weight configuration. That way you
can safely have background reclaim on jobs of all priorities.
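If it helps, a minimal userspace sketch of that last step, i.e. bumping
the weight of the group that now absorbs its own reclaim cycles. It
assumes cgroup2 mounted at /sys/fs/cgroup with the cpu controller
enabled; the group names web.slice and build.slice and the extra five
weight points are made-up examples, not a recommendation:

#include <stdio.h>
#include <stdlib.h>

/* Write a cpu.weight value for a cgroup2 group. */
static void write_weight(const char *cgroup, int weight)
{
        char path[256];
        FILE *f;

        snprintf(path, sizeof(path), "/sys/fs/cgroup/%s/cpu.weight", cgroup);
        f = fopen(path, "w");
        if (!f) {
                perror(path);
                exit(1);
        }
        fprintf(f, "%d\n", weight);
        fclose(f);
}

int main(void)
{
        /* Nominal 80:20 split, plus a few extra points for the high-pri
           group so the reclaim CPU now charged to it doesn't eat into
           the budget of its actual workload. */
        write_weight("web.slice", 85);
        write_weight("build.slice", 20);
        return 0;
}

The point is only that the extra CPU shows up in an explicit, per-group
knob instead of leaking into the root cgroup unaccounted.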