Date: Thu, 23 Nov 2023 17:20:25 +0100
From: Jan Kara <jack@suse.cz>
To: Chengming Zhou
Cc: Jan Kara, LKML, linux-mm, Tejun Heo, Johannes Weiner, Christoph Hellwig, shr@devkernel.io, neilb@suse.de, Michal Hocko
Subject: Re: Question: memcg dirty throttle caused by low per-memcg dirty thresh
Message-ID: <20231123162025.4sibecbomc3apfkw@quack3>
In-Reply-To: <7e3d3ff6-b453-404b-beaf-cdd23fb3e1a2@linux.dev>
References: <109029e0-1772-4102-a2a8-ab9076462454@linux.dev> <20231122144932.m44oiw5lufwkc5pw@quack3> <7e3d3ff6-b453-404b-beaf-cdd23fb3e1a2@linux.dev>

On Wed 22-11-23 23:32:50, Chengming Zhou wrote:
> On 2023/11/22 22:49, Jan Kara wrote:
> > Hello!
> >
> > On Wed 22-11-23 17:38:25, Chengming Zhou wrote:
> >> Sorry to bother you, we encountered a problem related to the memcg
> >> dirty throttle after migrating from cgroup v1 to v2, so we want to
> >> ask for some comments or suggestions.
> >>
> >> 1. Problem
> >>
> >> We have the "containerd" service running under system.slice, with
> >> its memory.max set to 5GB. It is constantly throttled in
> >> balance_dirty_pages() since the memcg has more dirty memory than
> >> the memcg dirty thresh.
> >>
> >> We didn't have this problem on cgroup v1, because cgroup v1 has
> >> neither per-memcg writeback nor a per-memcg dirty thresh. Only the
> >> global dirty thresh is checked in balance_dirty_pages().
> >
> > As Michal writes, if you allow too many memcg pages to become dirty,
> > you might be facing issues with page reclaim, so there are actually
> > good reasons why you want the amount of dirty pages in each memcg
> > reasonably limited. Also
>
> Yes, the memcg dirty limit (20%) is good for the memcg reclaim path.
> But for some workloads (like a burst dirtier) which may only create
> many dirty pages in a short time, we want 60% of its memory.max to be
> dirtiable without being throttled. And this is not very harmful for
> its memcg reclaim path.

Well, I'd rather say that your memcg likely doesn't hit the reclaim path
much (the memory is reasonably sized for the task) and thus a high
fraction of dirty pagecache pages does not really matter much.
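Just to put rough numbers on it (a simplified model only, not the exact
kernel computation - mdtc_calc_avail() additionally caps the dirtiable
amount by file pages and system-wide clean memory):

#include <stdio.h>

int main(void)
{
	/* memory.max = 5GB as in the containerd example above */
	unsigned long long max = 5ULL << 30;

	/* the ~20% effective ratio vs. the desired 60% */
	printf("dirty thresh @20%%: %llu MB\n", (max * 20 / 100) >> 20);
	printf("dirty thresh @60%%: %llu MB\n", (max * 60 / 100) >> 20);
	return 0;
}

This prints 1024 MB vs 3072 MB, so a burst writing a few GB of dirty
data starts getting throttled almost immediately with the 20% ratio.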
> > generally increasing the number of available dirty pages beyond say
> > 1GB is not going to bring any benefit in the overall writeback
> > performance. It may still be useful in case you generate a lot of
> > (or large) temporary files which get quickly deleted and thus with a
> > high enough dirty limit they don't have to be written to the disk at
> > all. Similarly if the generation of dirty data is very bursty (i.e.
> > you generate a lot of dirty data in a short while and then don't
> > dirty anything for a long time), having a higher dirty limit may be
> > useful. What is your usecase that you think you'll benefit from a
> > higher dirty limit?
>
> I think it's the burst dirtier in our case, and we get a good
> performance improvement if we change the global dirty_ratio to 60 just
> for testing.

OK.

> >> 3. Solution?
> >>
> >> But we couldn't think of a good solution to support this. The
> >> current memcg dirty thresh is calculated by a complex rule:
> >>
> >>   memcg dirty thresh = memcg avail * dirty_ratio
> >>
> >> where memcg avail comes from a combination of memcg max/high and
> >> memcg file pages, and is capped by the system-wide clean memory
> >> excluding the amount being used in the memcg.
> >>
> >> Although we may find a way to calculate the per-memcg dirty thresh,
> >> we can't use it directly, since we still need to calculate and
> >> distribute the dirty thresh to the per-wb dirty thresh shares.
> >>
> >>   R - A - B
> >>        \-- C
> >>
> >> For example, if we know the dirty thresh of A, but the wb is in C,
> >> we have no way to distribute the dirty thresh shares to the wb in C.
> >>
> >> But we have to get the dirty thresh of the wb in C, since we need it
> >> to control the throttling of the wb in balance_dirty_pages().
> >>
> >> I may have missed something above, but the problem seems clear IMHO.
> >> Looking forward to any comment or suggestion.
> >
> > I'm not sure I follow what the problem is here. In
> > balance_dirty_pages() we have the global dirty threshold (tracked in
> > gdtc) and the memcg dirty threshold (tracked in mdtc). This can get
> > further scaled down based on the device throughput (that is the
> > difference between 'thresh' and 'wb_thresh'), but that is independent
> > of the way mdtc->thresh is calculated. So if we provide a different
> > way of calculating mdtc->thresh, technically everything should keep
> > working as is.
>
> Sorry for the confusion. The problem is exactly how to calculate
> mdtc->thresh.
>
>   R - A - B
>        \-- C
>
> Case 1:
>
> Suppose C has "memory.dirty_limit" set: should we just use it as
> mdtc->thresh? I see the current code also considers the system clean
> memory in mdtc_calc_avail(); should we also consider it when
> "memory.dirty_limit" is set?
>
> Case 2:
>
> Suppose C doesn't have "memory.dirty_limit" set, but A has it set: how
> do we calculate C's mdtc->thresh?
>
> Obviously we can't directly use A's "memory.dirty_limit", since it
> should be distributed between B and C?
>
> So the problem is that I don't know how to reasonably calculate
> mdtc->thresh, even given a memcg tree where some memcgs have
> "memory.dirty_limit" set. :\

I see, thanks for the explanation. I guess we would need to redistribute
dirtiable memory in a hierarchical manner like we do it for other
resources. The most natural would probably be to somehow follow the
behavior of other memcg memory limits - but I know close to nothing
about how that works, so Michal would have to elaborate.
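For illustration, one shape such hierarchical redistribution could take
(a rough userspace sketch with hypothetical names; this struct memcg and
effective_dirty_thresh() do not exist in this form in the kernel, and
this is not a worked-out proposal):

/* Hypothetical: each memcg may carry an optional dirty limit. */
struct memcg {
	struct memcg *parent;
	unsigned long long dirty_limit;	/* bytes; 0 == not set */
};

/*
 * Cap the effective dirty thresh of the wb's memcg by every ancestor
 * that has a limit configured. In the R - A - B / C example a limit
 * set only on A then constrains both B and C: the siblings simply
 * compete under A's cap, much like they compete under a hierarchical
 * memory limit, so no explicit per-sibling shares are needed.
 */
static unsigned long long effective_dirty_thresh(const struct memcg *cg,
						 unsigned long long global_thresh)
{
	unsigned long long thresh = global_thresh;

	for (; cg; cg = cg->parent)
		if (cg->dirty_limit && cg->dirty_limit < thresh)
			thresh = cg->dirty_limit;

	return thresh;
}

The per-wb share could then still be carved out of this effective
thresh as it is today, since the 'thresh' vs 'wb_thresh' scaling is
independent of how mdtc->thresh itself is calculated.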
								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR