Date: Tue, 8 Jun 2021 17:12:37 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: Roman Gushchin <guro@fb.com>
Cc: Tejun Heo, Alexander Viro, Jan Kara, Dennis Zhou, Dave Chinner, linux-mm@kvack.org
Subject: Re: [PATCH v9 8/8] writeback, cgroup: release dying cgwbs by switching attached inodes
Message-Id: <20210608171237.be2f4223de89458841c10fd4@linux-foundation.org>
In-Reply-To: <20210608230225.2078447-9-guro@fb.com>
References: <20210608230225.2078447-1-guro@fb.com> <20210608230225.2078447-9-guro@fb.com>

On Tue, 8 Jun 2021 16:02:25 -0700 Roman Gushchin <guro@fb.com> wrote:

> Asynchronously try to release dying cgwbs by switching attached inodes
> to the nearest living ancestor wb. This helps to get rid of the per-cgroup
> writeback structures themselves and of the pinned memory and block cgroups,
> which are significantly larger structures (mostly due to their large
> per-cpu statistics data).
> This prevents memory waste and helps to avoid various scalability
> problems caused by large piles of dying cgroups.
>
> Reuse the existing mechanism of inode switching used for foreign inode
> detection. To speed things up, batch up to 115 inode switches in a
> single operation (the maximum number is selected so that the resulting
> struct inode_switch_wbs_context fits into 1024 bytes). Because every
> switch consists of two steps separated by an RCU grace period, it
> would be too slow without batching. Please note that the whole batch
> counts as a single operation (when increasing/decreasing
> isw_nr_in_flight). This allows umounting to keep working (flush the
> switching queue), while preventing cleanups from consuming the whole
> switching quota and effectively blocking the frn switching.
>
> A cgwb cleanup operation can fail for various reasons (e.g. not
> enough memory, the cgwb has in-flight/pending io, an attached inode is
> in the wrong state, etc.). In this case the next scheduled cleanup will
> make a new attempt. An attempt is made each time a new cgwb is offlined
> (in other words, each time a memcg and/or a blkcg is deleted by a user).
> In the future, an additional attempt scheduled by a timer can be
> implemented.
>
> ...
>
> +/*
> + * Maximum inodes per isw. A specific value has been chosen to make
> + * struct inode_switch_wbs_context fit into 1024 bytes kmalloc.
> + */
> +#define WB_MAX_INODES_PER_ISW 115

Can't we do 1024/sizeof(struct inode_switch_wbs_context)?