From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-2.5 required=3.0 tests=MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,USER_AGENT_SANE_1 autolearn=no autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2DE73CA9EA0 for ; Tue, 22 Oct 2019 13:31:52 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id ED9F62053B for ; Tue, 22 Oct 2019 13:31:51 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org ED9F62053B Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id A2A566B000A; Tue, 22 Oct 2019 09:31:51 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 98C526B000C; Tue, 22 Oct 2019 09:31:51 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 87C216B000D; Tue, 22 Oct 2019 09:31:51 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0225.hostedemail.com [216.40.44.225]) by kanga.kvack.org (Postfix) with ESMTP id 5A32F6B000A for ; Tue, 22 Oct 2019 09:31:51 -0400 (EDT) Received: from smtpin29.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with SMTP id E667E688B for ; Tue, 22 Oct 2019 13:31:50 +0000 (UTC) X-FDA: 76071508380.29.clam50_1016e72bf0836 X-HE-Tag: clam50_1016e72bf0836 X-Filterd-Recvd-Size: 2835 Received: from mx1.suse.de (mx2.suse.de [195.135.220.15]) by imf42.hostedemail.com (Postfix) with ESMTP for ; Tue, 22 Oct 2019 13:31:50 +0000 (UTC) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id 53E02B13E; Tue, 22 Oct 2019 13:31:49 +0000 (UTC) Date: Tue, 22 Oct 2019 15:31:48 +0200 From: Michal Hocko To: Roman Gushchin Cc: linux-mm@kvack.org, Johannes Weiner , linux-kernel@vger.kernel.org, kernel-team@fb.com, Shakeel Butt , Vladimir Davydov , Waiman Long , Christoph Lameter Subject: Re: [PATCH 00/16] The new slab memory controller Message-ID: <20191022133148.GP9379@dhcp22.suse.cz> References: <20191018002820.307763-1-guro@fb.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20191018002820.307763-1-guro@fb.com> User-Agent: Mutt/1.10.1 (2018-07-13) X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Thu 17-10-19 17:28:04, Roman Gushchin wrote: > This patchset provides a new implementation of the slab memory controller, > which aims to reach a much better slab utilization by sharing slab pages > between multiple memory cgroups. Below is the short description of the new > design (more details in commit messages). > > Accounting is performed per-object instead of per-page. Slab-related > vmstat counters are converted to bytes. Charging is performed on page-basis, > with rounding up and remembering leftovers. > > Memcg ownership data is stored in a per-slab-page vector: for each slab page > a vector of corresponding size is allocated. To keep slab memory reparenting > working, instead of saving a pointer to the memory cgroup directly an > intermediate object is used. It's simply a pointer to a memcg (which can be > easily changed to the parent) with a built-in reference counter. This scheme > allows to reparent all allocated objects without walking them over and changing > memcg pointer to the parent. > > Instead of creating an individual set of kmem_caches for each memory cgroup, > two global sets are used: the root set for non-accounted and root-cgroup > allocations and the second set for all other allocations. This allows to > simplify the lifetime management of individual kmem_caches: they are destroyed > with root counterparts. It allows to remove a good amount of code and make > things generally simpler. What is the performance impact? Also what is the effect on the memory reclaim side and the isolation. I would expect that mixing objects from different cgroups would have a negative/unpredictable impact on the memcg slab shrinking. -- Michal Hocko SUSE Labs