From: Stefan Roesch <shr@devkernel.io>
To: David Hildenbrand
Cc: Johannes Weiner, kernel-team@fb.com, linux-mm@kvack.org, riel@surriel.com, mhocko@suse.com, linux-kselftest@vger.kernel.org, linux-doc@vger.kernel.org, akpm@linux-foundation.org, Mike Kravetz
Subject: Re: [PATCH v4 0/3] mm: process/cgroup ksm support
Date: Thu, 30 Mar 2023 09:19:19 -0700
References: <20230310182851.2579138-1-shr@devkernel.io> <273a2f82-928f-5ad1-0988-1a886d169e83@redhat.com> <20230315210545.GA116016@cmpxchg.org> <20230315211927.GB116016@cmpxchg.org>

David Hildenbrand writes:

> On 15.03.23 22:19, Johannes Weiner wrote:
>> On Wed, Mar 15, 2023 at 05:05:47PM -0400, Johannes Weiner wrote:
>>> On Wed, Mar 15, 2023 at 09:03:57PM +0100, David Hildenbrand wrote:
>>>> On 10.03.23 19:28, Stefan Roesch wrote:
>>>>> So far KSM can only be enabled by calling madvise for memory regions. To
>>>>> be able to use KSM for more workloads, KSM needs to have the ability to be
>>>>> enabled / disabled at the process / cgroup level.
>>>>>
>>>>> Use case 1:
>>>>> The madvise call is not available in the programming language. An example
>>>>> of this is programs with forked workloads using a garbage-collected
>>>>> language without pointers. In such a language madvise cannot be made
>>>>> available.
>>>>>
>>>>> In addition, the addresses of objects get moved around as they are garbage
>>>>> collected. KSM sharing needs to be enabled "from the outside" for these
>>>>> types of workloads.
>>>>>
>>>>> Use case 2:
>>>>> The same interpreter can also be used for workloads where KSM brings no
>>>>> benefit or even has overhead. We'd like to be able to enable KSM on a
>>>>> workload-by-workload basis.
>>>>>
>>>>> Use case 3:
>>>>> With the madvise call, sharing opportunities are only enabled for the
>>>>> current process: it is a workload-local decision. A considerable number of
>>>>> sharing opportunities may exist across multiple workloads or jobs. Only a
>>>>> higher-level entity like a job scheduler or container can know for certain
>>>>> whether it is running one or more instances of a job. That job scheduler,
>>>>> however, doesn't have the necessary internal workload knowledge to make
>>>>> targeted madvise calls.
>>>>>
>>>>> Security concerns:
>>>>> In previous discussions, security concerns have been brought up. The
>>>>> problem is that an individual workload does not have knowledge about what
>>>>> else is running on a machine. Therefore it has to be very conservative
>>>>> about which memory areas can be shared. However, if the system is
>>>>> dedicated to running multiple jobs within the same security domain, it's
>>>>> the job scheduler that has the knowledge that sharing can be safely
>>>>> enabled and is even desirable.
>>>>>
>>>>> Performance:
>>>>> Experiments with using UKSM have shown a capacity increase of around 20%.
>>>>
>>>> Stefan, can you do me a favor and investigate which pages we end up
>>>> deduplicating -- especially if it's mostly only the zeropage and if it's
>>>> still that significant when disabling THP?
>>>>
>>>> I'm currently investigating with some engineers on playing with enabling
>>>> KSM on some selected processes (enabling it blindly on all VMAs of that
>>>> process via madvise()).
>>>>
>>>> One thing we noticed is that such (~50 times) 20MiB processes end up saving
>>>> ~2MiB of memory per process. That made me suspicious, because it's the THP
>>>> size.
>>>>
>>>> What I think happens is that we have a 2 MiB area (stack?) and only touch a
>>>> single page. We get a whole 2 MiB THP populated. Most of that THP is
>>>> zeroes.
>>>>
>>>> KSM somehow ends up splitting that THP and deduplicates all resulting
>>>> zeropages. Thus, we "save" 2 MiB. Actually, it's more like we no longer
>>>> "waste" 2 MiB. I think the processes with KSM have less (none) THP than the
>>>> processes with THP enabled, but I only took a look at a sample of the
>>>> process' smaps so far.
>>>
>>> THP and KSM is indeed an interesting problem. Better TLB hits with
>>> THPs, but reduced chance of deduplicating memory - which may or may
>>> not result in more IO that outweighs any THP benefits.
>>>
>>> That said, the service in the experiment referenced above has swap
>>> turned on and is under significant memory pressure. Unused subpages
>>> would get swapped out. The difference from KSM was from deduplicating
>>> pages that were in active use, not internal THP fragmentation.
>>
>> Brainfart, my apologies. It could have been the KSM-induced splits
>> themselves that allowed the unused subpages to get swapped out in the
>> first place.
>
> Yes, it's not easy to spot that this is implemented. I just wrote a simple
> reproducer to confirm: modifying a single subpage in a bunch of THP ranges
> will populate a THP whereby most of the THP is zeroes.
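For reference, a minimal sketch of such a reproducer (my reconstruction, not
David's actual program; it assumes x86-64 with 2 MiB THPs, THP set to "always"
or "madvise", and ksmd running via "echo 1 > /sys/kernel/mm/ksm/run"):

#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

#define THP_SIZE  (2UL << 20)   /* assumed THP size on x86-64 */
#define NR_RANGES 50

int main(void)
{
	for (int i = 0; i < NR_RANGES; i++) {
		/* Over-allocate so we can carve out a THP-aligned range. */
		char *raw = mmap(NULL, 2 * THP_SIZE, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (raw == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		char *thp = (char *)(((unsigned long)raw + THP_SIZE - 1) &
				     ~(THP_SIZE - 1));

		/* Ask for a THP, then dirty a single subpage: the fault can
		 * populate a whole 2 MiB THP that is almost entirely zeroes. */
		madvise(thp, THP_SIZE, MADV_HUGEPAGE);
		thp[0] = i + 1;

		/* Opt the range in to KSM, the way a process-level knob would
		 * do "from the outside". */
		if (madvise(thp, THP_SIZE, MADV_MERGEABLE))
			perror("madvise(MADV_MERGEABLE)");
	}

	/* Keep the mappings alive while ksmd scans, splits the THPs and
	 * merges the zero-filled subpages. */
	pause();
	return 0;
}

With 50 ranges, roughly 511 zero subpages per range are merge candidates, so
pages_sharing should climb into the tens of thousands while pages_shared stays
small.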
> As long as you keep accessing the single subpage via the PMD, I assume the
> chances of getting it swapped out are lower, because the folio will be
> referenced/dirty.
>
> KSM will come around and split the THP filled mostly with zeroes and
> deduplicate the resulting zero pages.
>
> [that's where a zeropage-only KSM could be very valuable eventually, I think]
>

We can certainly run an experiment where THP is turned off to verify whether we
observe similar savings.

>> But no, I double checked that workload just now. On a weekly average,
>> it has about 50 anon THPs and 12 million regular anon pages. THP is not a
>> factor in the reduction results.
>
> You mean with KSM enabled or with KSM disabled for the process? Not sure if
> your observation reliably implies that the scenario described couldn't have
> happened, but it's late in Germany already :)
>
> In any case, it would be nice to get a feeling for how much variety there is
> in these 20% of deduplicated pages. For example, whether it's 99% the same
> page or just a wild collection.
>
> Maybe "cat /sys/kernel/mm/ksm/pages_shared" would be expressive already. But I
> seem to be getting "126" in my simple example where only zeropages should get
> deduplicated, so I have to take another look at the stats tomorrow ...

/sys/kernel/mm/ksm/pages_shared is over 10000 when we run this on an Instagram
workload. The workload consists of 36 processes plus a few sidecar processes.

Also, to give some idea for an individual VMA:

7ef5d5600000-7ef5e5600000 rw-p 00000000 00:00 0 (Size: 262144 KB, KSM: 73160 KB)
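In case it's useful for comparing runs, a small sketch that dumps the global
KSM counters (the per-VMA "KSM:" figure above is not something this produces;
only the system-wide counters live in /sys/kernel/mm/ksm/):

#include <stdio.h>

/* Print one counter from /sys/kernel/mm/ksm/, if present. */
static void print_counter(const char *name)
{
	char path[128];
	unsigned long long val;
	FILE *f;

	snprintf(path, sizeof(path), "/sys/kernel/mm/ksm/%s", name);
	f = fopen(path, "r");
	if (!f)
		return;		/* e.g. CONFIG_KSM disabled */
	if (fscanf(f, "%llu", &val) == 1)
		printf("%-16s %llu\n", name, val);
	fclose(f);
}

int main(void)
{
	/* pages_shared: KSM pages in use; pages_sharing: references to
	 * them, i.e. the actual deduplication factor. */
	print_counter("pages_shared");
	print_counter("pages_sharing");
	print_counter("pages_unshared");
	print_counter("pages_volatile");
	print_counter("full_scans");
	return 0;
}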