Date: Tue, 17 Jan 2023 15:54:34 +0100 (CET)
From: Christoph Lameter <cl@gentwo.de>
To: Jesper Dangaard Brouer
Cc: netdev@vger.kernel.org, linux-mm@kvack.org, Andrew Morton, Mel Gorman, Joonsoo Kim, penberg@kernel.org, vbabka@suse.cz, Jakub Kicinski, "David S. Miller", edumazet@google.com, pabeni@redhat.com
Subject: Re: [PATCH RFC] mm+net: allow to set kmem_cache create flag for SLAB_NEVER_MERGE
In-Reply-To: <167396280045.539803.7540459812377220500.stgit@firesoul>
Message-ID: <36f5761f-d4d9-4ec9-a64-7a6c6c8b956f@gentwo.de>
References: <167396280045.539803.7540459812377220500.stgit@firesoul>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 17 Jan 2023, Jesper Dangaard Brouer wrote:

> When running different network performance microbenchmarks, I started
> to notice that performance was reduced (slightly) when machines had
> longer uptimes. I believe the cause was 'skbuff_head_cache' got
> aliased/merged into the general slub for 256 bytes sized objects (with
> my kernel config, without CONFIG_HARDENED_USERCOPY).
Well, that is a common effect that we see in multiple subsystems, and it is due to general memory fragmentation. Depending on the prior load, performance could actually be better after some runtime, because the caches are already populated and allocations avoid the page allocator etc.

The merging could actually be beneficial as well: a merged cache may have more partial slabs to allocate from, which again avoids expensive calls into the page allocator.

I wish we had some effective way of doing memory defragmentation.
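For illustration, here is a simplified userspace model of the merge decision being discussed. This is a sketch, not the kernel's actual find_mergeable() logic: the size-class rounding and the flag handling are deliberately reduced, and the SLAB_NEVER_MERGE name follows the RFC being replied to.

```python
# Hypothetical model (NOT kernel code) of slab cache aliasing/merging:
# a dedicated cache whose object size rounds into an existing size class
# gets aliased to the general cache, unless a never-merge flag is set.

SLAB_NEVER_MERGE = 0x1  # flag name taken from the RFC; value is arbitrary here

def size_class(size):
    """Round up to the next power-of-two size class >= 8 (simplified)."""
    c = 8
    while c < size:
        c *= 2
    return c

class CachePool:
    def __init__(self):
        # Maps a size class to the name of the first mergeable cache created
        # for that class; later mergeable creations alias to it.
        self.caches = {}

    def kmem_cache_create(self, name, size, flags=0):
        if flags & SLAB_NEVER_MERGE:
            return name  # dedicated cache, never aliased
        return self.caches.setdefault(size_class(size), name)

pool = CachePool()
generic = pool.kmem_cache_create("kmalloc-256", 256)
# A 232-byte object rounds into the 256-byte class and gets aliased:
skb = pool.kmem_cache_create("skbuff_head_cache", 232)
# With the flag, the cache stays separate:
skb_own = pool.kmem_cache_create("skbuff_head_cache", 232, SLAB_NEVER_MERGE)
```

In this toy model, `skb` resolves to "kmalloc-256" (the aliasing Jesper observed), while `skb_own` keeps its own identity. The real kernel additionally compares alignment, constructors, and debug/hardening flags before merging.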