Date: Fri, 7 Jun 2024 11:49:24 +0100
From: Ryan Roberts <ryan.roberts@arm.com>
Subject: Re: [PATCH 0/2] mm: swap: mTHP swap allocator base on swap cluster order
To: Barry Song, Chris Li
Cc: Andrew Morton, Kairui Song, "Huang, Ying", linux-kernel@vger.kernel.org, linux-mm@kvack.org
References: <20240524-swap-allocator-v1-0-47861b423b26@kernel.org>
On 30/05/2024 08:49, Barry Song wrote:
> On Wed, May 29, 2024 at 9:04 AM Chris Li wrote:
>>
>> I am spinning a new version for this series to address two issues
>> found in this series:
>>
>> 1) Oppo discovered a bug in the following line:
>>        + ci = si->cluster_info + tmp;
>> It should be "tmp / SWAPFILE_CLUSTER" instead of "tmp".
>> That is a serious bug, but trivial to fix.
>>
>> 2) Order 0 allocation currently blindly scans swap_map, disregarding
>> the cluster->order.
>> Given enough order 0 swap allocations (close to the
>> swap file size), the order 0 allocation head will eventually sweep
>> across the whole swapfile and destroy other cluster order allocations.
>>
>> The short term fix is just skipping clusters that are already assigned
>> to higher orders.
>>
>> In the long term, I want to unify the non-SSD path to use clusters for
>> locking and allocations as well, and just try to follow the last
>> allocation (less seeking) as much as possible.
>
> Hi Chris,
>
> I am sharing some new test results with you. This time, we used two
> zRAM devices by modifying get_swap_pages():
>
> zram0 -> dedicated for order-0 swpout
> zram1 -> dedicated for order-4 swpout
>
> We allocate a generous amount of space for zram1 to ensure it never gets
> full and always has ample free space. However, we found that Ryan's
> approach does not perform well even in this straightforward scenario.
> Despite zram1 having 80% of its space remaining, we still experience
> issues obtaining contiguous swap slots and encounter a high
> swpout_fallback ratio.
>
> Sorry for the report, Ryan :-)

No problem; clearly it needs to be fixed, and I'll help where I can. I'm
pretty sure that this is due to fragmentation preventing clusters from
being freed back to the free list.

>
> In contrast, with your patch, we consistently see the thp_swpout_fallback
> ratio at 0%, indicating a significant improvement in the situation.

Unless I've misunderstood something critical, Chris's change just allows a
CPU to steal a block from another CPU's current cluster for that order. So
it just takes longer (approximately by a factor of the number of CPUs in
the system) to get to the state where fragmentation is causing fallbacks?

As I said in the other thread, I think the more robust solution is to
implement scanning for high order blocks.

> Although your patch still has issues supporting the mixing of order-0 and
> order-4 pages in a swap device, it represents a significant improvement.
>
> I would be delighted to witness your approach advancing with Ying
> Huang’s assistance. However, due to my current commitments, I regret
> that I am unable to allocate time for debugging.
>
>> Chris
>>
>> On Fri, May 24, 2024 at 10:17 AM Chris Li wrote:
>>>
>>> This is the short term solution "swap cluster order" listed
>>> in my "Swap Abstraction" discussion, slide 8, at the recent
>>> LSF/MM conference.
>>>
>>> When commit 845982eb264bc "mm: swap: allow storage of all mTHP
>>> orders" was introduced, it only allocated mTHP swap entries
>>> from the new empty cluster list. That works well for PMD size THP,
>>> but it has a serious fragmentation issue reported by Barry:
>>>
>>> https://lore.kernel.org/all/CAGsJ_4zAcJkuW016Cfi6wicRr8N9X+GJJhgMQdSMp+Ah+NSgNQ@mail.gmail.com/
>>>
>>> The mTHP allocation failure rate rises to almost 100% after a few
>>> hours in Barry's test run.
>>>
>>> The reason is that all the empty clusters have been exhausted, while
>>> there are plenty of free swap entries in the clusters that are
>>> not 100% free.
>>>
>>> Address this by remembering the swap allocation order in the cluster,
>>> and keeping track of a per-order non-full cluster list for later
>>> allocation.
>>>
>>> This greatly improves the success rate of mTHP swap allocation.
>>> While I am still waiting for Barry's test result,
>>> I paste Kairui's test result here:
>>>
>>> I'm able to reproduce such an issue with a simple script (enabling all
>>> orders of mTHP):
>>>
>>> modprobe brd rd_nr=1 rd_size=$(( 10 * 1024 * 1024 ))
>>> swapoff -a
>>> mkswap /dev/ram0
>>> swapon /dev/ram0
>>>
>>> rmdir /sys/fs/cgroup/benchmark
>>> mkdir -p /sys/fs/cgroup/benchmark
>>> cd /sys/fs/cgroup/benchmark
>>> echo 8G > memory.max
>>> echo $$ > cgroup.procs
>>>
>>> memcached -u nobody -m 16384 -s /tmp/memcached.socket -a 0766 -t 32 -B binary &
>>>
>>> /usr/local/bin/memtier_benchmark -S /tmp/memcached.socket \
>>>     -P memcache_binary -n allkeys --key-minimum=1 \
>>>     --key-maximum=18000000 --key-pattern=P:P -c 1 -t 32 \
>>>     --ratio 1:0 --pipeline 8 -d 1024
>>>
>>> Before:
>>> Totals 48805.63 0.00 0.00 5.26045 1.19100 38.91100 59.64700 51063.98
>>> After:
>>> Totals 71098.84 0.00 0.00 3.60585 0.71100 26.36700 39.16700 74388.74
>>>
>>> And the fallback ratio dropped by a lot:
>>> Before:
>>> hugepages-32kB/stats/anon_swpout_fallback:15997
>>> hugepages-32kB/stats/anon_swpout:18712
>>> hugepages-512kB/stats/anon_swpout_fallback:192
>>> hugepages-512kB/stats/anon_swpout:0
>>> hugepages-2048kB/stats/anon_swpout_fallback:2
>>> hugepages-2048kB/stats/anon_swpout:0
>>> hugepages-1024kB/stats/anon_swpout_fallback:0
>>> hugepages-1024kB/stats/anon_swpout:0
>>> hugepages-64kB/stats/anon_swpout_fallback:18246
>>> hugepages-64kB/stats/anon_swpout:17644
>>> hugepages-16kB/stats/anon_swpout_fallback:13701
>>> hugepages-16kB/stats/anon_swpout:18234
>>> hugepages-256kB/stats/anon_swpout_fallback:8642
>>> hugepages-256kB/stats/anon_swpout:93
>>> hugepages-128kB/stats/anon_swpout_fallback:21497
>>> hugepages-128kB/stats/anon_swpout:7596
>>>
>>> (Still collecting more data; the successful swpouts were mostly done
>>> early, then the fallbacks began to increase, nearly 100% failure rate.)
>>>
>>> After:
>>> hugepages-32kB/stats/swpout:34445
>>> hugepages-32kB/stats/swpout_fallback:0
>>> hugepages-512kB/stats/swpout:1
>>> hugepages-512kB/stats/swpout_fallback:134
>>> hugepages-2048kB/stats/swpout:1
>>> hugepages-2048kB/stats/swpout_fallback:1
>>> hugepages-1024kB/stats/swpout:6
>>> hugepages-1024kB/stats/swpout_fallback:0
>>> hugepages-64kB/stats/swpout:35495
>>> hugepages-64kB/stats/swpout_fallback:0
>>> hugepages-16kB/stats/swpout:32441
>>> hugepages-16kB/stats/swpout_fallback:0
>>> hugepages-256kB/stats/swpout:2223
>>> hugepages-256kB/stats/swpout_fallback:6278
>>> hugepages-128kB/stats/swpout:29136
>>> hugepages-128kB/stats/swpout_fallback:52
>>>
>>> Reported-by: Barry Song <21cnbao@gmail.com>
>>> Tested-by: Kairui Song
>>> Signed-off-by: Chris Li
>>> ---
>>> Chris Li (2):
>>>       mm: swap: swap cluster switch to double link list
>>>       mm: swap: mTHP allocate swap entries from nonfull list
>>>
>>>  include/linux/swap.h |  18 ++--
>>>  mm/swapfile.c        | 252 +++++++++++++++++----------------------------
>>>  2 files changed, 93 insertions(+), 177 deletions(-)
>>> ---
>>> base-commit: c65920c76a977c2b73c3a8b03b4c0c00cc1285ed
>>> change-id: 20240523-swap-allocator-1534c480ece4
>>>
>>> Best regards,
>>> --
>>> Chris Li
>
> Thanks
> Barry