From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <653d0bf6-9acc-4625-a307-af195437e744@suse.cz>
Date: Tue, 16 Apr 2024 22:14:42 +0200
MIME-Version: 1.0
Subject: Re: [PATCH] slub: limit number of slabs to scan in count_partial()
To: Jianfeng Wang, "Christoph Lameter (Ampere)"
Cc: "linux-mm@kvack.org", "linux-kernel@vger.kernel.org", "penberg@kernel.org", "rientjes@google.com", "iamjoonsoo.kim@lge.com", "akpm@linux-foundation.org", Junxiao Bi
References: <20240411164023.99368-1-jianfeng.w.wang@oracle.com> <38ef26aa-169b-48ad-81ad-8378e7a38f25@suse.cz> <1207c5d7-8bb7-4574-b811-0cd5f7eaf33d@suse.cz> <5552D041-8549-4E76-B3EC-03C76C117077@oracle.com> <567ed01c-f0f5-45ee-9711-cc5719ee7666@suse.cz> <91e70916-d86a-450e-8cf7-a083fc25d665@oracle.com>
From: Vlastimil Babka
In-Reply-To: <91e70916-d86a-450e-8cf7-a083fc25d665@oracle.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 4/16/24 8:58 PM, Jianfeng Wang wrote:
>
>
> On 4/15/24 12:35 AM, Vlastimil Babka wrote:
>> On 4/13/24 3:17 AM, Jianfeng Wang wrote:
>>>
>>>> On Apr 12, 2024, at 1:44 PM, Jianfeng Wang wrote:
>>>>
>>>> On 4/12/24 1:20 PM, Vlastimil Babka wrote:
>>>>> On 4/12/24 7:29 PM, Jianfeng Wang wrote:
>>>>>>
>>>>>> On 4/12/24 12:48 AM, Vlastimil Babka wrote:
>>>>>>> On 4/11/24 7:02 PM, Christoph Lameter (Ampere) wrote:
>>>>>>>> On Thu, 11 Apr 2024, Jianfeng Wang wrote:
>>>>>>>>
>>>>>>>>> So, the fix is to limit the number of slabs to scan in
>>>>>>>>> count_partial(), and output an approximated result if the list is
>>>>>>>>> too long. Default to 10000, which should be enough for most sane
>>>>>>>>> cases.
>>>>>>>>
>>>>>>>> That is a creative approach. The problem, though, is that objects
>>>>>>>> on the partial lists are kind of sorted. The partial slabs with
>>>>>>>> only a few objects available are at the start of the list, so that
>>>>>>>> allocations cause them to be removed from the partial list fast.
>>>>>>>> Full slabs do not need to be tracked on any list.
>>>>>>>>
>>>>>>>> The partial slabs with only a few objects in use are put at the end
>>>>>>>> of the partial list, in the hope that the few remaining objects
>>>>>>>> will also be freed, which would allow the slab folio itself to be
>>>>>>>> freed.
>>>>>>>>
>>>>>>>> So the object density may be higher at the beginning of the list.
>>>>>>>>
>>>>>>>> kmem_cache_shrink() will explicitly sort the partial lists to put
>>>>>>>> the partial pages in that order.
>>>>>>>>
>>>
>>> Realized that I'd do "echo 1 > /sys/kernel/slab/dentry/shrink" to sort
>>> the list explicitly. After that, the numbers become:
>>> N = 10000 -> diff = 7.1 %
>>> N = 20000 -> diff = 5.7 %
>>> N = 25000 -> diff = 5.4 %
>>> So I'd expect a ~5-7% difference after shrinking.
>>>
>>>>>>>> Can you run some tests showing the difference between the
>>>>>>>> estimation and the real count?
>>>>>>
>>>>>> Yes.
>>>>>> On a server with one NUMA node, I created a case that uses many
>>>>>> dentry objects.
>>>>>
>>>>> Could you describe in more detail how you make the dentry cache grow
>>>>> such a long partial slab list? Thanks.
>>>>>
>>>>
>>>> I used the fact that creating a folder creates a new dentry object,
>>>> and deleting a folder deletes all of its sub-folders' dentry objects.
>>>>
>>>> I then created N folders, each containing M empty sub-folders. The
>>>> assumption is that these operations consume a large number of dentry
>>>> objects in sequential order, so their slabs are very likely to be full
>>>> slabs. After all folders were created, I deleted a subset of the N
>>>> folders (one out of every two). This creates many holes, which turns a
>>>> subset of the full slabs into partial slabs.
>>
>> Thanks, right, so that's quite a deterministic way to achieve the long
>> partial lists with a very close to uniform ratio of free/used objects,
>> so no wonder the resulting accuracy is good and the diff is very small.
>> But in practice the workloads that may lead to long lists will not be so
>> uniform. The result after shrinking shows what happens if there's bias
>> in which slabs we inspect due to the sorting. Still, most of the slabs
>> will have the near-uniform free/used ratio, so the sorting will not make
>> much difference. Another workload might, though.
>>
>> So what happens if you inspect X slabs from the head and X from the
>> tail, as I suggested? That should help your test case even after you
>> sort, and should in theory also be more accurate for less uniform
>> workloads.
>
> Yes, the approach of counting from both directions and then approximating
> works better after sorting the partial list.

Yeah, I think we could go with that approach then. Let's do 5000 from each
side. You can check whether n->nr_partial < 10000 and in that case just
scan the whole list in a single direction with no approximation, and
otherwise scan 5000 from each side with approximation. I think the code as
you show below will scan some slabs in the middle of the list twice if
there are between 5000 and 10000 slabs on the list, so checking
n->nr_partial would avoid that. Thanks!
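
To illustrate what I mean, an untested sketch only (using the 5000/10000
from above as a define, and the per-slab counting as in your version
below):

---
/*
 * Sketch: if the partial list is short enough, walk it once and return an
 * exact count; otherwise sample MAX_PARTIAL_TO_SCAN slabs from each end
 * and scale the result up to the whole list length.
 */
#define MAX_PARTIAL_TO_SCAN	5000

static unsigned long count_partial(struct kmem_cache_node *n,
				   int (*get_count)(struct slab *))
{
	unsigned long flags;
	unsigned long x = 0;
	struct slab *slab;

	spin_lock_irqsave(&n->list_lock, flags);
	if (n->nr_partial < 2 * MAX_PARTIAL_TO_SCAN) {
		/* Short list: exact count, no slab is visited twice. */
		list_for_each_entry(slab, &n->partial, slab_list)
			x += get_count(slab);
	} else {
		unsigned long scanned = 0;

		/* Long list: sample both ends to reduce bias from sorting. */
		list_for_each_entry(slab, &n->partial, slab_list) {
			x += get_count(slab);
			if (++scanned == MAX_PARTIAL_TO_SCAN)
				break;
		}
		list_for_each_entry_reverse(slab, &n->partial, slab_list) {
			x += get_count(slab);
			if (++scanned == 2 * MAX_PARTIAL_TO_SCAN)
				break;
		}
		/* Scale the sampled count up to the whole list, capped. */
		x = mult_frac(x, n->nr_partial, 2 * MAX_PARTIAL_TO_SCAN);
		x = min(x, node_nr_objs(n));
	}
	spin_unlock_irqrestore(&n->list_lock, flags);
	return x;
}
---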

> Here is what I did.
> ---
> +static unsigned long count_partial(struct kmem_cache_node *n,
> +				   int (*get_count)(struct slab *))
> +{
> +	unsigned long flags;
> +	unsigned long x = 0;
> +	unsigned long scanned = 0;
> +	struct slab *slab;
> +
> +	spin_lock_irqsave(&n->list_lock, flags);
> +	list_for_each_entry(slab, &n->partial, slab_list) {
> +		x += get_count(slab);
> +		if (++scanned > MAX_PARTIAL_TO_SCAN)
> +			break;
> +	}
> +
> +	if (scanned > MAX_PARTIAL_TO_SCAN) {
> +		scanned = 0;
> +		list_for_each_entry_reverse(slab, &n->partial, slab_list) {
> +			x += get_count(slab);
> +			if (++scanned > MAX_PARTIAL_TO_SCAN) {
> +				/* Approximate total count of objects */
> +				x = mult_frac(x, n->nr_partial, scanned * 2);
> +				x = min(x, node_nr_objs(n));
> +				break;
> +			}
> +		}
> +	}
> +	spin_unlock_irqrestore(&n->list_lock, flags);
> +	return x;
> +}
> ---
>
> Results:
> ---
> * Pre-shrink:
> MAX_PARTIAL_TO_SCAN | Diff (single-direction) | Diff (double-direction) |
>  1000               |  0.43 %                 |  0.80 %                 |
>  5000               |  0.06 %                 |  0.16 %                 |
> 10000               |  0.02 %                 | -0.003 %                |
> 20000               |  0.009 %                |  0.03 %                 |
>
> * After-shrink:
> MAX_PARTIAL_TO_SCAN | Diff (single-direction) | Diff (double-direction) |
>  1000               | 12.46 %                 |  3.60 %                 |
>  5000               |  5.38 %                 |  0.22 %                 |
> 10000               |  4.99 %                 | -0.06 %                 |
> 20000               |  4.86 %                 | -0.17 %                 |
> ---
>
> For MAX_PARTIAL_TO_SCAN >= 5000, count_partial() returns the exact object
> count when the partial list is no longer than MAX_PARTIAL_TO_SCAN slabs.
> Otherwise, it counts MAX_PARTIAL_TO_SCAN slabs from the head and another
> MAX_PARTIAL_TO_SCAN from the tail, and outputs an approximation that
> shows a <1% difference.
>
> With a slightly larger limit (like 10000), count_partial() should produce
> the exact number for most cases (those that won't lead to a lockup) and
> avoid lockups with a good estimate.
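
Just to spell out what the scaling in the sketch above does, with made-up
numbers: if nr_partial = 50000 and the 2 * 5000 sampled slabs together
report 80000 objects via get_count(), the estimate is

  mult_frac(80000, 50000, 10000) = 400000

capped at node_nr_objs(n). The remaining error is only in how well the two
sampled ends represent the middle of the list.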
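
BTW, for anyone who wants to recreate the long partial list, my reading of
the folder create/delete scheme described above translates to something
like this userspace sketch (N, M and the base path are arbitrary
placeholders, not from the original test):

---
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

#define N	100000			/* top-level folders */
#define M	10			/* empty sub-folders per folder */
#define BASE	"/tmp/dentry-test"	/* hypothetical base path */

int main(void)
{
	char path[256];
	long i, j;

	mkdir(BASE, 0755);

	/* Phase 1: allocate dentries in roughly sequential order. */
	for (i = 0; i < N; i++) {
		snprintf(path, sizeof(path), BASE "/d%ld", i);
		mkdir(path, 0755);
		for (j = 0; j < M; j++) {
			snprintf(path, sizeof(path), BASE "/d%ld/s%ld", i, j);
			mkdir(path, 0755);
		}
	}

	/* Phase 2: delete every second tree, turning full slabs partial. */
	for (i = 0; i < N; i += 2) {
		for (j = 0; j < M; j++) {
			snprintf(path, sizeof(path), BASE "/d%ld/s%ld", i, j);
			rmdir(path);
		}
		snprintf(path, sizeof(path), BASE "/d%ld", i);
		rmdir(path);
	}
	return 0;
}
---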