From: Jianfeng Wang <jianfeng.w.wang@oracle.com>
To: linux-mm@kvack.org, vbabka@suse.cz
Cc: cl@linux.com, rientjes@google.com, akpm@linux-foundation.org
Subject: [PATCH v4 1/2] slub: introduce count_partial_free_approx()
Date: Mon, 22 Apr 2024 21:34:19 -0700
Message-ID: <20240423043420.13854-2-jianfeng.w.wang@oracle.com>
X-Mailer: git-send-email 2.42.1
In-Reply-To: <20240423043420.13854-1-jianfeng.w.wang@oracle.com>
References: <20240423043420.13854-1-jianfeng.w.wang@oracle.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
When reading "/proc/slabinfo", the kernel needs to report the number of
free objects for each kmem_cache. The current implementation uses
count_partial() to get it by scanning each kmem_cache_node's partial
slab list and summing free objects from every partial slab. This process
must hold the per-kmem_cache_node spinlock with IRQs disabled and may
take a long time. Consequently, it can block slab allocations on other
CPUs and cause timeouts for network devices when the partial list is
long. In production, even the NMI watchdog can be triggered because of
it: e.g., for "buffer_head", the number of partial slabs was observed to
be ~1M in one kmem_cache_node. This problem has also been confirmed by
others [1-3].

Iterating over a partial list to get the exact count of objects can
cause soft lockups for a long list, with or without the lock (e.g., if
preemption is disabled), and may not be very useful anyway: the object
count can change right after the lock is released. The alternative of
maintaining free-object counters requires atomic operations on the fast
path [3].

So, the fix is to introduce count_partial_free_approx(), which returns
the number of free objects in a kmem_cache_node's partial list. It
limits the number of slabs to scan and avoids walking the whole list by
producing an approximation for a long list. Suppose the limit is N. If
the list's length is not greater than N, return the exact count by
traversing the whole list; if its length is greater than N, return an
approximated count by traversing a subset of the list. The chosen method
is to scan N/2 slabs from the list's head and N/2 slabs from the tail.
For a partial list with ~280K slabs, benchmarks show that this performs
better than counting only from the list's head after slabs have been
sorted by kmem_cache_shrink(). Default the limit to 10000, as it
produces an approximation within 1% of the exact count for both
scenarios. Then, use count_partial_free_approx() in get_slabinfo().
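
To make the sampling and extrapolation concrete, here is a simplified
userspace sketch (for illustration only, not part of the patch):
fake_slab, NR_PARTIAL, MAX_TO_SCAN, and approx_free() are made-up names,
and a plain array stands in for the partial list, whereas the kernel
code below walks struct slab entries on n->partial under n->list_lock
and uses mult_frac() to avoid overflow when scaling.

/*
 * Illustrative model of the head+tail sampling described above.
 * All names and sizes here are hypothetical.
 */
#include <stdio.h>

#define NR_PARTIAL	280000UL	/* simulated partial list length */
#define MAX_TO_SCAN	10000UL		/* cap on slabs visited */

struct fake_slab { unsigned long objects, inuse; };
static struct fake_slab partial[NR_PARTIAL];

static unsigned long approx_free(void)
{
	unsigned long x = 0, scanned = 0, i;

	if (NR_PARTIAL <= MAX_TO_SCAN) {
		for (i = 0; i < NR_PARTIAL; i++)
			x += partial[i].objects - partial[i].inuse;
		return x;
	}
	/* Sample MAX_TO_SCAN/2 slabs from the head... */
	for (i = 0; i < MAX_TO_SCAN / 2; i++, scanned++)
		x += partial[i].objects - partial[i].inuse;
	/* ...and MAX_TO_SCAN/2 slabs from the tail. */
	for (i = NR_PARTIAL - MAX_TO_SCAN / 2; i < NR_PARTIAL; i++, scanned++)
		x += partial[i].objects - partial[i].inuse;
	/*
	 * Extrapolate the sampled sum to the whole list; the kernel uses
	 * mult_frac() here to avoid intermediate overflow.
	 */
	return x * NR_PARTIAL / scanned;
}

int main(void)
{
	unsigned long i, exact = 0;

	/* Skewed free-object distribution, roughly like a shrunk list. */
	for (i = 0; i < NR_PARTIAL; i++) {
		partial[i].objects = 32;
		partial[i].inuse = (i * 32) / NR_PARTIAL;
		exact += partial[i].objects - partial[i].inuse;
	}
	printf("exact=%lu approx=%lu\n", exact, approx_free());
	return 0;
}
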
Benchmarks: Diff = (exact - approximated) / exact

* Normal case (w/o kmem_cache_shrink()):
| MAX_TO_SCAN | Diff (count from head) | Diff (count head+tail) |
| 1000        |  0.43  %               |  1.09  %               |
| 5000        |  0.06  %               |  0.37  %               |
| 10000       |  0.02  %               |  0.16  %               |
| 20000       |  0.009 %               | -0.003 %               |

* Skewed case (w/ kmem_cache_shrink()):
| MAX_TO_SCAN | Diff (count from head) | Diff (count head+tail) |
| 1000        | 12.46  %               |  6.75  %               |
| 5000        |  5.38  %               |  1.27  %               |
| 10000       |  4.99  %               |  0.22  %               |
| 20000       |  4.86  %               | -0.06  %               |

[1] https://lore.kernel.org/linux-mm/alpine.DEB.2.21.2003031602460.1537@www.lameter.com/T/
[2] https://lore.kernel.org/lkml/alpine.DEB.2.22.394.2008071258020.55871@www.lameter.com/T/
[3] https://lore.kernel.org/lkml/1e01092b-140d-2bab-aeba-321a74a194ee@linux.com/T/

Signed-off-by: Jianfeng Wang <jianfeng.w.wang@oracle.com>
Acked-by: David Rientjes <rientjes@google.com>
---
 mm/slub.c | 39 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 38 insertions(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 1bb2a93cf7b6..6d8ecad07daf 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3229,6 +3229,43 @@ static unsigned long count_partial(struct kmem_cache_node *n,
 #endif /* CONFIG_SLUB_DEBUG || SLAB_SUPPORTS_SYSFS */
 
 #ifdef CONFIG_SLUB_DEBUG
+#define MAX_PARTIAL_TO_SCAN 10000
+
+static unsigned long count_partial_free_approx(struct kmem_cache_node *n)
+{
+	unsigned long flags;
+	unsigned long x = 0;
+	struct slab *slab;
+
+	spin_lock_irqsave(&n->list_lock, flags);
+	if (n->nr_partial <= MAX_PARTIAL_TO_SCAN) {
+		list_for_each_entry(slab, &n->partial, slab_list)
+			x += slab->objects - slab->inuse;
+	} else {
+		/*
+		 * For a long list, approximate the total count of objects in
+		 * it to meet the limit on the number of slabs to scan.
+		 * Scan from both the list's head and tail for better accuracy.
+		 */
+		unsigned long scanned = 0;
+
+		list_for_each_entry(slab, &n->partial, slab_list) {
+			x += slab->objects - slab->inuse;
+			if (++scanned == MAX_PARTIAL_TO_SCAN / 2)
+				break;
+		}
+		list_for_each_entry_reverse(slab, &n->partial, slab_list) {
+			x += slab->objects - slab->inuse;
+			if (++scanned == MAX_PARTIAL_TO_SCAN)
+				break;
+		}
+		x = mult_frac(x, n->nr_partial, scanned);
+		x = min(x, node_nr_objs(n));
+	}
+	spin_unlock_irqrestore(&n->list_lock, flags);
+	return x;
+}
+
 static noinline void
 slab_out_of_memory(struct kmem_cache *s, gfp_t gfpflags, int nid)
 {
@@ -7089,7 +7126,7 @@ void get_slabinfo(struct kmem_cache *s, struct slabinfo *sinfo)
 	for_each_kmem_cache_node(s, node, n) {
 		nr_slabs += node_nr_slabs(n);
 		nr_objs += node_nr_objs(n);
-		nr_free += count_partial(n, count_free);
+		nr_free += count_partial_free_approx(n);
 	}
 
 	sinfo->active_objs = nr_objs - nr_free;
-- 
2.42.1