From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH 1/2] mm/slub: Introduce two counters for the partial objects
From: xunlei <xlpang@linux.alibaba.com>
Reply-To: xlpang@linux.alibaba.com
To: Pekka Enberg
Cc: Christoph Lameter, Andrew Morton, Wen Yang, Yang Shi, Roman Gushchin,
 "linux-mm@kvack.org", LKML
Date: Fri, 31 Jul 2020 10:57:38 +0800
Message-ID: <5eeb5c3d-1a34-ad96-9010-4d8a5ac32241@linux.alibaba.com>
References: <1593678728-128358-1-git-send-email-xlpang@linux.alibaba.com>
 <7374a9fd-460b-1a51-1ab4-25170337e5f2@linux.alibaba.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

On 2020/7/7 PM 11:23, Pekka Enberg wrote:
> Hi!
>
> (Sorry for the delay, I missed your response.)
>
> On Fri, Jul 3, 2020 at 12:38 PM xunlei wrote:
>>
>> On 2020/7/2 PM 7:59, Pekka Enberg wrote:
>>> On Thu, Jul 2, 2020 at 11:32 AM Xunlei Pang wrote:
>>>> The node list_lock in count_partial() is held for a long time while
>>>> iterating over large partial page lists, which can cause a thundering
>>>> herd effect on the list_lock contention, e.g. it causes business
>>>> response-time jitter when "/proc/slabinfo" is accessed in our
>>>> production environments.
>>>
>>> Would you have any numbers to share to quantify this jitter? I have no
>>
>> We have HSF RT (High-speed Service Framework Response-Time) monitors;
>> the RT figures fluctuated randomly, so we deployed a tool that detects
>> "irq off" and "preempt off" periods and dumps the culprit's call trace.
>> It captured list_lock being held with irqs off for up to 100ms,
>> triggered by "ss", which also caused network timeouts.
>
> Thanks for the follow-up. This sounds like a good enough motivation
> for this patch, but please include it in the changelog.
>
>>> objections to this approach, but I think the original design
>>> deliberately made reading "/proc/slabinfo" more expensive to avoid
>>> atomic operations in the allocation/deallocation paths. It would be
>>> good to understand what the gain of this approach is before we switch
>>> to it. Maybe even run some slab-related benchmark (not sure if there's
>>> something better than hackbench these days) to see if the overhead of
>>> this approach shows up.
>>
>> I considered that before, but most of the atomic operations are
>> serialized by the list_lock. Another possible way is to hold list_lock
>> in __slab_free(), then these two counters can be changed from atomic
>> to long.
>>
>> I also have no idea what the standard SLUB benchmark for regression
>> testing is; any specific suggestion?
>
> I don't know what people use these days. When I did benchmarking in
> the past, hackbench and netperf were known to be slab-allocation
> intensive macro-benchmarks. Christoph also had some SLUB
> micro-benchmarks, but I don't think we ever merged them into the tree.

I tested hackbench on a 24-CPU machine; here are the results for
"hackbench 20 thread 1000":

== original (without any patch)
Time: 53.793
Time: 54.305
Time: 54.073

== with my patch 1~2
Time: 54.036
Time: 53.840
Time: 54.066
Time: 53.449

== with my patch 1~2, plus using a percpu partial free objects counter
Time: 53.303
Time: 52.994
Time: 53.218
Time: 53.268
Time: 53.739
Time: 53.072

The results show no performance regression; strangely, the figures even
get a little better when using the percpu counter.

Thanks,
Xunlei
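
To make the percpu counter idea above concrete, the following is a minimal
userspace sketch, an illustration only and not the SLUB patch itself;
NR_CPUS_SIM, partial_free_objs, inc_partial_free and count_partial_total
are made-up names. The free path only bumps a CPU-local counter, and a
/proc/slabinfo-style reader sums those counters instead of walking the
node partial list under list_lock.

/*
 * Sketch of a per-CPU free-objects counter (not the actual SLUB code).
 * Each "CPU" (thread here) bumps its own counter on the free path; a
 * reader sums the counters instead of walking a partial list under a
 * spinlock, so readers cannot stall frees, at the price of a slightly
 * stale total.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NR_CPUS_SIM	4		/* stand-in for the number of CPUs */
#define FREES_PER_CPU	100000

/* One counter per simulated CPU, each on its own cache line so the
 * hot free path does not bounce a shared line between CPUs. */
static struct {
	_Alignas(64) atomic_long partial_free_objs;
} pcpu[NR_CPUS_SIM];

/* Fast path: an object is freed back to a partial slab. */
static void inc_partial_free(int cpu)
{
	/* relaxed is enough: only the eventual sum matters */
	atomic_fetch_add_explicit(&pcpu[cpu].partial_free_objs, 1,
				  memory_order_relaxed);
}

/* Slow path: a /proc/slabinfo-style reader sums the counters.
 * No lock is taken and no list is walked; the result may be a
 * little stale while frees are in flight. */
static long count_partial_total(void)
{
	long sum = 0;

	for (int cpu = 0; cpu < NR_CPUS_SIM; cpu++)
		sum += atomic_load_explicit(&pcpu[cpu].partial_free_objs,
					    memory_order_relaxed);
	return sum;
}

static void *worker(void *arg)
{
	int cpu = (int)(long)arg;

	for (int i = 0; i < FREES_PER_CPU; i++)
		inc_partial_free(cpu);
	return NULL;
}

int main(void)
{
	pthread_t tid[NR_CPUS_SIM];

	for (long cpu = 0; cpu < NR_CPUS_SIM; cpu++)
		pthread_create(&tid[cpu], NULL, worker, (void *)cpu);
	for (int cpu = 0; cpu < NR_CPUS_SIM; cpu++)
		pthread_join(tid[cpu], NULL);

	/* expect NR_CPUS_SIM * FREES_PER_CPU */
	printf("partial free objects: %ld\n", count_partial_total());
	return 0;
}

Build with e.g. "cc -std=c11 -pthread"; a kernel version would more likely
use per-CPU variables (this_cpu_add() and friends) rather than C11 atomics,
which is what keeps the update side cheaper than a single shared atomic
counter.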