From: Vlastimil Babka
To: Xunlei Pang, Christoph Lameter, Pekka Enberg, Roman Gushchin,
 Konstantin Khlebnikov, David Rientjes, Matthew Wilcox, Shu Ming,
 Andrew Morton
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Wen Yang,
 James Wang, Thomas Gleixner
Subject: Re: [PATCH v3 0/4] mm/slub: Fix count_partial() problem
Date: Mon, 15 Mar 2021 19:49:57 +0100
Message-ID: <793c884a-9d60-baaf-fab8-3e5f4a024124@suse.cz>
In-Reply-To: <1615303512-35058-1-git-send-email-xlpang@linux.alibaba.com>
References: <1615303512-35058-1-git-send-email-xlpang@linux.alibaba.com>

On 3/9/21 4:25 PM, Xunlei Pang wrote:
> count_partial() can hold n->list_lock spinlock for quite long, which
> makes much trouble to the system. This series eliminate this problem.

Before I check the details, I have two high-level comments:

- patch 1 introduces some counting scheme that patch 4 then changes, could
we do this in one step to avoid the churn?
- the series addresses the concern that the spinlock is being held, but doesn't
address the fact that counting partial per-node slabs is not nearly enough if
we want accurate active_objs in /proc/slabinfo, because there are also percpu
slabs and per-cpu partial slabs, where we don't track the free objects at all.
So after this series, while the readers of /proc/slabinfo won't block on the
spinlock, they will get the same garbage data as before. So Christoph is not
wrong to say that we can just report active_objs == num_objs and it won't
actually break any ABI.

At the same time somebody might actually want accurate object statistics at the
expense of peak performance, and it would be nice to give them such an option in
SLUB. Right now we don't provide this accuracy even with CONFIG_SLUB_STATS,
although that option provides many additional tuning stats, with additional
overhead.

So my proposal would be a new config for "accurate active objects" (or just tie
it to CONFIG_SLUB_DEBUG?) that would extend the approach of percpu counters in
patch 4 to all alloc/free, so that it includes percpu slabs. Without this config
enabled, let's just report active_objs == num_objs.

Vlastimil

> v1->v2:
> - Improved changelog and variable naming for PATCH 1~2.
> - PATCH3 adds per-cpu counter to avoid performance regression
>   in concurrent __slab_free().
>
> v2->v3:
> - Changed "page->inuse" to the safe "new.inuse", etc.
> - Used CONFIG_SLUB_DEBUG and CONFIG_SYSFS condition for new counters.
> - atomic_long_t -> unsigned long
>
> [Testing]
> There seems might be a little performance impact under extreme
> __slab_free() concurrent calls according to my tests.
>
> On my 32-cpu 2-socket physical machine:
> Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
>
> 1) perf stat --null --repeat 10 -- hackbench 20 thread 20000
>
> == original, no patched
> Performance counter stats for 'hackbench 20 thread 20000' (10 runs):
>
>       24.536050899 seconds time elapsed          ( +- 0.24% )
>
> Performance counter stats for 'hackbench 20 thread 20000' (10 runs):
>
>       24.588049142 seconds time elapsed          ( +- 0.35% )
>
> == patched with patch1~4
> Performance counter stats for 'hackbench 20 thread 20000' (10 runs):
>
>       24.670892273 seconds time elapsed          ( +- 0.29% )
>
> Performance counter stats for 'hackbench 20 thread 20000' (10 runs):
>
>       24.746755689 seconds time elapsed          ( +- 0.21% )
>
> 2) perf stat --null --repeat 10 -- hackbench 32 thread 20000
>
> == original, no patched
> Performance counter stats for 'hackbench 32 thread 20000' (10 runs):
>
>       39.784911855 seconds time elapsed          ( +- 0.14% )
>
> Performance counter stats for 'hackbench 32 thread 20000' (10 runs):
>
>       39.868687608 seconds time elapsed          ( +- 0.19% )
>
> == patched with patch1~4
> Performance counter stats for 'hackbench 32 thread 20000' (10 runs):
>
>       39.681273015 seconds time elapsed          ( +- 0.21% )
>
> Performance counter stats for 'hackbench 32 thread 20000' (10 runs):
>
>       39.681238459 seconds time elapsed          ( +- 0.09% )
>
> Xunlei Pang (4):
>   mm/slub: Introduce two counters for partial objects
>   mm/slub: Get rid of count_partial()
>   percpu: Export per_cpu_sum()
>   mm/slub: Use percpu partial free counter
>
>  include/linux/percpu-defs.h   |  10 ++++
>  kernel/locking/percpu-rwsem.c |  10 ----
>  mm/slab.h                     |   4 ++
>  mm/slub.c                     | 120 +++++++++++++++++++++++++++++------------
>  4 files changed, 97 insertions(+), 47 deletions(-)