From: Vlastimil Babka
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Christoph Lameter, David Rientjes, Pekka Enberg, Joonsoo Kim
Cc: Sebastian Andrzej Siewior, Thomas Gleixner, Mel Gorman, Jesper Dangaard Brouer, Peter Zijlstra, Jann Horn, Vlastimil Babka
Subject: [RFC 01/26] mm, slub: allocate private object map for sysfs listings
Date: Tue, 25 May 2021 01:39:21 +0200
Message-Id: <20210524233946.20352-2-vbabka@suse.cz>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210524233946.20352-1-vbabka@suse.cz>
References: <20210524233946.20352-1-vbabka@suse.cz>
MIME-Version: 1.0

SLUB has a static, spinlock-protected bitmap for marking which objects are
on the freelist when it wants to list them, for situations where dynamically
allocating such a map can lead to recursion or locking issues, and an
on-stack bitmap would be too large.

The handlers of the sysfs files alloc_calls and free_calls currently also use
this shared bitmap, but their syscall context makes it straightforward to
allocate a private map before entering the locked sections, so switch these
processing paths to use a private bitmap.

Signed-off-by: Vlastimil Babka
---
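Illustration only, not part of the patch: the pattern being adopted is
"allocate the map up front, where the allocation may sleep and a failure can
be handled gracefully, then only fill and read the map inside the locked
section". Below is a minimal, self-contained userspace sketch of that
pattern; all names in it are hypothetical and a pthread mutex stands in for
the kernel spinlock. It can be built with e.g. gcc -pthread.

/*
 * Userspace sketch of the "private object map" pattern: allocate the
 * bitmap before taking the lock, then only fill and test it while the
 * lock is held.  All names are made up for illustration.
 */
#include <limits.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static const size_t freelist[] = { 1, 4, 7 };   /* indices of free objects */

/* Analogue of __fill_map(): mark free objects in a caller-provided map. */
static void fill_map(unsigned long *map, size_t nr_objects)
{
        size_t nr_longs = (nr_objects + BITS_PER_LONG - 1) / BITS_PER_LONG;
        size_t i;

        for (i = 0; i < nr_longs; i++)
                map[i] = 0;
        for (i = 0; i < sizeof(freelist) / sizeof(freelist[0]); i++)
                map[freelist[i] / BITS_PER_LONG] |=
                        1UL << (freelist[i] % BITS_PER_LONG);
}

int main(void)
{
        size_t nr_objects = 16;
        size_t nr_longs = (nr_objects + BITS_PER_LONG - 1) / BITS_PER_LONG;
        unsigned long *map;
        size_t i;

        /* Allocate the private map before entering the locked section. */
        map = calloc(nr_longs, sizeof(*map));
        if (!map)
                return 1;

        pthread_mutex_lock(&list_lock);         /* the "locked section" */
        fill_map(map, nr_objects);
        for (i = 0; i < nr_objects; i++)
                if (!(map[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG))))
                        printf("object %zu is in use\n", i);
        pthread_mutex_unlock(&list_lock);

        free(map);
        return 0;
}

In the patch itself the map is sized with bitmap_alloc(oo_objects(s->oo),
GFP_KERNEL), i.e. for the largest number of objects a slab of the cache can
hold, so the one allocation can be reused for every slab processed.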
 mm/slub.c | 43 +++++++++++++++++++++++++++++--------------
 1 file changed, 29 insertions(+), 14 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 3f96e099817a..4c876749f322 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -448,6 +448,18 @@ static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
 static unsigned long object_map[BITS_TO_LONGS(MAX_OBJS_PER_PAGE)];
 static DEFINE_SPINLOCK(object_map_lock);
 
+static void __fill_map(unsigned long *obj_map, struct kmem_cache *s,
+                       struct page *page)
+{
+        void *addr = page_address(page);
+        void *p;
+
+        bitmap_zero(obj_map, page->objects);
+
+        for (p = page->freelist; p; p = get_freepointer(s, p))
+                set_bit(__obj_to_index(s, addr, p), obj_map);
+}
+
 /*
  * Determine a map of object in use on a page.
  *
@@ -457,17 +469,11 @@ static DEFINE_SPINLOCK(object_map_lock);
 static unsigned long *get_map(struct kmem_cache *s, struct page *page)
         __acquires(&object_map_lock)
 {
-        void *p;
-        void *addr = page_address(page);
-
         VM_BUG_ON(!irqs_disabled());
 
         spin_lock(&object_map_lock);
 
-        bitmap_zero(object_map, page->objects);
-
-        for (p = page->freelist; p; p = get_freepointer(s, p))
-                set_bit(__obj_to_index(s, addr, p), object_map);
+        __fill_map(object_map, s, page);
 
         return object_map;
 }
@@ -4813,17 +4819,17 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
 }
 
 static void process_slab(struct loc_track *t, struct kmem_cache *s,
-                struct page *page, enum track_item alloc)
+                struct page *page, enum track_item alloc,
+                unsigned long *obj_map)
 {
         void *addr = page_address(page);
         void *p;
-        unsigned long *map;
 
-        map = get_map(s, page);
+        __fill_map(obj_map, s, page);
+
         for_each_object(p, s, addr, page->objects)
-                if (!test_bit(__obj_to_index(s, addr, p), map))
+                if (!test_bit(__obj_to_index(s, addr, p), obj_map))
                         add_location(t, s, get_track(s, p, alloc));
-        put_map(map);
 }
 
 static int list_locations(struct kmem_cache *s, char *buf,
@@ -4834,11 +4840,18 @@ static int list_locations(struct kmem_cache *s, char *buf,
         struct loc_track t = { 0, 0, NULL };
         int node;
         struct kmem_cache_node *n;
+        unsigned long *obj_map;
+
+        obj_map = bitmap_alloc(oo_objects(s->oo), GFP_KERNEL);
+        if (!obj_map)
+                return sysfs_emit(buf, "Out of memory\n");
 
         if (!alloc_loc_track(&t, PAGE_SIZE / sizeof(struct location),
                              GFP_KERNEL)) {
+                bitmap_free(obj_map);
                 return sysfs_emit(buf, "Out of memory\n");
         }
+
         /* Push back cpu slabs */
         flush_all(s);
 
@@ -4851,12 +4864,14 @@ static int list_locations(struct kmem_cache *s, char *buf,
 
                 spin_lock_irqsave(&n->list_lock, flags);
                 list_for_each_entry(page, &n->partial, slab_list)
-                        process_slab(&t, s, page, alloc);
+                        process_slab(&t, s, page, alloc, obj_map);
                 list_for_each_entry(page, &n->full, slab_list)
-                        process_slab(&t, s, page, alloc);
+                        process_slab(&t, s, page, alloc, obj_map);
                 spin_unlock_irqrestore(&n->list_lock, flags);
         }
 
+        bitmap_free(obj_map);
+
         for (i = 0; i < t.count; i++) {
                 struct location *l = &t.loc[i];
 
-- 
2.31.1