From mboxrd@z Thu Jan  1 00:00:00 1970
From: Vlastimil Babka <vbabka@suse.cz>
To: Andrew Morton, Christoph Lameter, David Rientjes, Pekka Enberg, Joonsoo Kim
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Galbraith, Sebastian Andrzej Siewior, Thomas Gleixner, Mel Gorman, Jesper Dangaard Brouer, Jann Horn, Vlastimil Babka
Subject: [PATCH v5 02/35] mm, slub: allocate private object map for debugfs listings
Date: Mon, 23 Aug 2021 16:57:53 +0200
Message-Id: <20210823145826.3857-3-vbabka@suse.cz>
In-Reply-To: <20210823145826.3857-1-vbabka@suse.cz>
References: <20210823145826.3857-1-vbabka@suse.cz>

SLUB has a static, spinlock-protected bitmap for marking which objects are
on the freelist when it wants to list them, for situations where dynamically
allocating such a map could lead to recursion or locking issues, and an
on-stack bitmap would be too large.

The handlers of the debugfs files alloc_traces and free_traces also
currently use this shared bitmap, but their syscall context makes it
straightforward to allocate a private map before entering locked sections,
so switch these processing paths to use a private bitmap.
Signed-off-by: Vlastimil Babka
Acked-by: Christoph Lameter
Acked-by: Mel Gorman
---
 mm/slub.c | 44 +++++++++++++++++++++++++++++---------------
 1 file changed, 29 insertions(+), 15 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index f6063ec97a55..fb603fdf58cb 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -454,6 +454,18 @@ static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
 static unsigned long object_map[BITS_TO_LONGS(MAX_OBJS_PER_PAGE)];
 static DEFINE_SPINLOCK(object_map_lock);
 
+static void __fill_map(unsigned long *obj_map, struct kmem_cache *s,
+		       struct page *page)
+{
+	void *addr = page_address(page);
+	void *p;
+
+	bitmap_zero(obj_map, page->objects);
+
+	for (p = page->freelist; p; p = get_freepointer(s, p))
+		set_bit(__obj_to_index(s, addr, p), obj_map);
+}
+
 #if IS_ENABLED(CONFIG_KUNIT)
 static bool slab_add_kunit_errors(void)
 {
@@ -483,17 +495,11 @@ static inline bool slab_add_kunit_errors(void) { return false; }
 static unsigned long *get_map(struct kmem_cache *s, struct page *page)
 	__acquires(&object_map_lock)
 {
-	void *p;
-	void *addr = page_address(page);
-
 	VM_BUG_ON(!irqs_disabled());
 
 	spin_lock(&object_map_lock);
 
-	bitmap_zero(object_map, page->objects);
-
-	for (p = page->freelist; p; p = get_freepointer(s, p))
-		set_bit(__obj_to_index(s, addr, p), object_map);
+	__fill_map(object_map, s, page);
 
 	return object_map;
 }
@@ -4879,17 +4885,17 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
 }
 
 static void process_slab(struct loc_track *t, struct kmem_cache *s,
-		struct page *page, enum track_item alloc)
+		struct page *page, enum track_item alloc,
+		unsigned long *obj_map)
 {
 	void *addr = page_address(page);
 	void *p;
-	unsigned long *map;
 
-	map = get_map(s, page);
+	__fill_map(obj_map, s, page);
+
 	for_each_object(p, s, addr, page->objects)
-		if (!test_bit(__obj_to_index(s, addr, p), map))
+		if (!test_bit(__obj_to_index(s, addr, p), obj_map))
 			add_location(t, s, get_track(s, p, alloc));
-	put_map(map);
 }
 #endif  /* CONFIG_DEBUG_FS   */
 #endif	/* CONFIG_SLUB_DEBUG */
@@ -5816,14 +5822,21 @@ static int slab_debug_trace_open(struct inode *inode, struct file *filep)
 	struct loc_track *t = __seq_open_private(filep, &slab_debugfs_sops,
 						sizeof(struct loc_track));
 	struct kmem_cache *s = file_inode(filep)->i_private;
+	unsigned long *obj_map;
+
+	obj_map = bitmap_alloc(oo_objects(s->oo), GFP_KERNEL);
+	if (!obj_map)
+		return -ENOMEM;
 
 	if (strcmp(filep->f_path.dentry->d_name.name, "alloc_traces") == 0)
 		alloc = TRACK_ALLOC;
 	else
 		alloc = TRACK_FREE;
 
-	if (!alloc_loc_track(t, PAGE_SIZE / sizeof(struct location), GFP_KERNEL))
+	if (!alloc_loc_track(t, PAGE_SIZE / sizeof(struct location), GFP_KERNEL)) {
+		bitmap_free(obj_map);
 		return -ENOMEM;
+	}
 
 	for_each_kmem_cache_node(s, node, n) {
 		unsigned long flags;
@@ -5834,12 +5847,13 @@ static int slab_debug_trace_open(struct inode *inode, struct file *filep)
 
 		spin_lock_irqsave(&n->list_lock, flags);
 		list_for_each_entry(page, &n->partial, slab_list)
-			process_slab(t, s, page, alloc);
+			process_slab(t, s, page, alloc, obj_map);
 		list_for_each_entry(page, &n->full, slab_list)
-			process_slab(t, s, page, alloc);
+			process_slab(t, s, page, alloc, obj_map);
 		spin_unlock_irqrestore(&n->list_lock, flags);
 	}
 
+	bitmap_free(obj_map);
 	return 0;
 }
-- 
2.32.0