From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 6/6] slub: Allocate frozen pages
Date: Tue, 31 May 2022 16:06:11 +0100
Message-Id: <20220531150611.1303156-7-willy@infradead.org>
In-Reply-To: <20220531150611.1303156-1-willy@infradead.org>
References: <20220531150611.1303156-1-willy@infradead.org>

Since slub does not use the page refcount, it can allocate and free
frozen pages, saving one atomic operation per free.
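For illustration, the difference amounts to this (a hand-written sketch,
not part of the diff below; alloc_frozen_pages()/free_frozen_pages() are
the frozen-page helpers introduced earlier in this series):

	/* Today: slab pages are refcounted even though slub ignores it. */
	page = alloc_pages(flags, order);	/* returned with refcount == 1 */
	/* ... page used as slab memory; refcount never consulted ... */
	__free_pages(page, order);		/* atomic refcount decrement to 0 */

	/* With this patch: the refcount stays at zero throughout. */
	page = alloc_frozen_pages(flags, order);	/* refcount == 0 */
	/* ... page used as slab memory ... */
	free_frozen_pages(page, order);		/* frees directly, no atomic op */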
---
 mm/slub.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index e5535020e0fd..420a56746a01 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1789,21 +1789,21 @@ static void *setup_object(struct kmem_cache *s, void *object)
 static inline struct slab *alloc_slab_page(gfp_t flags, int node,
 		struct kmem_cache_order_objects oo)
 {
-	struct folio *folio;
+	struct page *page;
 	struct slab *slab;
 	unsigned int order = oo_order(oo);
 
 	if (node == NUMA_NO_NODE)
-		folio = (struct folio *)alloc_pages(flags, order);
+		page = alloc_frozen_pages(flags, order);
 	else
-		folio = (struct folio *)__alloc_pages_node(node, flags, order);
+		page = __alloc_frozen_pages(flags, order, node, NULL);
 
-	if (!folio)
+	if (!page)
 		return NULL;
 
-	slab = folio_slab(folio);
-	__folio_set_slab(folio);
-	if (page_is_pfmemalloc(folio_page(folio, 0)))
+	slab = (struct slab *)page;
+	__SetPageSlab(page);
+	if (page_is_pfmemalloc(page))
 		slab_set_pfmemalloc(slab);
 
 	return slab;
@@ -2005,8 +2005,8 @@ static struct slab *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 
 static void __free_slab(struct kmem_cache *s, struct slab *slab)
 {
-	struct folio *folio = slab_folio(slab);
-	int order = folio_order(folio);
+	struct page *page = (struct page *)slab;
+	int order = compound_order(page);
 	int pages = 1 << order;
 
 	if (kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS)) {
@@ -2018,12 +2018,12 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab)
 	}
 
 	__slab_clear_pfmemalloc(slab);
-	__folio_clear_slab(folio);
-	folio->mapping = NULL;
+	__ClearPageSlab(page);
+	page->mapping = NULL;
 	if (current->reclaim_state)
 		current->reclaim_state->reclaimed_slab += pages;
 	unaccount_slab(slab, order, s);
-	__free_pages(folio_page(folio, 0), order);
+	free_frozen_pages(page, order);
 }
 
 static void rcu_free_slab(struct rcu_head *h)
@@ -3541,7 +3541,7 @@ static inline void free_large_kmalloc(struct folio *folio, void *object)
 		pr_warn_once("object pointer: 0x%p\n", object);
 
 	kfree_hook(object);
-	mod_lruvec_page_state(folio_page(folio, 0), NR_SLAB_UNRECLAIMABLE_B,
+	lruvec_stat_mod_folio(folio, NR_SLAB_UNRECLAIMABLE_B,
 			      -(PAGE_SIZE << order));
 	__free_pages(folio_page(folio, 0), order);
 }
-- 
2.34.1