From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", William Kucharski
Subject: [PATCH v2 16/16] slub: Allocate frozen pages
Date: Tue, 9 Aug 2022 18:18:54 +0100
Message-Id: <20220809171854.3725722-17-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220809171854.3725722-1-willy@infradead.org>
References: <20220809171854.3725722-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Since slub does not use the page refcount, it can allocate and
free frozen pages, saving one atomic operation per free.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: William Kucharski
---
 mm/slub.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 862dbd9af4f5..65d14d7aa7a9 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1816,21 +1816,21 @@ static void *setup_object(struct kmem_cache *s, void *object)
 static inline struct slab *alloc_slab_page(gfp_t flags, int node,
 		struct kmem_cache_order_objects oo)
 {
-	struct folio *folio;
+	struct page *page;
 	struct slab *slab;
 	unsigned int order = oo_order(oo);
 
 	if (node == NUMA_NO_NODE)
-		folio = (struct folio *)alloc_pages(flags, order);
+		page = alloc_frozen_pages(flags, order);
 	else
-		folio = (struct folio *)__alloc_pages_node(node, flags, order);
+		page = __alloc_frozen_pages(flags, order, node, NULL);
 
-	if (!folio)
+	if (!page)
 		return NULL;
 
-	slab = folio_slab(folio);
-	__folio_set_slab(folio);
-	if (page_is_pfmemalloc(folio_page(folio, 0)))
+	slab = (struct slab *)page;
+	__SetPageSlab(page);
+	if (page_is_pfmemalloc(page))
 		slab_set_pfmemalloc(slab);
 
 	return slab;
@@ -2032,8 +2032,8 @@ static struct slab *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 
 static void __free_slab(struct kmem_cache *s, struct slab *slab)
 {
-	struct folio *folio = slab_folio(slab);
-	int order = folio_order(folio);
+	struct page *page = (struct page *)slab;
+	int order = compound_order(page);
 	int pages = 1 << order;
 
 	if (kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS)) {
@@ -2045,12 +2045,12 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab)
 	}
 
 	__slab_clear_pfmemalloc(slab);
-	__folio_clear_slab(folio);
-	folio->mapping = NULL;
+	__ClearPageSlab(page);
+	page->mapping = NULL;
 	if (current->reclaim_state)
 		current->reclaim_state->reclaimed_slab += pages;
 	unaccount_slab(slab, order, s);
-	__free_pages(folio_page(folio, 0), order);
+	free_frozen_pages(page, order);
 }
 
 static void rcu_free_slab(struct rcu_head *h)
@@ -3568,7 +3568,7 @@ static inline void free_large_kmalloc(struct folio *folio, void *object)
 		pr_warn_once("object pointer: 0x%p\n", object);
 
 	kfree_hook(object);
-	mod_lruvec_page_state(folio_page(folio, 0), NR_SLAB_UNRECLAIMABLE_B,
+	lruvec_stat_mod_folio(folio, NR_SLAB_UNRECLAIMABLE_B,
 			      -(PAGE_SIZE << order));
 	__free_pages(folio_page(folio, 0), order);
 }
-- 
2.35.1