From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 5/6] slab: Allocate frozen pages
Date: Tue, 31 May 2022 16:06:10 +0100
Message-Id: <20220531150611.1303156-6-willy@infradead.org>
In-Reply-To: <20220531150611.1303156-1-willy@infradead.org>
References: <20220531150611.1303156-1-willy@infradead.org>

Since slab does not use the page refcount, it can allocate and free
frozen pages, saving one atomic operation per free.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/slab.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index f8cd00f4ba13..c5c53ed304d1 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1355,23 +1355,23 @@ slab_out_of_memory(struct kmem_cache *cachep, gfp_t gfpflags, int nodeid)
 static struct slab *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
 								int nodeid)
 {
-	struct folio *folio;
+	struct page *page;
 	struct slab *slab;
 
 	flags |= cachep->allocflags;
 
-	folio = (struct folio *) __alloc_pages_node(nodeid, flags, cachep->gfporder);
-	if (!folio) {
+	page = __alloc_frozen_pages(flags, cachep->gfporder, nodeid, NULL);
+	if (!page) {
 		slab_out_of_memory(cachep, flags, nodeid);
 		return NULL;
 	}
 
-	slab = folio_slab(folio);
+	__SetPageSlab(page);
+	slab = (struct slab *)page;
 
 	account_slab(slab, cachep->gfporder, cachep, flags);
-	__folio_set_slab(folio);
 	/* Record if ALLOC_NO_WATERMARKS was set when allocating the slab */
-	if (sk_memalloc_socks() && page_is_pfmemalloc(folio_page(folio, 0)))
+	if (sk_memalloc_socks() && page_is_pfmemalloc(page))
 		slab_set_pfmemalloc(slab);
 
 	return slab;
@@ -1383,18 +1383,17 @@ static struct slab *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
 static void kmem_freepages(struct kmem_cache *cachep, struct slab *slab)
 {
 	int order = cachep->gfporder;
-	struct folio *folio = slab_folio(slab);
+	struct page *page = (struct page *)slab;
 
-	BUG_ON(!folio_test_slab(folio));
 	__slab_clear_pfmemalloc(slab);
-	__folio_clear_slab(folio);
-	page_mapcount_reset(folio_page(folio, 0));
-	folio->mapping = NULL;
+	__ClearPageSlab(page);
+	page_mapcount_reset(page);
+	page->mapping = NULL;
 
 	if (current->reclaim_state)
 		current->reclaim_state->reclaimed_slab += 1 << order;
 	unaccount_slab(slab, order, cachep);
-	__free_pages(folio_page(folio, 0), order);
+	free_frozen_pages(page, order);
 }
 
 static void kmem_rcu_free(struct rcu_head *head)
-- 
2.34.1