From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, William Kucharski
Subject: [PATCH v2 14/16] mm/mempolicy: Add alloc_frozen_pages()
Date: Tue, 9 Aug 2022 18:18:52 +0100
Message-Id: <20220809171854.3725722-15-willy@infradead.org>
In-Reply-To: <20220809171854.3725722-1-willy@infradead.org>
References: <20220809171854.3725722-1-willy@infradead.org>
Provide an interface to allocate pages from the page allocator without
incrementing their refcount. This saves an atomic operation on free,
which may be beneficial to some users (eg slab).

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: William Kucharski
---
 mm/internal.h  |  9 ++++++++
 mm/mempolicy.c | 61 +++++++++++++++++++++++++++++++-------------------
 2 files changed, 47 insertions(+), 23 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 7e6079216a17..6f02bc32b406 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -367,6 +367,15 @@ struct page *__alloc_frozen_pages(gfp_t, unsigned int order, int nid,
 void free_frozen_pages(struct page *, unsigned int order);
 void free_unref_page_list(struct list_head *list);
 
+#ifdef CONFIG_NUMA
+struct page *alloc_frozen_pages(gfp_t, unsigned int order);
+#else
+static inline struct page *alloc_frozen_pages(gfp_t gfp, unsigned int order)
+{
+	return __alloc_frozen_pages(gfp, order, numa_node_id(), NULL);
+}
+#endif
+
 extern void zone_pcp_update(struct zone *zone, int cpu_online);
 extern void zone_pcp_reset(struct zone *zone);
 extern void zone_pcp_disable(struct zone *zone);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index b73d3248d976..09ecc499d5fc 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2100,7 +2100,7 @@ static struct page *alloc_page_interleave(gfp_t gfp, unsigned order,
 {
 	struct page *page;
 
-	page = __alloc_pages(gfp, order, nid, NULL);
+	page = __alloc_frozen_pages(gfp, order, nid, NULL);
 	/* skip NUMA_INTERLEAVE_HIT counter update if numa stats is disabled */
 	if (!static_branch_likely(&vm_numa_stat_key))
 		return page;
@@ -2126,9 +2126,9 @@ static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order,
 	 */
 	preferred_gfp = gfp | __GFP_NOWARN;
 	preferred_gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
-	page = __alloc_pages(preferred_gfp, order, nid, &pol->nodes);
+	page = __alloc_frozen_pages(preferred_gfp, order, nid, &pol->nodes);
 	if (!page)
-		page = __alloc_pages(gfp, order, nid, NULL);
+		page = __alloc_frozen_pages(gfp, order, nid, NULL);
 
 	return page;
 }
@@ -2167,8 +2167,11 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
 		mpol_cond_put(pol);
 		gfp |= __GFP_COMP;
 		page = alloc_page_interleave(gfp, order, nid);
-		if (page && order > 1)
-			prep_transhuge_page(page);
+		if (page) {
+			set_page_refcounted(page);
+			if (order > 1)
+				prep_transhuge_page(page);
+		}
 		folio = (struct folio *)page;
 		goto out;
 	}
@@ -2180,8 +2183,11 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
 		gfp |= __GFP_COMP;
 		page = alloc_pages_preferred_many(gfp, order, node, pol);
 		mpol_cond_put(pol);
-		if (page && order > 1)
-			prep_transhuge_page(page);
+		if (page) {
+			set_page_refcounted(page);
+			if (order > 1)
+				prep_transhuge_page(page);
+		}
 		folio = (struct folio *)page;
 		goto out;
 	}
@@ -2235,21 +2241,7 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
 }
 EXPORT_SYMBOL(vma_alloc_folio);
 
-/**
- * alloc_pages - Allocate pages.
- * @gfp: GFP flags.
- * @order: Power of two of number of pages to allocate.
- *
- * Allocate 1 << @order contiguous pages. The physical address of the
- * first page is naturally aligned (eg an order-3 allocation will be aligned
- * to a multiple of 8 * PAGE_SIZE bytes). The NUMA policy of the current
- * process is honoured when in process context.
- *
- * Context: Can be called from any context, providing the appropriate GFP
- * flags are used.
- * Return: The page on success or NULL if allocation fails.
- */
-struct page *alloc_pages(gfp_t gfp, unsigned order)
+struct page *alloc_frozen_pages(gfp_t gfp, unsigned order)
 {
 	struct mempolicy *pol = &default_policy;
 	struct page *page;
@@ -2267,12 +2259,35 @@ struct page *alloc_pages(gfp_t gfp, unsigned order)
 		page = alloc_pages_preferred_many(gfp, order,
 				policy_node(gfp, pol, numa_node_id()), pol);
 	else
-		page = __alloc_pages(gfp, order,
+		page = __alloc_frozen_pages(gfp, order,
 				policy_node(gfp, pol, numa_node_id()),
 				policy_nodemask(gfp, pol));
 
 	return page;
 }
+
+/**
+ * alloc_pages - Allocate pages.
+ * @gfp: GFP flags.
+ * @order: Power of two of number of pages to allocate.
+ *
+ * Allocate 1 << @order contiguous pages. The physical address of the
+ * first page is naturally aligned (eg an order-3 allocation will be aligned
+ * to a multiple of 8 * PAGE_SIZE bytes). The NUMA policy of the current
+ * process is honoured when in process context.
+ *
+ * Context: Can be called from any context, providing the appropriate GFP
+ * flags are used.
+ * Return: The page on success or NULL if allocation fails.
+ */
+struct page *alloc_pages(gfp_t gfp, unsigned order)
+{
+	struct page *page = alloc_frozen_pages(gfp, order);
+
+	if (page)
+		set_page_refcounted(page);
+	return page;
+}
 EXPORT_SYMBOL(alloc_pages);
 
 struct folio *folio_alloc(gfp_t gfp, unsigned order)
-- 
2.35.1
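
A minimal usage sketch, assuming an mm-internal caller such as a
slab-style allocator; the example_* helpers below are hypothetical,
while alloc_frozen_pages(), free_frozen_pages() and
set_page_refcounted() are the functions this series provides:

#include "internal.h"	/* mm-internal: alloc_frozen_pages() etc. */

/*
 * Grab a frozen page: it is returned with a refcount of zero, so the
 * caller owns it outright and no atomic refcount increment occurred.
 */
static struct page *example_grab_page(gfp_t gfp, unsigned int order)
{
	return alloc_frozen_pages(gfp, order);
}

/*
 * Release a frozen page: no atomic decrement-and-test either, since a
 * page whose refcount never left zero cannot have other references.
 */
static void example_drop_page(struct page *page, unsigned int order)
{
	free_frozen_pages(page, order);
}

/*
 * If the page must escape to code that expects a normally refcounted
 * page, raise the refcount first, as the new alloc_pages() wrapper does.
 */
static struct page *example_grab_refcounted(gfp_t gfp, unsigned int order)
{
	struct page *page = alloc_frozen_pages(gfp, order);

	if (page)
		set_page_refcounted(page);
	return page;
}

The last helper mirrors the reworked alloc_pages() above: callers that
cannot manage frozen pages pay one atomic set at allocation, while users
like slab skip the refcount entirely on both allocation and free.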