From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH v2 06/16] mm/page_alloc: Move set_page_refcounted() to callers of get_page_from_freelist()
Date: Tue, 9 Aug 2022 18:18:44 +0100
Message-Id: <20220809171854.3725722-7-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220809171854.3725722-1-willy@infradead.org>
References: <20220809171854.3725722-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
In preparation for allocating frozen pages, stop initialising the page
refcount in get_page_from_freelist().  The callers now set the refcount
themselves via set_page_refcounted() on their success paths.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/page_alloc.c | 25 +++++++++++++++++--------
 1 file changed, 17 insertions(+), 8 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9bc53001f56c..8c9102ab7a87 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4280,7 +4280,6 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 				gfp_mask, alloc_flags, ac->migratetype);
 		if (page) {
 			prep_new_page(page, order, gfp_mask, alloc_flags);
-			set_page_refcounted(page);
 
 			/*
 			 * If this is a high-order atomic allocation then check
@@ -4374,6 +4373,8 @@ __alloc_pages_cpuset_fallback(gfp_t gfp_mask, unsigned int order,
 		page = get_page_from_freelist(gfp_mask, order,
 				alloc_flags, ac);
 
+	if (page)
+		set_page_refcounted(page);
 	return page;
 }
 
@@ -4412,8 +4413,10 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
 	page = get_page_from_freelist((gfp_mask | __GFP_HARDWALL) &
 				      ~__GFP_DIRECT_RECLAIM, order,
 				      ALLOC_WMARK_HIGH|ALLOC_CPUSET, ac);
-	if (page)
+	if (page) {
+		set_page_refcounted(page);
 		goto out;
+	}
 
 	/* Coredumps can quickly deplete all memory reserves */
 	if (current->flags & PF_DUMPCORE)
@@ -4504,10 +4507,8 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
 	count_vm_event(COMPACTSTALL);
 
 	/* Prep a captured page if available */
-	if (page) {
+	if (page)
 		prep_new_page(page, order, gfp_mask, alloc_flags);
-		set_page_refcounted(page);
-	}
 
 	/* Try get a page from the freelist if available */
 	if (!page)
@@ -4516,6 +4517,7 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
 	if (page) {
 		struct zone *zone = page_zone(page);
 
+		set_page_refcounted(page);
 		zone->compact_blockskip_flush = false;
 		compaction_defer_reset(zone, order, true);
 		count_vm_event(COMPACTSUCCESS);
@@ -4765,6 +4767,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
 		drained = true;
 		goto retry;
 	}
+	set_page_refcounted(page);
 out:
 	psi_memstall_leave(&pflags);
 
@@ -5058,8 +5061,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * that first
 	 */
 	page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
-	if (page)
+	if (page) {
+		set_page_refcounted(page);
 		goto got_pg;
+	}
 
 	/*
 	 * For costly allocations, try direct compaction first, as it's likely
@@ -5138,8 +5143,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 
 	/* Attempt with potentially adjusted zonelist and alloc_flags */
 	page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
-	if (page)
+	if (page) {
+		set_page_refcounted(page);
 		goto got_pg;
+	}
 
 	/* Caller is not willing to reclaim, we can't balance anything */
 	if (!can_direct_reclaim)
@@ -5516,8 +5523,10 @@ struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
 
 	/* First allocation attempt */
 	page = get_page_from_freelist(alloc_gfp, order, alloc_flags, &ac);
-	if (likely(page))
+	if (likely(page)) {
+		set_page_refcounted(page);
 		goto out;
+	}
 
 	alloc_gfp = gfp;
 	ac.spread_dirty_pages = false;
-- 
2.35.1
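
[Editorial note, not part of the patch.] The calling convention that results
from this change: a successful return from get_page_from_freelist() now hands
back a page whose refcount has not been set, and the caller makes the page
refcounted before letting it escape.  The stand-alone C sketch below only
models that division of labour; the demo_* names are hypothetical stand-ins
for struct page, get_page_from_freelist() and set_page_refcounted(), not
kernel code.

/*
 * Stand-alone model of the pattern above.  The demo_* names are
 * hypothetical; this is not kernel code.
 */
#include <stdio.h>
#include <stdlib.h>

struct demo_page {
	int refcount;	/* 0 == "frozen": no reference handed out yet */
};

/* Analogue of get_page_from_freelist() after this patch: refcount stays 0. */
static struct demo_page *demo_get_page_frozen(void)
{
	return calloc(1, sizeof(struct demo_page));
}

/* Analogue of set_page_refcounted(): hand out the first reference. */
static void demo_set_page_refcounted(struct demo_page *page)
{
	page->refcount = 1;
}

int main(void)
{
	struct demo_page *page = demo_get_page_frozen();

	if (page) {
		/* The caller, not the allocator, makes the page refcounted. */
		demo_set_page_refcounted(page);
		printf("refcount after allocation: %d\n", page->refcount);
		free(page);
	}
	return 0;
}

Leaving the refcount untouched inside the allocator is what later allows
callers that want frozen pages to simply skip the set_page_refcounted() step.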