From mboxrd@z Thu Jan 1 00:00:00 1970
From: Usama Arif <usama.arif@bytedance.com>
To: linux-mm@kvack.org, muchun.song@linux.dev, mike.kravetz@oracle.com,
	rppt@kernel.org
Cc: linux-kernel@vger.kernel.org, fam.zheng@bytedance.com,
	liangma@liangbit.com, simon.evans@bytedance.com,
	punit.agrawal@bytedance.com, Usama Arif <usama.arif@bytedance.com>
Subject: [RFC 1/4] mm/hugetlb: Skip prep of tail pages when HVO is enabled
Date: Mon, 24 Jul 2023 14:46:41 +0100
Message-Id: <20230724134644.1299963-2-usama.arif@bytedance.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230724134644.1299963-1-usama.arif@bytedance.com>
References: <20230724134644.1299963-1-usama.arif@bytedance.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When vmemmap optimization (HVO) is possible, hugetlb_vmemmap_optimize
will free all the duplicated tail struct pages while preparing the new
hugepage, so there is no need to prep them beforehand. For 1G hugepages
on x86, this skips the prep of 262144 - 64 = 262080 struct pages per
hugepage.

Signed-off-by: Usama Arif <usama.arif@bytedance.com>
---
 mm/hugetlb.c         | 30 +++++++++++++++++++++---------
 mm/hugetlb_vmemmap.c |  2 +-
 mm/hugetlb_vmemmap.h |  1 +
 3 files changed, 23 insertions(+), 10 deletions(-)
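
As a sanity check on the figure quoted above, here is a minimal userspace
sketch (not kernel code; it assumes x86-64's 4096-byte base page, a
64-byte struct page, and HUGETLB_VMEMMAP_RESERVE_SIZE == PAGE_SIZE as
defined in mm/hugetlb_vmemmap.h at the time of this series):

#include <stdio.h>

int main(void)
{
	/* Assumed x86-64 values; not read from kernel headers. */
	const long page_size = 4096;            /* base page size */
	const long struct_page_size = 64;       /* sizeof(struct page) */
	const long hugepage_size = 1L << 30;    /* 1G hugepage */

	/* struct pages backing one 1G hugepage */
	long total = hugepage_size / page_size; /* 262144 */

	/*
	 * HVO keeps one reserved vmemmap page worth of struct pages and
	 * frees the duplicated tail pages, mirroring the patch's
	 * nr_pages = HUGETLB_VMEMMAP_RESERVE_SIZE / sizeof(struct page).
	 */
	long kept = page_size / struct_page_size; /* 64 */

	printf("struct pages skipped per hugepage: %ld\n", total - kept);
	return 0;
}

It prints 262080, matching the number in the commit message.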

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 64a3239b6407..24352abbb9e5 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1943,13 +1943,22 @@ static void prep_new_hugetlb_folio(struct hstate *h, struct folio *folio, int nid)
 }
 
 static bool __prep_compound_gigantic_folio(struct folio *folio,
-				unsigned int order, bool demote)
+				unsigned int order, bool demote,
+				bool hugetlb_vmemmap_optimizable)
 {
 	int i, j;
 	int nr_pages = 1 << order;
 	struct page *p;
 
 	__folio_clear_reserved(folio);
+
+	/*
+	 * No need to prep pages that will be freed later by hugetlb_vmemmap_optimize
+	 * in prep_new_huge_page. Hence, reduce nr_pages to the pages that will be kept.
+	 */
+	if (hugetlb_vmemmap_optimizable)
+		nr_pages = HUGETLB_VMEMMAP_RESERVE_SIZE / sizeof(struct page);
+
 	for (i = 0; i < nr_pages; i++) {
 		p = folio_page(folio, i);
 
@@ -2020,15 +2029,15 @@ static bool __prep_compound_gigantic_folio(struct folio *folio,
 }
 
 static bool prep_compound_gigantic_folio(struct folio *folio,
-				unsigned int order)
+				unsigned int order, bool hugetlb_vmemmap_optimizable)
 {
-	return __prep_compound_gigantic_folio(folio, order, false);
+	return __prep_compound_gigantic_folio(folio, order, false, hugetlb_vmemmap_optimizable);
 }
 
 static bool prep_compound_gigantic_folio_for_demote(struct folio *folio,
-				unsigned int order)
+				unsigned int order, bool hugetlb_vmemmap_optimizable)
 {
-	return __prep_compound_gigantic_folio(folio, order, true);
+	return __prep_compound_gigantic_folio(folio, order, true, hugetlb_vmemmap_optimizable);
 }
 
 /*
@@ -2185,7 +2194,8 @@ static struct folio *alloc_fresh_hugetlb_folio(struct hstate *h,
 	if (!folio)
 		return NULL;
 	if (hstate_is_gigantic(h)) {
-		if (!prep_compound_gigantic_folio(folio, huge_page_order(h))) {
+		if (!prep_compound_gigantic_folio(folio, huge_page_order(h),
+				vmemmap_should_optimize(h, &folio->page))) {
 			/*
 			 * Rare failure to convert pages to compound page.
 			 * Free pages and try again - ONCE!
@@ -3201,7 +3211,8 @@ static void __init gather_bootmem_prealloc(void)
 
 		VM_BUG_ON(!hstate_is_gigantic(h));
 		WARN_ON(folio_ref_count(folio) != 1);
-		if (prep_compound_gigantic_folio(folio, huge_page_order(h))) {
+		if (prep_compound_gigantic_folio(folio, huge_page_order(h),
+				vmemmap_should_optimize(h, page))) {
 			WARN_ON(folio_test_reserved(folio));
 			prep_new_hugetlb_folio(h, folio, folio_nid(folio));
 			free_huge_page(page); /* add to the hugepage allocator */
@@ -3624,8 +3635,9 @@ static int demote_free_hugetlb_folio(struct hstate *h, struct folio *folio)
 		subpage = folio_page(folio, i);
 		inner_folio = page_folio(subpage);
 		if (hstate_is_gigantic(target_hstate))
-			prep_compound_gigantic_folio_for_demote(inner_folio,
-							target_hstate->order);
+			prep_compound_gigantic_folio_for_demote(inner_folio,
+					target_hstate->order,
+					vmemmap_should_optimize(target_hstate, subpage));
 		else
 			prep_compound_page(subpage, target_hstate->order);
 		folio_change_private(inner_folio, NULL);
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index c2007ef5e9b0..b721e87de2b3 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -486,7 +486,7 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
 }
 
 /* Return true iff a HugeTLB whose vmemmap should and can be optimized. */
-static bool vmemmap_should_optimize(const struct hstate *h, const struct page *head)
+bool vmemmap_should_optimize(const struct hstate *h, const struct page *head)
 {
 	if (!READ_ONCE(vmemmap_optimize_enabled))
 		return false;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 25bd0e002431..3525c514c061 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -57,4 +57,5 @@ static inline bool hugetlb_vmemmap_optimizable(const struct hstate *h)
 {
 	return hugetlb_vmemmap_optimizable_size(h) != 0;
 }
+bool vmemmap_should_optimize(const struct hstate *h, const struct page *head);
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
-- 
2.25.1