Date: Fri, 21 Oct 2022 16:36:47 +0000
In-Reply-To: <20221021163703.3218176-1-jthoughton@google.com>
Mime-Version: 1.0
References: <20221021163703.3218176-1-jthoughton@google.com>
X-Mailer: git-send-email 2.38.0.135.g90850a2211-goog
Message-ID: <20221021163703.3218176-32-jthoughton@google.com>
Subject: [RFC PATCH v2 31/47] hugetlb: sort hstates in hugetlb_init_hstates
From: James Houghton <jthoughton@google.com>
To: Mike Kravetz, Muchun Song, Peter Xu
Cc: David Hildenbrand, David Rientjes, Axel Rasmussen, Mina Almasry,
    "Zach O'Keefe", Manish Mishra, Naoya Horiguchi,
    "Dr . David Alan Gilbert", "Matthew Wilcox (Oracle)", Vlastimil Babka,
    Baolin Wang, Miaohe Lin, Yang Shi, Andrew Morton, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, James Houghton
Content-Type: text/plain; charset="UTF-8"

When using HugeTLB high-granularity mapping, we need to go through the
supported hugepage sizes in decreasing order so that we pick the largest
size that works. Consider the case where we're faulting in a 1G hugepage
for the first time: we want hugetlb_fault/hugetlb_no_page to map it with
a PUD. By going through the sizes in decreasing order, we will find that
PUD_SIZE works before finding out that PMD_SIZE or PAGE_SIZE work too.

This commit also changes bootmem hugepages from storing hstate pointers
directly to storing the hstate sizes. The hstate pointers used for
boot-time-allocated hugepages become invalid after we sort the hstates.
`gather_bootmem_prealloc`, called after the hstates have been sorted, now
converts the size to the correct hstate.
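
To make the ordering argument above concrete, here is a minimal userspace
sketch (illustration only, not part of this patch). pick_mapping_size() and
the sizes[] table are hypothetical stand-ins for walking the sorted
hstates[] array: because the table is largest-first, the first size that is
suitably aligned and fits is also the largest one that works.

/*
 * Illustration only: why walking the supported sizes from largest to
 * smallest picks the biggest mapping that fits.
 */
#include <stdio.h>
#include <stddef.h>

/* Sizes sorted in decreasing order, like hstates[] after sorting. */
static const unsigned long sizes[] = {
	1UL << 30,	/* 1G (PUD) */
	1UL << 21,	/* 2M (PMD) */
	1UL << 12,	/* 4K (PTE) */
};

/*
 * Return the largest size that is aligned at @addr and does not run past
 * @end. Because the table is sorted largest-first, the first match wins.
 */
static unsigned long pick_mapping_size(unsigned long addr, unsigned long end)
{
	size_t i;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		unsigned long sz = sizes[i];

		if ((addr & (sz - 1)) == 0 && addr + sz <= end)
			return sz;
	}
	return 0;
}

int main(void)
{
	/* A 1G-aligned fault with 1G of room gets the 1G (PUD) size. */
	printf("%lx\n", pick_mapping_size(1UL << 30, 2UL << 30)); /* 40000000 */
	/* Only 2M-aligned, or not enough room: falls back to 2M. */
	printf("%lx\n", pick_mapping_size(1UL << 21, 1UL << 22)); /* 200000 */
	return 0;
}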
Signed-off-by: James Houghton <jthoughton@google.com>
---
 include/linux/hugetlb.h |  2 +-
 mm/hugetlb.c            | 49 ++++++++++++++++++++++++++++++++---------
 2 files changed, 40 insertions(+), 11 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index d305742e9d44..e25f97cdd086 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -772,7 +772,7 @@ struct hstate {
 
 struct huge_bootmem_page {
 	struct list_head list;
-	struct hstate *hstate;
+	unsigned long hstate_sz;
 };
 
 int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index bb0005d57cab..d6f07968156c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -34,6 +34,7 @@
 #include
 #include
 #include
+#include <linux/sort.h>
 
 #include
 #include
@@ -49,6 +50,10 @@
 
 int hugetlb_max_hstate __read_mostly;
 unsigned int default_hstate_idx;
+/*
+ * After hugetlb_init_hstates is called, hstates will be sorted from largest
+ * to smallest.
+ */
 struct hstate hstates[HUGE_MAX_HSTATE];
 
 #ifdef CONFIG_CMA
@@ -3189,7 +3194,7 @@ int __alloc_bootmem_huge_page(struct hstate *h, int nid)
 	/* Put them into a private list first because mem_map is not up yet */
 	INIT_LIST_HEAD(&m->list);
 	list_add(&m->list, &huge_boot_pages);
-	m->hstate = h;
+	m->hstate_sz = huge_page_size(h);
 	return 1;
 }
 
@@ -3203,7 +3208,7 @@ static void __init gather_bootmem_prealloc(void)
 
 	list_for_each_entry(m, &huge_boot_pages, list) {
 		struct page *page = virt_to_page(m);
-		struct hstate *h = m->hstate;
+		struct hstate *h = size_to_hstate(m->hstate_sz);
 
 		VM_BUG_ON(!hstate_is_gigantic(h));
 		WARN_ON(page_count(page) != 1);
@@ -3319,9 +3324,38 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 	kfree(node_alloc_noretry);
 }
 
+static int compare_hstates_decreasing(const void *a, const void *b)
+{
+	unsigned long sz_a = huge_page_size((const struct hstate *)a);
+	unsigned long sz_b = huge_page_size((const struct hstate *)b);
+
+	if (sz_a < sz_b)
+		return 1;
+	if (sz_a > sz_b)
+		return -1;
+	return 0;
+}
+
+static void sort_hstates(void)
+{
+	unsigned long default_hstate_sz = huge_page_size(&default_hstate);
+
+	/* Sort from largest to smallest. */
+	sort(hstates, hugetlb_max_hstate, sizeof(*hstates),
+	     compare_hstates_decreasing, NULL);
+
+	/*
+	 * We may have changed the location of the default hstate, so we need to
+	 * update it.
+	 */
+	default_hstate_idx = hstate_index(size_to_hstate(default_hstate_sz));
+}
+
 static void __init hugetlb_init_hstates(void)
 {
-	struct hstate *h, *h2;
+	struct hstate *h;
+
+	sort_hstates();
 
 	for_each_hstate(h) {
 		/* oversize hugepages were init'ed in early boot */
@@ -3340,13 +3374,8 @@ static void __init hugetlb_init_hstates(void)
 			continue;
 		if (hugetlb_cma_size && h->order <= HUGETLB_PAGE_ORDER)
 			continue;
-		for_each_hstate(h2) {
-			if (h2 == h)
-				continue;
-			if (h2->order < h->order &&
-			    h2->order > h->demote_order)
-				h->demote_order = h2->order;
-		}
+		if (h - 1 >= &hstates[0])
+			h->demote_order = huge_page_order(h - 1);
 	}
 }
 
-- 
2.38.0.135.g90850a2211-goog
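
As a further illustration (again not part of the patch), the descending sort
and the by-value tracking of the default entry can be mimicked in plain
userspace C. compare_sizes_decreasing() and sizes[] below are hypothetical
stand-ins for compare_hstates_decreasing() and hstates[]; the comparator
convention matches the kernel's sort(): a positive return orders 'a' after
'b', so "smaller sorts later" produces a largest-first array.

/*
 * Illustration only: a userspace analog of sort_hstates() using qsort().
 * The default entry is re-located by value after sorting, mirroring how
 * default_hstate_idx is recomputed from the default hstate's size.
 */
#include <stdio.h>
#include <stdlib.h>

static unsigned long sizes[] = { 1UL << 21, 1UL << 30, 1UL << 12 };
#define NR_SIZES (sizeof(sizes) / sizeof(sizes[0]))

static int compare_sizes_decreasing(const void *a, const void *b)
{
	unsigned long sz_a = *(const unsigned long *)a;
	unsigned long sz_b = *(const unsigned long *)b;

	if (sz_a < sz_b)
		return 1;
	if (sz_a > sz_b)
		return -1;
	return 0;
}

int main(void)
{
	/* Remember the default by value, not by index, across the sort. */
	unsigned long default_sz = sizes[0];	/* 2M before sorting */
	size_t i, default_idx = 0;

	qsort(sizes, NR_SIZES, sizeof(sizes[0]), compare_sizes_decreasing);

	for (i = 0; i < NR_SIZES; i++) {
		if (sizes[i] == default_sz)
			default_idx = i;
		printf("sizes[%zu] = %lx\n", i, sizes[i]);
	}
	printf("default index is now %zu\n", default_idx);	/* prints 1 */
	return 0;
}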