From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xueshi Hu <xueshi.hu@smartx.com>
To: mike.kravetz@oracle.com, muchun.song@linux.dev, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, Xueshi Hu <xueshi.hu@smartx.com>
Subject: [PATCH 2/3] mm/hugetlb: clean up hstate::max_huge_pages
Date: Sun, 30 Jul 2023 20:51:55 +0800
Message-Id: <20230730125156.207301-3-xueshi.hu@smartx.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230730125156.207301-1-xueshi.hu@smartx.com>
References: <20230730125156.207301-1-xueshi.hu@smartx.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Presently, the only remaining users of hstate::max_huge_pages are
hugetlb_sysctl_handler_common() and hugetlbfs_size_to_hpages(). The former
has already been switched to hstate::nr_huge_pages, and the latter can be
converted just as easily. After the hugetlb subsystem has been initialized,
hstate::max_huge_pages always equals persistent_huge_pages().
Maintaining that equality has proven to be a burden and a source of bugs
[1][2]. After this patch, hstate::max_huge_pages is only used while parsing
kernel command line parameters, and set_max_huge_pages() is renamed to
set_nr_huge_pages() to match, which also improves readability.

[1]: Commit a43a83c79b4f ("mm/hugetlb: fix incorrect update of max_huge_pages")
[2]: Commit c1470b33bb6e ("mm/hugetlb: fix incorrect hugepages count during mem hotplug")

Signed-off-by: Xueshi Hu <xueshi.hu@smartx.com>
---
 fs/hugetlbfs/inode.c |  2 +-
 mm/hugetlb.c         | 24 +++++-------------------
 2 files changed, 6 insertions(+), 20 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 316c4cebd3f3..cd1a3e4bf8fb 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -1375,7 +1375,7 @@ hugetlbfs_size_to_hpages(struct hstate *h, unsigned long long size_opt,
 
 	if (val_type == SIZE_PERCENT) {
 		size_opt <<= huge_page_shift(h);
-		size_opt *= h->max_huge_pages;
+		size_opt *= (h->nr_huge_pages - h->surplus_huge_pages);
 		do_div(size_opt, 100);
 	}
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 76af189053f0..56647235ab21 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2343,14 +2343,13 @@ int dissolve_free_huge_page(struct page *page)
 	}
 
 	remove_hugetlb_folio(h, folio, false);
-	h->max_huge_pages--;
 	spin_unlock_irq(&hugetlb_lock);
 
 	/*
 	 * Normally update_and_free_hugtlb_folio will allocate required vmemmmap
 	 * before freeing the page. update_and_free_hugtlb_folio will fail to
 	 * free the page if it can not allocate required vmemmap. We
-	 * need to adjust max_huge_pages if the page is not freed.
+	 * need to adjust nr_huge_pages if the page is not freed.
 	 * Attempt to allocate vmemmmap here so that we can take
 	 * appropriate action on failure.
 	 */
@@ -2360,7 +2359,6 @@ int dissolve_free_huge_page(struct page *page)
 	} else {
 		spin_lock_irq(&hugetlb_lock);
 		add_hugetlb_folio(h, folio, false);
-		h->max_huge_pages++;
 		spin_unlock_irq(&hugetlb_lock);
 	}
 
@@ -3274,8 +3272,6 @@ static void __init hugetlb_hstate_alloc_pages_onenode(struct hstate *h, int nid)
 		string_get_size(huge_page_size(h), 1, STRING_UNITS_2, buf, 32);
 		pr_warn("HugeTLB: allocating %u of page size %s failed node%d. Only allocated %lu hugepages.\n",
 			h->max_huge_pages_node[nid], buf, nid, i);
-		h->max_huge_pages -= (h->max_huge_pages_node[nid] - i);
-		h->max_huge_pages_node[nid] = i;
 	}
 }
 
 static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
@@ -3336,7 +3332,6 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 		string_get_size(huge_page_size(h), 1, STRING_UNITS_2, buf, 32);
 		pr_warn("HugeTLB: allocating %lu of page size %s failed. Only allocated %lu hugepages.\n",
 			h->max_huge_pages, buf, i);
-		h->max_huge_pages = i;
 	}
 	kfree(node_alloc_noretry);
 }
@@ -3460,7 +3455,7 @@ static int adjust_pool_surplus(struct hstate *h, nodemask_t *nodes_allowed,
 }
 
 #define persistent_huge_pages(h) (h->nr_huge_pages - h->surplus_huge_pages)
-static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
+static int set_nr_huge_pages(struct hstate *h, unsigned long count, int nid,
 			      nodemask_t *nodes_allowed)
 {
 	unsigned long min_count, ret;
@@ -3601,7 +3596,6 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
 		break;
 	}
 out:
-	h->max_huge_pages = persistent_huge_pages(h);
 	spin_unlock_irq(&hugetlb_lock);
 	mutex_unlock(&h->resize_lock);
 
@@ -3639,7 +3633,7 @@ static int demote_free_hugetlb_folio(struct hstate *h, struct folio *folio)
 	destroy_compound_hugetlb_folio_for_demote(folio, huge_page_order(h));
 
 	/*
-	 * Taking target hstate mutex synchronizes with set_max_huge_pages.
+	 * Taking target hstate mutex synchronizes with set_nr_huge_pages.
 	 * Without the mutex, pages added to target hstate could be marked
 	 * as surplus.
 	 *
@@ -3664,14 +3658,6 @@ static int demote_free_hugetlb_folio(struct hstate *h, struct folio *folio)
 
 	spin_lock_irq(&hugetlb_lock);
 
-	/*
-	 * Not absolutely necessary, but for consistency update max_huge_pages
-	 * based on pool changes for the demoted page.
-	 */
-	h->max_huge_pages--;
-	target_hstate->max_huge_pages +=
-		pages_per_huge_page(h) / pages_per_huge_page(target_hstate);
-
 	return rc;
 }
 
@@ -3770,13 +3756,13 @@ static ssize_t __nr_hugepages_store_common(bool obey_mempolicy,
 	} else {
 		/*
 		 * Node specific request. count adjustment happens in
-		 * set_max_huge_pages() after acquiring hugetlb_lock.
+		 * set_nr_huge_pages() after acquiring hugetlb_lock.
 		 */
 		init_nodemask_of_node(&nodes_allowed, nid);
 		n_mask = &nodes_allowed;
 	}
 
-	err = set_max_huge_pages(h, count, nid, n_mask);
+	err = set_nr_huge_pages(h, count, nid, n_mask);
 
 	return err ? err : len;
 }
-- 
2.40.1