From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xueshi Hu <xueshi.hu@smartx.com>
To: mike.kravetz@oracle.com, muchun.song@linux.dev, corbet@lwn.net,
	akpm@linux-foundation.org, n-horiguchi@ah.jp.nec.com, osalvador@suse.de
Cc: linux-mm@kvack.org, Xueshi Hu <xueshi.hu@smartx.com>
Subject: [PATCH v2 3/4] mm/hugetlb: fix node's huge page allocation when
	there are surplus pages
Date: Sun, 6 Aug 2023 15:48:52 +0800
Message-Id: <20230806074853.317203-4-xueshi.hu@smartx.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230806074853.317203-1-xueshi.hu@smartx.com>
References: <20230806074853.317203-1-xueshi.hu@smartx.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
In set_nr_huge_pages(), the local variable "count" is used to record
persistent_huge_pages(), but when it comes to per-node huge page
allocation, its semantics change to nr_huge_pages. When surplus huge
pages exist and the interface under
/sys/devices/system/node/node*/hugepages is used to change the huge
page pool size, this mismatch can result in the allocation of an
unexpected number of huge pages.
Steps to reproduce the bug:

Starting with:

                 Node 0    Node 1    Total
HugePages_Total    0.00      0.00     0.00
HugePages_Free     0.00      0.00     0.00
HugePages_Surp     0.00      0.00     0.00

Create 100 huge pages in Node 0 and consume them, then set Node 0's
nr_hugepages to 0. This yields:

                 Node 0    Node 1    Total
HugePages_Total  200.00      0.00   200.00
HugePages_Free     0.00      0.00     0.00
HugePages_Surp   200.00      0.00   200.00

Write 100 to Node 1's nr_hugepages:

	echo 100 > /sys/devices/system/node/node1/\
		hugepages/hugepages-2048kB/nr_hugepages

This gives:

                 Node 0    Node 1    Total
HugePages_Total  200.00    400.00   600.00
HugePages_Free     0.00    400.00   400.00
HugePages_Surp   200.00      0.00   200.00

The kernel is expected to create only 100 huge pages, but it gives 200.

Fixes: fd875dca7c71 ("hugetlbfs: fix potential over/underflow setting node specific nr_hugepages")
Signed-off-by: Xueshi Hu <xueshi.hu@smartx.com>
---
 mm/hugetlb.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 56647235ab21..8ed4fffdebda 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3490,7 +3490,9 @@ static int set_nr_huge_pages(struct hstate *h, unsigned long count, int nid,
 	if (nid != NUMA_NO_NODE) {
 		unsigned long old_count = count;
 
-		count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
+		count += persistent_huge_pages(h) -
+			 (h->nr_huge_pages_node[nid] -
+			  h->surplus_huge_pages_node[nid]);
 		/*
 		 * User may have specified a large count value which caused the
 		 * above calculation to overflow. In this case, they wanted
-- 
2.40.1