Date: Tue, 28 Sep 2021 09:47:31 -0700
From: Mike Rapoport <rppt@kernel.org>
To: Zhenguo Yao <yaozhenguo1@gmail.com>
Cc: mike.kravetz@oracle.com, mpe@ellerman.id.au, benh@kernel.crashing.org,
	paulus@samba.org, corbet@lwn.net, akpm@linux-foundation.org,
	yaozhenguo@jd.com, willy@infradead.org, linux-kernel@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v7] hugetlbfs: Extend the definition of hugepages parameter to support node allocation
In-Reply-To: <20210927104149.46884-1-yaozhenguo1@gmail.com>
Hi,

On Mon, Sep 27, 2021 at 06:41:49PM +0800, Zhenguo Yao wrote:
> We can specify the number of hugepages to allocate at boot, but at
> present those hugepages are balanced across all NUMA nodes. In some
> scenarios we only need hugepages on one node. For example, DPDK needs
> hugepages that are on the same node as the NIC: if DPDK needs four 1G
> hugepages on node1 and the system has 16 NUMA nodes, we must reserve
> 64 hugepages on the kernel cmdline even though only four of them are
> used; the others would have to be freed after boot. If system memory
> is low (for example, 64G), that is an impossible task. So, extend the
> hugepages parameter to support specifying hugepages on a specific
> node. For example, adding the following parameter:
>
> 	hugepagesz=1G hugepages=0:1,1:3
>
> will allocate 1 hugepage on node0 and 3 hugepages on node1.
>
> Signed-off-by: Zhenguo Yao
> ---

...

> diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
> index 9a75ba078e1b..dd40ce6e7565 100644
> --- a/arch/powerpc/mm/hugetlbpage.c
> +++ b/arch/powerpc/mm/hugetlbpage.c
> @@ -229,17 +229,22 @@ static int __init pseries_alloc_bootmem_huge_page(struct hstate *hstate)
>  	m->hstate = hstate;
>  	return 1;
>  }
> +
> +bool __init node_specific_alloc_support(void)

I'd suggest namespacing this to hugetlb, e.g. hugetlb_node_alloc_supported().

> +{
> +	return false;
> +}
>  #endif
> 
> 
> -int __init alloc_bootmem_huge_page(struct hstate *h)
> +int __init alloc_bootmem_huge_page(struct hstate *h, int nid)
>  {
> 
>  #ifdef CONFIG_PPC_BOOK3S_64
>  	if (firmware_has_feature(FW_FEATURE_LPAR) && !radix_enabled())
>  		return pseries_alloc_bootmem_huge_page(h);
>  #endif
> -	return __alloc_bootmem_huge_page(h);
> +	return __alloc_bootmem_huge_page(h, nid);
>  }
> 
>  #ifndef CONFIG_PPC_BOOK3S_64

...

> @@ -2868,33 +2869,41 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
>  	return ERR_PTR(-ENOSPC);
>  }
> 
> -int alloc_bootmem_huge_page(struct hstate *h)
> +int alloc_bootmem_huge_page(struct hstate *h, int nid)
>  	__attribute__ ((weak, alias("__alloc_bootmem_huge_page")));
> -int __alloc_bootmem_huge_page(struct hstate *h)
> +int __alloc_bootmem_huge_page(struct hstate *h, int nid)
>  {
>  	struct huge_bootmem_page *m;
>  	int nr_nodes, node;
> 
> +	if (nid >= nr_online_nodes)
> +		return 0;
> +	/* do node specific alloc */
> +	if (nid != NUMA_NO_NODE) {
> +		m = memblock_alloc_try_nid_raw(huge_page_size(h), huge_page_size(h),
> +				0, MEMBLOCK_ALLOC_ACCESSIBLE, nid);
> +		if (m)
> +			goto found;
> +		else
> +			return 0;

Nit: you could make it a bit simpler with

	if (!m)
		return 0;
	goto found;

> +	}
> +	/* do all node balanced alloc */
>  	for_each_node_mask_to_alloc(h, nr_nodes, node, &node_states[N_MEMORY]) {
> -		void *addr;
> -
> -		addr = memblock_alloc_try_nid_raw(
> +		m = memblock_alloc_try_nid_raw(
>  				huge_page_size(h), huge_page_size(h),
>  				0, MEMBLOCK_ALLOC_ACCESSIBLE, node);
> -		if (addr) {
> -			/*
> -			 * Use the beginning of the huge page to store the
> -			 * huge_bootmem_page struct (until gather_bootmem
> -			 * puts them into the mem_map).
> -			 */
> -			m = addr;
> +		/*
> +		 * Use the beginning of the huge page to store the
> +		 * huge_bootmem_page struct (until gather_bootmem
> +		 * puts them into the mem_map).
> +		 */
> +		if (m)
>  			goto found;
> -		}
> +		else
> +			return 0;

ditto

>  	}
> -	return 0;
> 
>  found:
> -	BUG_ON(!IS_ALIGNED(virt_to_phys(m), huge_page_size(h)));
>  	/* Put them into a private list first because mem_map is not up yet */
>  	INIT_LIST_HEAD(&m->list);
>  	list_add(&m->list, &huge_boot_pages);
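Just to illustrate what I mean, here is a completely untested sketch of the
two allocation paths with both simplifications applied. Everything in it is
taken from the code quoted above, only restructured; I haven't compiled it:

	int __init __alloc_bootmem_huge_page(struct hstate *h, int nid)
	{
		struct huge_bootmem_page *m = NULL;
		int nr_nodes, node;

		if (nid >= nr_online_nodes)
			return 0;

		/* do node specific alloc: try only the requested node */
		if (nid != NUMA_NO_NODE) {
			m = memblock_alloc_try_nid_raw(huge_page_size(h),
					huge_page_size(h), 0,
					MEMBLOCK_ALLOC_ACCESSIBLE, nid);
			if (!m)
				return 0;
			goto found;
		}

		/* do all node balanced alloc */
		for_each_node_mask_to_alloc(h, nr_nodes, node,
					    &node_states[N_MEMORY]) {
			m = memblock_alloc_try_nid_raw(huge_page_size(h),
					huge_page_size(h), 0,
					MEMBLOCK_ALLOC_ACCESSIBLE, node);
			if (!m)
				return 0;
			goto found;
		}

	found:
		/*
		 * Use the beginning of the huge page to store the
		 * huge_bootmem_page struct (until gather_bootmem puts
		 * them into the mem_map).
		 */
		INIT_LIST_HEAD(&m->list);
		list_add(&m->list, &huge_boot_pages);
		/* ... rest as in your patch ... */
	}

The early returns keep the success path linear and drop the else branches,
which I think reads a bit better.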
-- 
Sincerely yours,
Mike.