From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 9 Mar 2020 16:27:33 -0700
From: Andrew Morton
To: Roman Gushchin
Cc: Johannes Weiner, Michal Hocko, Rik van Riel
Subject: Re: [PATCH] mm: hugetlb: optionally allocate gigantic hugepages using cma
Message-Id: <20200309162733.3e5488f0410bffd9a9461330@linux-foundation.org>
In-Reply-To: <20200309223216.1974290-1-guro@fb.com>
References: <20200309223216.1974290-1-guro@fb.com>

On Mon, 9 Mar 2020 15:32:16 -0700 Roman Gushchin wrote:
("hugetlb: add support for gigantic page allocation > at runtime") has added the run-time allocation of gigantic pages. However > it actually works only at early stages of the system loading, when > the majority of memory is free. After some time the memory gets > fragmented by non-movable pages, so the chances to find a contiguous > 1 GB block are getting close to zero. Even dropping caches manually > doesn't help a lot. > > At large scale rebooting servers in order to allocate gigantic hugepages > is quite expensive and complex. At the same time keeping some constant > percentage of memory in reserved hugepages even if the workload isn't > using it is a big waste: not all workloads can benefit from using 1 GB > pages. > > The following solution can solve the problem: > 1) On boot time a dedicated cma area* is reserved. The size is passed > as a kernel argument. > 2) Run-time allocations of gigantic hugepages are performed using the > cma allocator and the dedicated cma area > > In this case gigantic hugepages can be allocated successfully with a > high probability, however the memory isn't completely wasted if nobody > is using 1GB hugepages: it can be used for pagecache, anon memory, > THPs, etc. > > * On a multi-node machine a per-node cma area is allocated on each node. > Following gigantic hugetlb allocation are using the first available > numa node if the mask isn't specified by a user. > > Usage: > 1) configure the kernel to allocate a cma area for hugetlb allocations: > pass hugetlb_cma=10G as a kernel argument > > 2) allocate hugetlb pages as usual, e.g. > echo 10 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages > > If the option isn't enabled or the allocation of the cma area failed, > the current behavior of the system is preserved. > > Only x86 is covered by this patch, but it's trivial to extend it to > cover other architectures as well. > Sounds promising. I'm not seeing any dependencies on CONFIG_CMA in there. Does the code actually compile if CONFIG_CMA=n? If yes, then does it add unneeded bloat?