From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 1 Jun 2022 18:48:13 +0800
From: Muchun Song
To: David Hildenbrand, mike.kravetz@oracle.com
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, smuchun@bytedance.com
Subject: Re: [PATCH 1/3] mm: hugetlb_vmemmap: cleanup hugetlb_vmemmap related
 functions
References: <20220404074652.68024-1-songmuchun@bytedance.com>
 <20220404074652.68024-2-songmuchun@bytedance.com>
 <087817e3-98ce-09f6-9ae9-68e544f43775@redhat.com>
In-Reply-To: <087817e3-98ce-09f6-9ae9-68e544f43775@redhat.com>

On Wed, Jun 01, 2022 at 11:28:44AM +0200, David Hildenbrand wrote:
> On 04.04.22 09:46, Muchun Song wrote:
> > The word "free" is not expressive enough to describe the feature of
> > optimizing the vmemmap pages associated with each HugeTLB page, so rename
> > this keyword to "optimize". Also, some function names are prefixed with
> > "huge_page" instead of "hugetlb", which is easily confused with THP. In
> > this patch, clean up the related functions to make the code clearer and
> > more expressive.
>
> No strong opinion (I remember I kicked off the discussion), but I was
> wondering if instead of alloc vs. free we could be using something like
> optimize vs. restore/rollback.
>
> E.g., hugetlb_vmemmap_optimize() vs. hugetlb_vmemmap_restore().
>

I think it is a good suggestion.

> Maybe there are other suggestions?
>
> >
> > Signed-off-by: Muchun Song
> > ---
> >  include/linux/hugetlb.h |  2 +-
> >  mm/hugetlb.c            | 10 +++++-----
> >  mm/hugetlb_vmemmap.c    | 42 ++++++++++++++++++++----------------------
> >  mm/hugetlb_vmemmap.h    | 20 ++++++++++----------
> >  4 files changed, 36 insertions(+), 38 deletions(-)
> >
> > diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> > index 53c1b6082a4c..c16fbb1228a3 100644
> > --- a/include/linux/hugetlb.h
> > +++ b/include/linux/hugetlb.h
> > @@ -618,7 +618,7 @@ struct hstate {
> >  	unsigned int free_huge_pages_node[MAX_NUMNODES];
> >  	unsigned int surplus_huge_pages_node[MAX_NUMNODES];
> >  #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> > -	unsigned int nr_free_vmemmap_pages;
> > +	unsigned int optimize_vmemmap_pages;
>
> I suggest converting that into a bool and just calling it
>
> "bool optimize_vmemmap_pages".
>
> You can easily compute what hugetlb_vmemmap_init() computes at runtime from
> the page and RESERVE_VMEMMAP_NR, right?
>

Right. A little overhead, but hugetlb_vmemmap_alloc() is not a hot path, so
maybe we can accept the increased overhead of calculating it at runtime (a
rough sketch of the kind of calculation I mean is below, after the quoted
context).

Hi Mike,

Do you have any objections to this? If not, I think we can do it in a
separate patch.

> At least the calculation in alloc_huge_page_vmemmap() and
> free_huge_page_vmemmap() becomes *less* weird for me if the magic value
> RESERVE_VMEMMAP_NR isn't used explicitly for vmemmap_addr but implicitly
> for vmemmap_end.
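For reference, the kind of runtime calculation I have in mind is roughly the
untested sketch below. The helper name is only illustrative, and
RESERVE_VMEMMAP_NR stands for however many vmemmap pages must stay mapped per
HugeTLB page (whatever mm/hugetlb_vmemmap.c defines):

/* Untested sketch, not part of this patch. */
static unsigned int hugetlb_optimizable_vmemmap_pages(struct hstate *h)
{
	unsigned int vmemmap_pages;

	/* Number of pages backing the struct pages of one HugeTLB page. */
	vmemmap_pages = (pages_per_huge_page(h) * sizeof(struct page)) >> PAGE_SHIFT;

	/* The reserved vmemmap page(s) can never be optimized away. */
	if (vmemmap_pages <= RESERVE_VMEMMAP_NR)
		return 0;

	return vmemmap_pages - RESERVE_VMEMMAP_NR;
}

With something like that, "bool optimize_vmemmap_pages" (or no field at all)
would be enough in struct hstate, and callers would recompute the count on
demand.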
> >  #endif
> >  #ifdef CONFIG_CGROUP_HUGETLB
> >  	/* cgroup control files */
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index dd642cfc538b..1f9fbdddc86b 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1540,7 +1540,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
> >  	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
> >  		return;
> >
> > -	if (alloc_huge_page_vmemmap(h, page)) {
> > +	if (hugetlb_vmemmap_alloc(h, page)) {
> >  		spin_lock_irq(&hugetlb_lock);
> >  		/*
> >  		 * If we cannot allocate vmemmap pages, just refuse to free the
> > @@ -1617,7 +1617,7 @@ static DECLARE_WORK(free_hpage_work, free_hpage_workfn);
> >
> >  static inline void flush_free_hpage_work(struct hstate *h)
> >  {
> > -	if (free_vmemmap_pages_per_hpage(h))
> > +	if (hugetlb_optimize_vmemmap_pages(h))
>
> It might be reasonable to call that hugetlb_should_optimize_vmemmap()
> then, letting it return a bool.
>

How about the name "hugetlb_vmemmap_optimizable()"? "should" seems to tell the
user that this hugetlb page should be optimized; however, the optimization also
depends on "hugetlb_free_vmemmap=on". "optimizable" seems more appropriate,
right? (A rough sketch of such a helper is below.)

Thanks.
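For illustration, the helper I have in mind would be roughly the untested
sketch below; the name follows this thread and is only a suggestion. It simply
wraps the per-hstate page count, which, as noted above, is non-zero only when
"hugetlb_free_vmemmap=on" is in effect:

/* Untested sketch, not part of this patch. */
static inline bool hugetlb_vmemmap_optimizable(struct hstate *h)
{
	/*
	 * True only if this hstate has vmemmap pages that may be freed,
	 * i.e. the feature is enabled and at least one vmemmap page per
	 * HugeTLB page is allowed to go away.
	 */
	return hugetlb_optimize_vmemmap_pages(h) != 0;
}

Callers such as flush_free_hpage_work() above would then read
"if (hugetlb_vmemmap_optimizable(h))".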