From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 19 Sep 2023 11:10:59 +0800
From: Muchun Song <muchun.song@linux.dev>
Subject: Re: [PATCH v4 3/8] hugetlb: perform vmemmap optimization on a list of pages
To: Mike Kravetz, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc:
 Muchun Song <muchun.song@linux.dev>, Joao Martins, Oscar Salvador,
 David Hildenbrand, Miaohe Lin, David Rientjes, Anshuman Khandual,
 Naoya Horiguchi, Barry Song <21cnbao@gmail.com>, Michal Hocko,
 Matthew Wilcox, Xiongchun Duan, Andrew Morton
In-Reply-To: <20230918230202.254631-4-mike.kravetz@oracle.com>
References: <20230918230202.254631-1-mike.kravetz@oracle.com>
 <20230918230202.254631-4-mike.kravetz@oracle.com>

On 2023/9/19 07:01, Mike Kravetz wrote:
> When adding hugetlb pages to the pool, we first create a list of the
> allocated pages before adding to the pool. Pass this list of pages to a
> new routine hugetlb_vmemmap_optimize_folios() for vmemmap optimization.
>
> Due to significant differences in vmemmap initialization for bootmem
> allocated hugetlb pages, a new routine prep_and_add_bootmem_folios
> is created.
>
> We also modify the routine vmemmap_should_optimize() to check for pages
> that are already optimized. There are code paths that might request
> vmemmap optimization twice, and we want to make sure this is not
> attempted.
>
> Signed-off-by: Mike Kravetz
> ---
>   mm/hugetlb.c         | 50 +++++++++++++++++++++++++++++++++++++-------
>   mm/hugetlb_vmemmap.c | 11 ++++++++++
>   mm/hugetlb_vmemmap.h |  5 +++++
>   3 files changed, 58 insertions(+), 8 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 8624286be273..d6f3db3c1313 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2269,6 +2269,11 @@ static void prep_and_add_allocated_folios(struct hstate *h,
>   {
>   	struct folio *folio, *tmp_f;
>
> +	/*
> +	 * Send list for bulk vmemmap optimization processing
> +	 */

According to the kernel coding-style documentation, the format for a
one-line comment is "/* ... */" on a single line.
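
That is, since the comment text fits on one line, it could simply become
(same text as in the patch, just restyled):

	/* Send list for bulk vmemmap optimization processing */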
> +	hugetlb_vmemmap_optimize_folios(h, folio_list);
> +
>   	/*
>   	 * Add all new pool pages to free lists in one lock cycle
>   	 */
> @@ -3309,6 +3314,40 @@ static void __init hugetlb_folio_init_vmemmap(struct folio *folio,
>   	prep_compound_head((struct page *)folio, huge_page_order(h));
>   }
>
> +static void __init prep_and_add_bootmem_folios(struct hstate *h,
> +		struct list_head *folio_list)
> +{
> +	struct folio *folio, *tmp_f;
> +
> +	/*
> +	 * Send list for bulk vmemmap optimization processing
> +	 */
> +	hugetlb_vmemmap_optimize_folios(h, folio_list);
> +
> +	/*
> +	 * Add all new pool pages to free lists in one lock cycle
> +	 */
> +	spin_lock_irq(&hugetlb_lock);
> +	list_for_each_entry_safe(folio, tmp_f, folio_list, lru) {
> +		if (!folio_test_hugetlb_vmemmap_optimized(folio)) {
> +			/*
> +			 * If HVO fails, initialize all tail struct pages
> +			 * We do not worry about potential long lock hold
> +			 * time as this is early in boot and there should
> +			 * be no contention.
> +			 */
> +			hugetlb_folio_init_tail_vmemmap(folio,
> +					HUGETLB_VMEMMAP_RESERVE_PAGES,
> +					pages_per_huge_page(h));
> +		}
> +		__prep_account_new_huge_page(h, folio_nid(folio));
> +		enqueue_hugetlb_folio(h, folio);
> +	}
> +	spin_unlock_irq(&hugetlb_lock);
> +
> +	INIT_LIST_HEAD(folio_list);

I'm not sure what the purpose of reinitializing the list head here is.
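
If I am reading enqueue_hugetlb_folio() correctly, it moves each folio
onto the per-node free list via list_move(), and list_move() first
unlinks the entry from the list it is currently on. So folio_list
should already be empty once the loop above finishes. A simplified
sketch of my understanding (illustrative, not the actual code):

	spin_lock_irq(&hugetlb_lock);
	list_for_each_entry_safe(folio, tmp_f, folio_list, lru) {
		/*
		 * enqueue_hugetlb_folio() ends up calling
		 * list_move(&folio->lru, <per-node free list>), which
		 * deletes the entry from folio_list before adding it
		 * to the free list.
		 */
		enqueue_hugetlb_folio(h, folio);
	}
	spin_unlock_irq(&hugetlb_lock);
	/* folio_list is already an empty list head at this point. */

If that is right, the INIT_LIST_HEAD() above looks redundant.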
> +}
> +
>   /*
>    * Put bootmem huge pages into the standard lists after mem_map is up.
>    * Note: This only applies to gigantic (order > MAX_ORDER) pages.
> @@ -3329,7 +3368,7 @@ static void __init gather_bootmem_prealloc(void)
>   		 * in this list.  If so, process each size separately.
>   		 */
>   		if (h != prev_h && prev_h != NULL)
> -			prep_and_add_allocated_folios(prev_h, &folio_list);
> +			prep_and_add_bootmem_folios(prev_h, &folio_list);
>   		prev_h = h;
>
>   		VM_BUG_ON(!hstate_is_gigantic(h));
> @@ -3337,12 +3376,7 @@
>
>   		hugetlb_folio_init_vmemmap(folio, h,
>   					HUGETLB_VMEMMAP_RESERVE_PAGES);
> -		__prep_new_hugetlb_folio(h, folio);
> -		/* If HVO fails, initialize all tail struct pages */
> -		if (!HPageVmemmapOptimized(&folio->page))
> -			hugetlb_folio_init_tail_vmemmap(folio,
> -					HUGETLB_VMEMMAP_RESERVE_PAGES,
> -					pages_per_huge_page(h));
> +		init_new_hugetlb_folio(h, folio);
>   		list_add(&folio->lru, &folio_list);
>
>   		/*
> @@ -3354,7 +3388,7 @@
>   		cond_resched();
>   	}
>
> -	prep_and_add_allocated_folios(h, &folio_list);
> +	prep_and_add_bootmem_folios(h, &folio_list);
>   }
>
>   static void __init hugetlb_hstate_alloc_pages_onenode(struct hstate *h, int nid)
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index 76682d1d79a7..4558b814ffab 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -483,6 +483,9 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
>   /* Return true iff a HugeTLB whose vmemmap should and can be optimized. */
>   static bool vmemmap_should_optimize(const struct hstate *h, const struct page *head)
>   {
> +	if (HPageVmemmapOptimized((struct page *)head))
> +		return false;
> +
>   	if (!READ_ONCE(vmemmap_optimize_enabled))
>   		return false;
>
> @@ -572,6 +575,14 @@ void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head)
>   	SetHPageVmemmapOptimized(head);
>   }
>
> +void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list)
> +{
> +	struct folio *folio;
> +
> +	list_for_each_entry(folio, folio_list, lru)
> +		hugetlb_vmemmap_optimize(h, &folio->page);
> +}
> +
>   static struct ctl_table hugetlb_vmemmap_sysctls[] = {
>   	{
>   		.procname	= "hugetlb_optimize_vmemmap",
> diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
> index 4573899855d7..c512e388dbb4 100644
> --- a/mm/hugetlb_vmemmap.h
> +++ b/mm/hugetlb_vmemmap.h
> @@ -20,6 +20,7 @@
>   #ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
>   int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head);
>   void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head);
> +void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list);
>
>   static inline unsigned int hugetlb_vmemmap_size(const struct hstate *h)
>   {
> @@ -48,6 +49,10 @@ static inline void hugetlb_vmemmap_optimize(const struct hstate *h, struct page
>   {
>   }
>
> +static inline void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list)
> +{
> +}
> +
>   static inline unsigned int hugetlb_vmemmap_optimizable_size(const struct hstate *h)
>   {
>   	return 0;
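
One more note on the vmemmap_should_optimize() change: with the new
HPageVmemmapOptimized() check at the top, hugetlb_vmemmap_optimize()
becomes idempotent, which covers the paths mentioned in the commit
message that might request optimization twice. An illustrative sketch
(not code from this series):

	/* First call optimizes the vmemmap and sets HPageVmemmapOptimized. */
	hugetlb_vmemmap_optimize(h, head);

	/*
	 * A second call is now harmless: vmemmap_should_optimize() sees
	 * the flag and returns false, so this is a no-op.
	 */
	hugetlb_vmemmap_optimize(h, head);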