Subject: Re: [PATCH v4 3/8] hugetlb: perform vmemmap optimization on a list of pages
From: Muchun Song <muchun.song@linux.dev>
Date: Wed, 20 Sep 2023 11:05:30 +0800
To: Mike Kravetz
Cc: Linux-MM, LKML, Muchun Song, Joao Martins, Oscar Salvador,
    David Hildenbrand, Miaohe Lin, David Rientjes, Anshuman Khandual,
    Naoya Horiguchi, Barry Song <21cnbao@gmail.com>, Michal Hocko,
    Matthew Wilcox, Xiongchun Duan, Andrew Morton
Message-Id: <57BC1D0C-23B2-4363-8B14-9602B69D53D5@linux.dev>
In-Reply-To: <20230919204954.GA425719@monkey>
References: <20230918230202.254631-1-mike.kravetz@oracle.com>
            <20230918230202.254631-4-mike.kravetz@oracle.com>
            <20230919204954.GA425719@monkey>
> On Sep 20, 2023, at 04:49, Mike Kravetz wrote:
> 
> On 09/19/23 11:10, Muchun Song wrote:
>> 
>> 
>> On 2023/9/19 07:01, Mike Kravetz wrote:
>>> When adding hugetlb pages to the pool, we first create a list of the
>>> allocated pages before adding to the pool. Pass this list of pages to a
>>> new routine hugetlb_vmemmap_optimize_folios() for vmemmap optimization.
>>> 
>>> Due to significant differences in vmemmap initialization for bootmem
>>> allocated hugetlb pages, a new routine prep_and_add_bootmem_folios
>>> is created.
>>> 
>>> We also modify the routine vmemmap_should_optimize() to check for pages
>>> that are already optimized. There are code paths that might request
>>> vmemmap optimization twice and we want to make sure this is not
>>> attempted.
>>> 
>>> Signed-off-by: Mike Kravetz
>>> ---
>>>  mm/hugetlb.c         | 50 +++++++++++++++++++++++++++++++++++++-------
>>>  mm/hugetlb_vmemmap.c | 11 ++++++++++
>>>  mm/hugetlb_vmemmap.h |  5 +++++
>>>  3 files changed, 58 insertions(+), 8 deletions(-)
>>> 
>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>>> index 8624286be273..d6f3db3c1313 100644
>>> --- a/mm/hugetlb.c
>>> +++ b/mm/hugetlb.c
>>> @@ -2269,6 +2269,11 @@ static void prep_and_add_allocated_folios(struct hstate *h,
>>>  {
>>>  	struct folio *folio, *tmp_f;
>>>  
>>> +	/*
>>> +	 * Send list for bulk vmemmap optimization processing
>>> +	 */
>> 
>> From the kernel development document, the one-line comment format is "/* */".
>> 
> 
> Will change the comments introduced here.

BTW, there are some other places as well; please update them all, thanks.
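For clarity, the one-line comment style being requested would look something like the following sketch of the hunk (illustrative only, not part of the posted patch):

```diff
-	/*
-	 * Send list for bulk vmemmap optimization processing
-	 */
+	/* Send list for bulk vmemmap optimization processing */
 	hugetlb_vmemmap_optimize_folios(h, folio_list);
```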
> 
>>> +	hugetlb_vmemmap_optimize_folios(h, folio_list);
>>> +
>>>  	/*
>>>  	 * Add all new pool pages to free lists in one lock cycle
>>>  	 */
>>> @@ -3309,6 +3314,40 @@ static void __init hugetlb_folio_init_vmemmap(struct folio *folio,
>>>  	prep_compound_head((struct page *)folio, huge_page_order(h));
>>>  }
>>>  
>>> +static void __init prep_and_add_bootmem_folios(struct hstate *h,
>>> +		struct list_head *folio_list)
>>> +{
>>> +	struct folio *folio, *tmp_f;
>>> +
>>> +	/*
>>> +	 * Send list for bulk vmemmap optimization processing
>>> +	 */
>>> +	hugetlb_vmemmap_optimize_folios(h, folio_list);
>>> +
>>> +	/*
>>> +	 * Add all new pool pages to free lists in one lock cycle
>>> +	 */
>>> +	spin_lock_irq(&hugetlb_lock);
>>> +	list_for_each_entry_safe(folio, tmp_f, folio_list, lru) {
>>> +		if (!folio_test_hugetlb_vmemmap_optimized(folio)) {
>>> +			/*
>>> +			 * If HVO fails, initialize all tail struct pages.
>>> +			 * We do not worry about potential long lock hold
>>> +			 * time as this is early in boot and there should
>>> +			 * be no contention.
>>> +			 */
>>> +			hugetlb_folio_init_tail_vmemmap(folio,
>>> +					HUGETLB_VMEMMAP_RESERVE_PAGES,
>>> +					pages_per_huge_page(h));
>>> +		}
>>> +		__prep_account_new_huge_page(h, folio_nid(folio));
>>> +		enqueue_hugetlb_folio(h, folio);
>>> +	}
>>> +	spin_unlock_irq(&hugetlb_lock);
>>> +
>>> +	INIT_LIST_HEAD(folio_list);
>> 
>> I'm not sure what the purpose of the reinitialization of the list head is?
>> 
> 
> There really is no purpose. This was copied from
> prep_and_add_allocated_folios which also has this unnecessary call. It is
> unnecessary as enqueue_hugetlb_folio() will do a list_move for each
> folio on the list. Therefore, at the end of the loop we KNOW the list
> is empty.

Right.

> 
> I will remove it here and in prep_and_add_allocated_folios.

Thanks.

> 
> Thanks,
> -- 
> Mike Kravetz