Date: Wed, 16 Oct 2024 14:57:59 +0300
From: Mike Rapoport <rppt@kernel.org>
To: Su Hua
Cc: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org, suhua
Subject: Re: [PATCH v1] memblock: Initialized the memory of memblock.reserve to the MIGRATE_MOVABL
References: <20240925110235.3157-1-suhua1@kingsoft.com>

Hi,

On Sat, Oct 12, 2024 at 11:55:31AM +0800, Su Hua wrote:
> Hi Mike,
>
> Thanks for your advice and sorry for taking so long to reply.

Please don't top-post on the Linux kernel mailing lists.

> I looked at the logic again. deferred_init_pages is currently used to
> handle all (memory && !reserved) memblock areas and put that memory
> into the buddy allocator.
> Changing it to also handle reserved memory may involve more code
> changes. I wonder if I can change the commit message to: this patch
> mainly sets the migration type to MIGRATE_MOVABLE when reserved pages
> are initialized, regardless of whether CONFIG_DEFERRED_STRUCT_PAGE_INIT
> is set or not.
>
> When CONFIG_DEFERRED_STRUCT_PAGE_INIT is not set, reserved memblock
> regions are initialized to MIGRATE_MOVABLE by default when memmap_init
> initializes the memory.

This should be more clearly emphasized in the commit message.
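For reference, whether CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled on a given
machine can be checked from the kernel config. A minimal sketch, assuming the
config is exposed at one of the usual locations (/proc/config.gz or
/boot/config-$(uname -r); both paths are distribution-dependent and some
distributions ship neither):

```python
#!/usr/bin/env python3
# Report whether CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled in the running
# kernel. Assumes the config is available at one of the usual locations.
import gzip
import os
import platform

OPTION = "CONFIG_DEFERRED_STRUCT_PAGE_INIT"

def read_config():
    proc = "/proc/config.gz"
    boot = f"/boot/config-{platform.release()}"
    if os.path.exists(proc):
        with gzip.open(proc, "rt") as f:
            return f.read()
    if os.path.exists(boot):
        with open(boot) as f:
            return f.read()
    raise FileNotFoundError("kernel config not found")

def main():
    for line in read_config().splitlines():
        # Matches either "CONFIG_...=y" or "# CONFIG_... is not set".
        if line.startswith(OPTION + "=") or line.startswith("# " + OPTION):
            print(line)
            return
    print(f"{OPTION} not present in this kernel's config")

if __name__ == "__main__":
    main()
```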
> Sincerely yours,
> Su
>
> Mike Rapoport wrote on Sunday, September 29, 2024 at 17:18:
> >
> > On Wed, Sep 25, 2024 at 07:02:35PM +0800, suhua wrote:
> > > After the sparse_init function requests memory for struct page from
> > > memblock and adds it to memblock.reserved, this memory area is present in
> > > both memblock.memory and memblock.reserved.
> > >
> > > When CONFIG_DEFERRED_STRUCT_PAGE_INIT is not set, the memmap_init function
> > > is called during the initialization of the zone's free area. This function
> > > calls for_each_mem_pfn_range to initialize all of memblock.memory,
> > > including memory that is also placed in memblock.reserved, such as the
> > > struct page metadata that describes the pages (for 1TB of memory this is
> > > about 16GB, and generally this part of the reserved memory occupies more
> > > than 90% of the total reserved memory of the system). So all memory in
> > > memblock.memory is set to MIGRATE_MOVABLE according to the alignment of
> > > pageblock_nr_pages. For example, if hugetlb_optimize_vmemmap=1 and huge
> > > pages are allocated, the freed pages are placed on buddy's MIGRATE_MOVABL
> > > list for use.
> >
> > Please make sure you spell MIGRATE_MOVABLE and MIGRATE_UNMOVABLE correctly.
> >
> > > When CONFIG_DEFERRED_STRUCT_PAGE_INIT=y, only the range up to
> > > first_deferred_pfn is initialized in memmap_init. The subsequent
> > > free_low_memory_core_early initializes all memblock.reserved memory, but
> > > not as MIGRATE_MOVABL. All memblock.memory is set to MIGRATE_MOVABL when
> > > it is placed in buddy via free_low_memory_core_early and
> > > deferred_init_memmap. As a result, when hugetlb_optimize_vmemmap=1 and
> > > huge pages are allocated, the freed pages will be placed on buddy's
> > > MIGRATE_UNMOVABL list (for example, on machines with 1TB of memory,
> > > allocating 1000GB of 2MB huge pages frees up about 15GB to
> > > MIGRATE_UNMOVABL). Since a huge page allocation requires a MIGRATE_MOVABL
> > > page, a fallback is performed to allocate memory from MIGRATE_UNMOVABL
> > > for MIGRATE_MOVABL.
> > >
> > > A large amount of UNMOVABL memory is not conducive to defragmentation, so
> > > the reserved memory is also set to MIGRATE_MOVABLE in the
> > > free_low_memory_core_early phase, following the alignment of
> > > pageblock_nr_pages.
> > >
> > > Eg:
> > > echo 500000 > /proc/sys/vm/nr_hugepages
> > > cat /proc/pagetypeinfo
> > >
> > > before:
> > > Free pages count per migrate type at order       0      1      2      3      4      5      6      7      8      9     10
> > > …
> > > Node    0, zone   Normal, type    Unmovable     51      2      1     28     53     35     35     43     40     69   3852
> > > Node    0, zone   Normal, type      Movable   6485   4610    666    202    200    185    208     87     54      2    240
> > > Node    0, zone   Normal, type  Reclaimable      2      2      1     23     13      1      2      1      0      1      0
> > > Node    0, zone   Normal, type   HighAtomic      0      0      0      0      0      0      0      0      0      0      0
> > > Node    0, zone   Normal, type      Isolate      0      0      0      0      0      0      0      0      0      0      0
> > > Unmovable ≈ 15GB
> > >
> > > after:
> > > Free pages count per migrate type at order       0      1      2      3      4      5      6      7      8      9     10
> > > …
> > > Node    0, zone   Normal, type    Unmovable      0      1      1      0      0      0      0      1      1      1      0
> > > Node    0, zone   Normal, type      Movable   1563   4107   1119    189    256    368    286    132    109      4   3841
> > > Node    0, zone   Normal, type  Reclaimable      2      2      1     23     13      1      2      1      0      1      0
> > > Node    0, zone   Normal, type   HighAtomic      0      0      0      0      0      0      0      0      0      0      0
> > > Node    0, zone   Normal, type      Isolate      0      0      0      0      0      0      0      0      0      0      0
> > >
> > > Signed-off-by: suhua
> > > ---
> > >  mm/mm_init.c | 6 ++++++
> > >  1 file changed, 6 insertions(+)
> > >
> > > diff --git a/mm/mm_init.c b/mm/mm_init.c
> > > index 4ba5607aaf19..e0190e3f8f26 100644
> > > --- a/mm/mm_init.c
> > > +++ b/mm/mm_init.c
> > > @@ -722,6 +722,12 @@ static void __meminit init_reserved_page(unsigned long pfn, int nid)
> > >  		if (zone_spans_pfn(zone, pfn))
> > >  			break;
> > >  	}
> > > +
> > > +	if (pageblock_aligned(pfn)) {
> > > +		set_pageblock_migratetype(pfn_to_page(pfn), MIGRATE_MOVABLE);
> > > +		cond_resched();

No need to call cond_resched() here

> > > +	}
> > > +
> > >  	__init_single_page(pfn_to_page(pfn), pfn, zid, nid);
> > >  }
> > >  #else
> > > --
> > > 2.34.1
> >
> > --
> > Sincerely yours,
> > Mike.

--
Sincerely yours,
Mike.
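As a rough cross-check of the sizes quoted in the commit message above, the
sketch below redoes the arithmetic behind the "about 16GB" of struct page per
1TB and the memory freed by hugetlb_optimize_vmemmap. The 4 KiB base page, the
64-byte struct page, and HVO freeing 7 of the 8 vmemmap pages behind each 2 MiB
huge page are assumed typical x86_64 values, not figures stated in the thread;
the result for 500000 huge pages lands in the same ballpark as the "about 15GB"
quoted there.

```python
#!/usr/bin/env python3
# Back-of-the-envelope check of the sizes mentioned in the commit message.
# Assumptions (typical x86_64 values): 4 KiB base pages, 64-byte struct page,
# and hugetlb_optimize_vmemmap=1 freeing 7 of the 8 vmemmap pages that back
# each 2 MiB huge page.
PAGE_SIZE = 4096
STRUCT_PAGE_SIZE = 64
HUGEPAGE_SIZE = 2 * 1024 * 1024
GiB = 1 << 30

# struct page metadata needed to describe 1 TiB of memory (~16 GiB).
mem = 1 << 40
vmemmap = mem // PAGE_SIZE * STRUCT_PAGE_SIZE
print(f"struct page for 1 TiB: {vmemmap / GiB:.1f} GiB")

# vmemmap freed by HVO when 500000 huge pages are allocated
# (the `echo 500000 > nr_hugepages` example from the thread).
nr_hugepages = 500000
vmemmap_per_hugepage = HUGEPAGE_SIZE // PAGE_SIZE * STRUCT_PAGE_SIZE  # 32 KiB
freed = nr_hugepages * vmemmap_per_hugepage * 7 // 8
print(f"vmemmap freed by HVO for {nr_hugepages} huge pages: {freed / GiB:.1f} GiB")
```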
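The before/after tables come from /proc/pagetypeinfo. A small parser that
aggregates the per-order free counts into bytes per migrate type, the way the
"Unmovable ≈ 15GB" estimate above is obtained, might look like the following
sketch; it assumes 4 KiB base pages and sums across all nodes and zones
(reading the file usually requires root).

```python
#!/usr/bin/env python3
# Summarize free memory per migrate type from /proc/pagetypeinfo.
# Each column in the "Free pages count per migrate type at order" table is a
# count of free blocks of order n, i.e. count * 2**n base pages.
from collections import defaultdict

PAGE_SIZE = 4096  # assumed 4 KiB base pages

def migrate_type_totals(path="/proc/pagetypeinfo"):
    totals = defaultdict(int)  # migrate type -> bytes of free memory
    with open(path) as f:
        for line in f:
            # Only the per-zone rows of the free-pages table contain ", type";
            # the "Number of blocks type ..." section is skipped.
            if not line.startswith("Node") or ", type" not in line:
                continue
            # e.g. "Node    0, zone   Normal, type    Unmovable     51      2 ..."
            _, _, tail = line.partition(", type")
            fields = tail.split()
            mtype, counts = fields[0], fields[1:]
            for order, count in enumerate(counts):
                totals[mtype] += int(count) * (1 << order) * PAGE_SIZE
    return totals

if __name__ == "__main__":
    for mtype, nbytes in sorted(migrate_type_totals().items()):
        print(f"{mtype:12s} {nbytes / 2**30:8.2f} GiB")
```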