From: Jaewon Kim
Date: Tue, 17 May 2022 20:38:18 +0900
Subject: Re: [RFC PATCH] page_ext: create page extension for all memblock memory regions
To: Mike Rapoport
Cc: Andrew Morton, Jaewon Kim, Vlastimil Babka, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Joonsoo Kim
References: <20220509074330.4822-1-jaewon31.kim@samsung.com> <20220516173321.67402b7f09eacc43d4e476f4@linux-foundation.org>
Hello Mike Rapoport,

Thank you for your comment.

Oh really? Could you point out the code or the commit regarding 'all
struct pages in any section should be valid and properly initialized'?

Actually, I am using a v5.10-based source tree on an arm64 device.
I looked it up and found the following commit in v5.16-rc1; did you mean this one?

3de360c3fdb3 arm64/mm: drop HAVE_ARCH_PFN_VALID

I guess the memblock_is_memory() check in pfn_valid() in
arch/arm64/mm/init.c on v5.10 might affect page_ext_init().

Thank you

On Tue, May 17, 2022 at 5:25 PM Mike Rapoport wrote:
>
> On Mon, May 16, 2022 at 05:33:21PM -0700, Andrew Morton wrote:
> > On Mon, 9 May 2022 16:43:30 +0900 Jaewon Kim wrote:
> >
> > > The page extension can be prepared for each section. But if the first
> > > page is not valid, the page extension for the section was not
> > > initialized though there were many other valid pages within the section.
>
> What do you mean by "first page [in a section] is not valid"?
> In recent kernels all struct pages in any section should be valid and
> properly initialized.
>
> > > To support the page extension for all sections, refer to memblock memory
> > > regions. If the page is valid use the nid from pfn_to_nid, otherwise use
> > > the previous nid.
> > >
> > > Also this patch changed the log to include total sections and a section
> > > size.
> > >
> > > i.e.
> > > allocated 100663296 bytes of page_ext for 64 sections (1 section : 0x8000000)
> >
> > Cc Joonsoo, who wrote this code.
> > Cc Mike, for memblock.
> >
> > Thanks.
> >
> > >
> > > diff --git a/mm/page_ext.c b/mm/page_ext.c
> > > index 2e66d934d63f..506d58b36a1d 100644
> > > --- a/mm/page_ext.c
> > > +++ b/mm/page_ext.c
> > > @@ -381,41 +381,43 @@ static int __meminit page_ext_callback(struct notifier_block *self,
> > >  void __init page_ext_init(void)
> > >  {
> > >  	unsigned long pfn;
> > > -	int nid;
> > > +	int nid = 0;
> > > +	struct memblock_region *rgn;
> > > +	int nr_section = 0;
> > > +	unsigned long next_section_pfn = 0;
> > >
> > >  	if (!invoke_need_callbacks())
> > >  		return;
> > >
> > > -	for_each_node_state(nid, N_MEMORY) {
> > > +	/*
> > > +	 * iterate each memblock memory region and do not skip a section having
> > > +	 * !pfn_valid(pfn)
> > > +	 */
> > > +	for_each_mem_region(rgn) {
> > >  		unsigned long start_pfn, end_pfn;
> > >
> > > -		start_pfn = node_start_pfn(nid);
> > > -		end_pfn = node_end_pfn(nid);
> > > -		/*
> > > -		 * start_pfn and end_pfn may not be aligned to SECTION and the
> > > -		 * page->flags of out of node pages are not initialized.  So we
> > > -		 * scan [start_pfn, the biggest section's pfn < end_pfn) here.
> > > -		 */
> > > +		start_pfn = (unsigned long)(rgn->base >> PAGE_SHIFT);
> > > +		end_pfn = start_pfn + (unsigned long)(rgn->size >> PAGE_SHIFT);
> > > +
> > > +		if (start_pfn < next_section_pfn)
> > > +			start_pfn = next_section_pfn;
> > > +
> > >  		for (pfn = start_pfn; pfn < end_pfn;
> > >  			pfn = ALIGN(pfn + 1, PAGES_PER_SECTION)) {
> > >
> > > -			if (!pfn_valid(pfn))
> > > -				continue;
> > > -			/*
> > > -			 * Nodes's pfns can be overlapping.
> > > -			 * We know some arch can have a nodes layout such as
> > > -			 * -------------pfn-------------->
> > > -			 * N0 | N1 | N2 | N0 | N1 | N2|....
> > > -			 */
> > > -			if (pfn_to_nid(pfn) != nid)
> > > -				continue;
> > > +			if (pfn_valid(pfn))
> > > +				nid = pfn_to_nid(pfn);
> > > +			nr_section++;
> > >  			if (init_section_page_ext(pfn, nid))
> > >  				goto oom;
> > >  			cond_resched();
> > >  		}
> > > +		next_section_pfn = pfn;
> > >  	}
> > > +
> > >  	hotplug_memory_notifier(page_ext_callback, 0);
> > > -	pr_info("allocated %ld bytes of page_ext\n", total_usage);
> > > +	pr_info("allocated %ld bytes of page_ext for %d sections (1 section : 0x%x)\n",
> > > +		total_usage, nr_section, (1 << SECTION_SIZE_BITS));
> > >  	invoke_init_callbacks();
> > >  	return;
> > >
> > > --
> > > 2.17.1
> > >
>
> --
> Sincerely yours,
> Mike.
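For illustration, below is a minimal userspace sketch (not kernel code) of the section-stepping walk the patch proposes: it visits each memory region, advances pfn to the next section boundary with ALIGN(pfn + 1, PAGES_PER_SECTION), skips sections already covered by the previous region, and carries the last known nid forward when a section's first pfn has no valid struct page. The region layout, the 128 MiB section size, and the pfn_valid()/pfn_to_nid() stand-ins are invented for this example and are not the kernel's implementations.

/*
 * Minimal userspace sketch of the section-stepping iteration proposed by
 * the patch above.  All data here is made up for the example.
 */
#include <stdio.h>

#define PAGE_SHIFT		12
#define SECTION_SIZE_BITS	27	/* 0x8000000 bytes per section, as in the log line */
#define PAGES_PER_SECTION	(1UL << (SECTION_SIZE_BITS - PAGE_SHIFT))
#define ALIGN(x, a)		(((x) + (a) - 1) & ~((a) - 1))
#define NR_REGIONS		(sizeof(regions) / sizeof(regions[0]))

struct region { unsigned long start_pfn, end_pfn; int nid; };

/* Two example "memblock" regions; the second one starts mid-section. */
static struct region regions[] = {
	{ 0x80000, 0xa0000, 0 },
	{ 0xa4000, 0xc0000, 1 },
};

/* Stand-in for pfn_valid(): pretend pfns in [0xa0000, 0xa8000) have no
 * valid struct page (an arbitrary hole chosen for the example). */
static int pfn_valid(unsigned long pfn)
{
	return !(pfn >= 0xa0000 && pfn < 0xa8000);
}

/* Stand-in for pfn_to_nid(): derive the node id from the owning region. */
static int pfn_to_nid(unsigned long pfn)
{
	for (unsigned long i = 0; i < NR_REGIONS; i++)
		if (pfn >= regions[i].start_pfn && pfn < regions[i].end_pfn)
			return regions[i].nid;
	return -1;
}

int main(void)
{
	unsigned long pfn, next_section_pfn = 0;
	int nid = 0, nr_section = 0;

	for (unsigned long i = 0; i < NR_REGIONS; i++) {
		unsigned long start_pfn = regions[i].start_pfn;
		unsigned long end_pfn = regions[i].end_pfn;

		/* Do not revisit a section already covered by the previous region. */
		if (start_pfn < next_section_pfn)
			start_pfn = next_section_pfn;

		for (pfn = start_pfn; pfn < end_pfn;
		     pfn = ALIGN(pfn + 1, PAGES_PER_SECTION)) {
			if (pfn_valid(pfn))
				nid = pfn_to_nid(pfn);	/* otherwise keep the previous nid */
			nr_section++;
			printf("init page_ext for section at pfn 0x%lx on nid %d\n",
			       pfn, nid);
		}
		next_section_pfn = pfn;
	}
	printf("%d sections initialized\n", nr_section);
	return 0;
}

With this made-up layout the first section of the second region falls inside the pretend-invalid range, so it is still initialized, using the carried-over nid 0, rather than being skipped; that matches the "otherwise use the previous nid" behaviour described in the changelog.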