From: Jaewon Kim
Date: Tue, 17 May 2022 09:01:10 +0900
Subject: Re: [RFC PATCH] page_ext: create page extension for all memblock memory regions
To: Jaewon Kim, Joonsoo Kim, Vlastimil Babka
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20220509074330.4822-1-jaewon31.kim@samsung.com>
Hello guys, could you look into this patch?

On Tue, May 10, 2022 at 9:00 AM, Jaewon Kim wrote:
>
> let me add Joonsoo Kim
>
> On Mon, May 9, 2022 at 4:39 PM, Jaewon Kim wrote:
> >
> > Page extension is prepared per section. But if the first page of a
> > section is not valid, page extension for that section is not
> > initialized, even though there are many other valid pages within the
> > section.
> >
> > To support page extension for all sections, iterate over the memblock
> > memory regions instead. If a page is valid, use the nid from
> > pfn_to_nid(); otherwise reuse the previous nid.
> >
> > This patch also changes the log message to include the total number
> > of sections and the section size, e.g.:
> >
> > allocated 100663296 bytes of page_ext for 64 sections (1 section : 0x8000000)
> >
> > Signed-off-by: Jaewon Kim
> > ---
> >  mm/page_ext.c | 42 ++++++++++++++++++++++--------------------
> >  1 file changed, 22 insertions(+), 20 deletions(-)
> >
> > diff --git a/mm/page_ext.c b/mm/page_ext.c
> > index 2e66d934d63f..506d58b36a1d 100644
> > --- a/mm/page_ext.c
> > +++ b/mm/page_ext.c
> > @@ -381,41 +381,43 @@ static int __meminit page_ext_callback(struct notifier_block *self,
> >  void __init page_ext_init(void)
> >  {
> >         unsigned long pfn;
> > -       int nid;
> > +       int nid = 0;
> > +       struct memblock_region *rgn;
> > +       int nr_section = 0;
> > +       unsigned long next_section_pfn = 0;
> >
> >         if (!invoke_need_callbacks())
> >                 return;
> >
> > -       for_each_node_state(nid, N_MEMORY) {
> > +       /*
> > +        * iterate each memblock memory region and do not skip a section having
> > +        * !pfn_valid(pfn)
> > +        */
> > +       for_each_mem_region(rgn) {
> >                 unsigned long start_pfn, end_pfn;
> >
> > -               start_pfn = node_start_pfn(nid);
> > -               end_pfn = node_end_pfn(nid);
> > -               /*
> > -                * start_pfn and end_pfn may not be aligned to SECTION and the
> > -                * page->flags of out of node pages are not initialized.  So we
> > -                * scan [start_pfn, the biggest section's pfn < end_pfn) here.
> > -                */
> > +               start_pfn = (unsigned long)(rgn->base >> PAGE_SHIFT);
> > +               end_pfn = start_pfn + (unsigned long)(rgn->size >> PAGE_SHIFT);
> > +
> > +               if (start_pfn < next_section_pfn)
> > +                       start_pfn = next_section_pfn;
> > +
> >                 for (pfn = start_pfn; pfn < end_pfn;
> >                         pfn = ALIGN(pfn + 1, PAGES_PER_SECTION)) {
> >
> > -                       if (!pfn_valid(pfn))
> > -                               continue;
> > -                       /*
> > -                        * Nodes's pfns can be overlapping.
> > -                        * We know some arch can have a nodes layout such as
> > -                        * -------------pfn-------------->
> > -                        * N0 | N1 | N2 | N0 | N1 | N2|....
> > -                        */
> > -                       if (pfn_to_nid(pfn) != nid)
> > -                               continue;
> > +                       if (pfn_valid(pfn))
> > +                               nid = pfn_to_nid(pfn);
> > +                       nr_section++;
> >                         if (init_section_page_ext(pfn, nid))
> >                                 goto oom;
> >                         cond_resched();
> >                 }
> > +               next_section_pfn = pfn;
> >         }
> > +
> >         hotplug_memory_notifier(page_ext_callback, 0);
> > -       pr_info("allocated %ld bytes of page_ext\n", total_usage);
> > +       pr_info("allocated %ld bytes of page_ext for %d sections (1 section : 0x%x)\n",
> > +               total_usage, nr_section, (1 << SECTION_SIZE_BITS));
> >         invoke_init_callbacks();
> >         return;
> >
> > --
> > 2.17.1
> >
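
For anyone skimming the thread, below is a minimal userspace sketch (not
part of the patch) of the new loop: it walks each region in section-sized
steps, skips sections already covered by an earlier region via
next_section_pfn, and carries the last known nid forward when a pfn is not
valid. The region layout, the SECTION_SIZE_BITS value, and the
pfn_valid()/pfn_to_nid() stubs are made up for illustration, and it assumes
a 64-bit build (unsigned long is 64-bit).

/*
 * Standalone sketch of the iteration logic in the patch above; not kernel
 * code. Regions and nid mapping are invented for the example.
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT              12
#define SECTION_SIZE_BITS       27      /* 128MB sections, as in the example log */
#define PAGES_PER_SECTION       (1UL << (SECTION_SIZE_BITS - PAGE_SHIFT))
#define ALIGN(x, a)             (((x) + (a) - 1) & ~((a) - 1))
#define ARRAY_SIZE(a)           (sizeof(a) / sizeof((a)[0]))

struct region { unsigned long base, size; int nid; };

/* hypothetical layout: two regions on node 0, one on node 1, holes between */
static const struct region regions[] = {
        { 0x080000000UL, 0x040000000UL, 0 },
        { 0x0c0000000UL, 0x020000000UL, 0 },
        { 0x100000000UL, 0x080000000UL, 1 },
};

static bool pfn_valid(unsigned long pfn)
{
        for (size_t i = 0; i < ARRAY_SIZE(regions); i++) {
                unsigned long start = regions[i].base >> PAGE_SHIFT;
                unsigned long end = start + (regions[i].size >> PAGE_SHIFT);

                if (pfn >= start && pfn < end)
                        return true;
        }
        return false;
}

static int pfn_to_nid(unsigned long pfn)
{
        for (size_t i = 0; i < ARRAY_SIZE(regions); i++) {
                unsigned long start = regions[i].base >> PAGE_SHIFT;
                unsigned long end = start + (regions[i].size >> PAGE_SHIFT);

                if (pfn >= start && pfn < end)
                        return regions[i].nid;
        }
        return -1;
}

int main(void)
{
        unsigned long pfn, next_section_pfn = 0;
        int nid = 0, nr_section = 0;

        for (size_t i = 0; i < ARRAY_SIZE(regions); i++) {
                unsigned long start_pfn = regions[i].base >> PAGE_SHIFT;
                unsigned long end_pfn = start_pfn + (regions[i].size >> PAGE_SHIFT);

                /* do not visit a section twice when regions share one */
                if (start_pfn < next_section_pfn)
                        start_pfn = next_section_pfn;

                /* one step per section, even if some pfns in it are invalid */
                for (pfn = start_pfn; pfn < end_pfn;
                     pfn = ALIGN(pfn + 1, PAGES_PER_SECTION)) {
                        if (pfn_valid(pfn))
                                nid = pfn_to_nid(pfn); /* else keep previous nid */
                        nr_section++;
                        printf("init section at pfn 0x%lx on nid %d\n", pfn, nid);
                }
                next_section_pfn = pfn;
        }

        printf("%d sections (1 section : 0x%x)\n", nr_section, 1 << SECTION_SIZE_BITS);
        return 0;
}

With this made-up layout it prints one line per section (28 in total); the
point is only that every section touched by a region is set up once, and a
section whose first pfn is invalid falls back to the previously seen nid.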