Subject: Re: [PATCH RFC 3/9] sparse-vmemmap: Reuse vmemmap areas for a given page size
From: Dan Williams
Date: Mon, 22 Feb 2021 14:40:40 -0800
To: Joao Martins
Cc: Linux MM, Ira Weiny, linux-nvdimm, Matthew Wilcox, Jason Gunthorpe,
 Jane Chu, Muchun Song, Mike Kravetz, Andrew Morton
In-Reply-To: <621ff98b-cb75-e4d7-8f09-882cb2b984d2@oracle.com>

On Mon, Feb 22, 2021 at 3:42 AM Joao Martins wrote:
>
> On 2/20/21 3:34 AM, Dan Williams wrote:
> > On Tue, Dec 8, 2020 at 9:32 AM Joao Martins wrote:
> >>
> >> Introduce a new flag, MEMHP_REUSE_VMEMMAP, which signals that
> >> struct pages are onlined with a given alignment, and should reuse the
> >> tail pages vmemmap areas. On that circunstamce we reuse the PFN backing
> >
> > s/On that circunstamce we reuse/Reuse/
> >
> > Kills a "we" and switches to imperative tense. I noticed a couple
> > other "we"s in the previous patches, but this crossed my threshold to
> > make a comment.
> >
> /me nods. Will fix.
>
> >> only the tail pages subsections, while letting the head page PFN remain
> >> different. This presumes that the backing page structs are compound
> >> pages, such as the case for compound pagemaps (i.e. ZONE_DEVICE with
> >> PGMAP_COMPOUND set)
> >>
> >> On 2M compound pagemaps, it lets us save 6 pages out of the 8 necessary
> >
> > s/lets us save/saves/
> >
> Will fix.
>
> >> PFNs necessary
> >
> > s/8 necessary PFNs necessary/8 PFNs necessary/
>
> Will fix.
>
> >> to describe the subsection's 32K struct pages we are
> >> onlining.
> >
> > s/we are onlining/being mapped/
> >
> > ...because ZONE_DEVICE pages are never "onlined".
> >
> >> On a 1G compound pagemap it let us save 4096 pages.
> >
> > s/lets us save/saves/
> >
> Will fix both.
>
> >>
> >> Sections are 128M (or bigger/smaller),
> >
> > Huh?
> >
> Section size is arch-dependent if we are being holistic.
> On x86 it's 64M, 128M or 512M right?
>
> #ifdef CONFIG_X86_32
> # ifdef CONFIG_X86_PAE
> #  define SECTION_SIZE_BITS	29
> #  define MAX_PHYSMEM_BITS	36
> # else
> #  define SECTION_SIZE_BITS	26
> #  define MAX_PHYSMEM_BITS	32
> # endif
> #else /* CONFIG_X86_32 */
> # define SECTION_SIZE_BITS	27 /* matt - 128 is convenient right now */
> # define MAX_PHYSMEM_BITS	(pgtable_l5_enabled() ? 52 : 46)
> #endif
>
> Also, my pointing out section sizes is because a 1GB+ page vmemmap
> population will cross sections in how sparsemem populates the vmemmap.
> In that case we have to reuse the PTE/PMD pages across multiple
> invocations of vmemmap_populate_basepages(). Either that, or look at
> the previous page's PTE, but that might be inefficient.

Ok, makes sense. I think this description of needing to handle section
crossing is clearer than mentioning one of the section sizes.
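FWIW, the 2M arithmetic above works out as follows (assuming the usual
4K base pages and 64-byte struct page; just a sanity check, not tied to
any particular config):

    2M / 4K        = 512 base pages per 2M compound page
    512 * 64 bytes = 32K of struct page data
    32K / 4K       = 8 vmemmap pages per 2M compound page

With reuse, only the head's vmemmap page plus one tail vmemmap page need
distinct backing, and the remaining 6 PTEs point at the reused tail
page, hence 6 of the 8 saved.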
> >> @@ -229,38 +235,95 @@ int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
> >>  	for (; addr < end; addr += PAGE_SIZE) {
> >>  		pgd = vmemmap_pgd_populate(addr, node);
> >>  		if (!pgd)
> >> -			return -ENOMEM;
> >> +			return NULL;
> >>  		p4d = vmemmap_p4d_populate(pgd, addr, node);
> >>  		if (!p4d)
> >> -			return -ENOMEM;
> >> +			return NULL;
> >>  		pud = vmemmap_pud_populate(p4d, addr, node);
> >>  		if (!pud)
> >> -			return -ENOMEM;
> >> +			return NULL;
> >>  		pmd = vmemmap_pmd_populate(pud, addr, node);
> >>  		if (!pmd)
> >> -			return -ENOMEM;
> >> -		pte = vmemmap_pte_populate(pmd, addr, node, altmap);
> >> +			return NULL;
> >> +		pte = vmemmap_pte_populate(pmd, addr, node, altmap, block);
> >>  		if (!pte)
> >> -			return -ENOMEM;
> >> +			return NULL;
> >>  		vmemmap_verify(pte, node, addr, addr + PAGE_SIZE);
> >>  	}
> >>
> >> +	return __va(__pfn_to_phys(pte_pfn(*pte)));
> >> +}
> >> +
> >> +int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
> >> +					 int node, struct vmem_altmap *altmap)
> >> +{
> >> +	if (!__vmemmap_populate_basepages(start, end, node, altmap, NULL))
> >> +		return -ENOMEM;
> >>  	return 0;
> >>  }
> >>
> >> +static struct page * __meminit vmemmap_populate_reuse(unsigned long start,
> >> +					unsigned long end, int node,
> >> +					struct vmem_context *ctx)
> >> +{
> >> +	unsigned long size, addr = start;
> >> +	unsigned long psize = PHYS_PFN(ctx->align) * sizeof(struct page);
> >> +
> >> +	size = min(psize, end - start);
> >> +
> >> +	for (; addr < end; addr += size) {
> >> +		unsigned long head = addr + PAGE_SIZE;
> >> +		unsigned long tail = addr;
> >> +		unsigned long last = addr + size;
> >> +		void *area;
> >> +
> >> +		if (ctx->block_page &&
> >> +		    IS_ALIGNED((addr - ctx->block_page), psize))
> >> +			ctx->block = NULL;
> >> +
> >> +		area = ctx->block;
> >> +		if (!area) {
> >> +			if (!__vmemmap_populate_basepages(addr, head, node,
> >> +							  ctx->altmap, NULL))
> >> +				return NULL;
> >> +
> >> +			tail = head + PAGE_SIZE;
> >> +			area = __vmemmap_populate_basepages(head, tail, node,
> >> +							    ctx->altmap, NULL);
> >> +			if (!area)
> >> +				return NULL;
> >> +
> >> +			ctx->block = area;
> >> +			ctx->block_page = addr;
> >> +		}
> >> +
> >> +		if (!__vmemmap_populate_basepages(tail, last, node,
> >> +						  ctx->altmap, area))
> >> +			return NULL;
> >> +	}
>
> > I think that compound page accounting and combined altmap accounting
> > make this difficult to read, and I think the compound page case
> > deserves its own first-class loop rather than reusing
> > vmemmap_populate_basepages(). With the suggestion to drop altmap
> > support I'd expect a vmemmap_populate_compound that takes a compound
> > page size and does the right thing with respect to mapping all the
> > tail pages to the same pfn.
>
> I can move this to a separate loop as suggested.
>
> But to be able to map all tail pages in one call of
> vmemmap_populate_compound() this might require changes in sparsemem
> generic code that I am not so sure are warranted by the added
> complexity. Otherwise I'll probably have to keep this logic of @ctx to
> be able to pass the page to be reused (i.e. @block and @block_page).
> That's actually the main reason that made me introduce a struct
> vmem_context.

Do you need to pass in a vmem_context? Isn't that context local to
vmemmap_populate_compound_pages()?
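Something like the following (completely untested) sketch is the shape
I have in mind. Note that __vmemmap_populate_page() is a made-up helper
here, not an existing function: populate a single vmemmap page at @addr,
reuse @reuse for the PTE when given instead of allocating, and return
the backing page:

static int __meminit vmemmap_populate_compound(unsigned long start,
		unsigned long end, int node, unsigned long align)
{
	/* vmemmap bytes spanned by one compound page of @align bytes */
	unsigned long size = PHYS_PFN(align) * sizeof(struct page);
	unsigned long addr;

	for (addr = start; addr < end; addr += size) {
		unsigned long next = addr + PAGE_SIZE;
		struct page *reuse;

		/* the head page gets its own backing page... */
		if (!__vmemmap_populate_page(addr, node, NULL))
			return -ENOMEM;

		/* ...as does the first tail page, which is then... */
		reuse = __vmemmap_populate_page(next, node, NULL);
		if (!reuse)
			return -ENOMEM;

		/* ...reused for every remaining tail page PTE */
		for (next += PAGE_SIZE; next < addr + size; next += PAGE_SIZE)
			if (!__vmemmap_populate_page(next, node, reuse))
				return -ENOMEM;
	}

	return 0;
}

That keeps the reuse window local to the loop, so no vmem_context is
needed, at least for ranges that don't cross a section boundary.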