From: Muchun Song <songmuchun@bytedance.com>
Date: Thu, 24 Feb 2022 23:34:41 +0800
Subject: Re: [PATCH v6 4/5] mm/sparse-vmemmap: improve memory savings for compound devmaps
References: <20220223194807.12070-1-joao.m.martins@oracle.com>
 <20220223194807.12070-5-joao.m.martins@oracle.com>
 <25983812-c876-ae82-0125-515500959696@oracle.com>
In-Reply-To: <25983812-c876-ae82-0125-515500959696@oracle.com>
To: Joao Martins <joao.m.martins@oracle.com>
Cc: Linux Memory Management List <linux-mm@kvack.org>, Dan Williams, Vishal Verma,
 Matthew Wilcox, Jason Gunthorpe, Jane Chu, Mike Kravetz,
 Andrew Morton, Jonathan Corbet, Christoph Hellwig, nvdimm@lists.linux.dev,
 Linux Doc Mailing List

On Thu, Feb 24, 2022 at 7:47 PM Joao Martins wrote:
>
> On 2/24/22 05:54, Muchun Song wrote:
> > On Thu, Feb 24, 2022 at 3:48 AM Joao Martins wrote:
> >> diff --git a/include/linux/mm.h b/include/linux/mm.h
> >> index 5f549cf6a4e8..b0798b9c6a6a 100644
> >> --- a/include/linux/mm.h
> >> +++ b/include/linux/mm.h
> >> @@ -3118,7 +3118,7 @@ p4d_t *vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node);
> >>  pud_t *vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node);
> >>  pmd_t *vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node);
> >>  pte_t *vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
> >> -                            struct vmem_altmap *altmap);
> >> +                            struct vmem_altmap *altmap, struct page *block);
> >
> > You have forgotten to update @block to @reuse here.
> >
> Fixed.
>
> [...]
>
> >> +
> >> +static int __meminit vmemmap_populate_range(unsigned long start,
> >> +                                            unsigned long end,
> >> +                                            int node, struct page *page)
> >
> > All of the users are passing a valid parameter of @page. This function
> > will populate the vmemmap with the @page
>
> Yeap.
>
> > and without memory
> > allocations. So the @node parameter seems to be unnecessary.
> >
> I am a little bit afraid of making this logic more fragile by removing node.
> When we populate the tail vmemmap pages, we *may need* to populate a new PMD page.
> And we need the @node for those or anything preceding that (even though it's highly
> unlikely). It's just the PTE reuse that doesn't need node :(

Agree. So I suggest adding @altmap to vmemmap_populate_range() like
you have done as follows.

>
> > If you want to make this function more generic like
> > vmemmap_populate_address() to handle memory allocations
> > (the case of @page == NULL), I think vmemmap_populate_range()
> > should add another parameter of `struct vmem_altmap *altmap`.
>
> Oh, that's a nice cleanup/suggestion. I've moved vmemmap_populate_range() to be
> used by vmemmap_populate_basepages(), and deleted the duplication. I'll
> adjust the second patch for this cleanup, to avoid moving the same code
> over again between the two patches.
> I'll keep your Rb in the second patch; this is
> the diff to this version:
>
> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index 44cb77523003..1b30a82f285e 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -637,8 +637,9 @@ static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node,
>         return pte;
>  }
>
> -int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
> -                                        int node, struct vmem_altmap *altmap)
> +static int __meminit vmemmap_populate_range(unsigned long start,
> +                                           unsigned long end, int node,
> +                                           struct vmem_altmap *altmap)
>  {
>         unsigned long addr = start;
>         pte_t *pte;
> @@ -652,6 +653,12 @@ int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
>         return 0;
>  }
>
> +int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
> +                                        int node, struct vmem_altmap *altmap)
> +{
> +       return vmemmap_populate_range(start, end, node, altmap);
> +}
> +
>  struct page * __meminit __populate_section_memmap(unsigned long pfn,
>                 unsigned long nr_pages, int nid, struct vmem_altmap *altmap,
>                 struct dev_pagemap *pgmap)
>
> Meanwhile I'll adjust the other callers of vmemmap_populate_range() in this patch.

LGTM.

>
> > Otherwise, is it better to remove @node and rename @page to @reuse?
>
> I've kept the @node for now, due to the concern explained earlier, but
> renamed vmemmap_populate_range() to have its new argument be named @reuse.

Makes sense.
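
As a rough illustration of the interface the thread converges on — a
range-populate helper that takes an optional page to reuse, plus a thin
vmemmap_populate_basepages() wrapper — the userspace toy below sketches
the call structure only. Every name and the allocation logic in it are
invented for illustration; it is not the kernel implementation, which
also has to walk and allocate the intermediate page-table levels (the
reason @node is kept, as discussed above).

/*
 * Toy userspace analogy, NOT kernel code.  A "pte" is a slot in an
 * array and a "page" is a calloc'd buffer.  populate_range() fills
 * every slot in [start, end) either with a freshly allocated page
 * (reuse == NULL, the basepages case) or with one shared page (the
 * tail-vmemmap / compound devmap case), which is where the memory
 * saving comes from.
 */
#include <stdio.h>
#include <stdlib.h>

#define NR_SLOTS 8

static void *pte[NR_SLOTS];             /* stand-in for the vmemmap PTEs */

static void *alloc_page_on_node(int node)
{
        /* The real code needs @node to place new pages and page tables. */
        (void)node;
        return calloc(1, 64);
}

static int populate_range(unsigned long start, unsigned long end,
                          int node, void *reuse)
{
        for (unsigned long i = start; i < end; i++) {
                void *p = reuse ? reuse : alloc_page_on_node(node);

                if (!p)
                        return -1;
                pte[i] = p;
        }
        return 0;
}

/* Mirrors the wrapper in the diff: basepages == range with nothing reused. */
static int populate_basepages(unsigned long start, unsigned long end, int node)
{
        return populate_range(start, end, node, NULL);
}

int main(void)
{
        /* The head slot gets its own page... */
        populate_basepages(0, 1, 0);
        /* ...and all tail slots share it, analogous to passing @reuse. */
        populate_range(1, NR_SLOTS, 0, pte[0]);

        for (int i = 0; i < NR_SLOTS; i++)
                printf("slot %d -> %p%s\n", i, pte[i],
                       pte[i] == pte[0] ? " (reused)" : "");
        return 0;
}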