From: Oscar Salvador <osalvador@techadventures.net>
To: Baoquan He <bhe@redhat.com>
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org,
dave.hansen@intel.com, pagupta@redhat.com,
Pavel Tatashin <pasha.tatashin@oracle.com>,
linux-mm@kvack.org, kirill.shutemov@linux.intel.com
Subject: Re: [PATCH v6 2/5] mm/sparsemem: Defer the ms->section_mem_map clearing
Date: Thu, 28 Jun 2018 13:19:54 +0200 [thread overview]
Message-ID: <20180628111954.GA12956@techadventures.net> (raw)
In-Reply-To: <20180628062857.29658-3-bhe@redhat.com>
On Thu, Jun 28, 2018 at 02:28:54PM +0800, Baoquan He wrote:
> In sparse_init(), if CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER=y, the
> system allocates one contiguous memory chunk for the mem maps on each
> node and populates the relevant page tables to map the memory sections
> one by one. If populating a certain mem section fails, a warning is
> printed and its ->section_mem_map is cleared to cancel its being marked
> as present. As a result, the number of mem sections marked as present
> can decrease during sparse_init() execution.
>
> Defer the clearing of ms->section_mem_map, when populating its page
> tables fails, until the final for_each_present_section_nr() loop. This
> prepares for a later optimization of the mem map allocation.
>
> Signed-off-by: Baoquan He <bhe@redhat.com>
> Reviewed-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Looks good to me.
Reviewed-by: Oscar Salvador <osalvador@suse.de>
> ---
> mm/sparse-vmemmap.c | 4 ----
> mm/sparse.c | 15 ++++++++-------
> 2 files changed, 8 insertions(+), 11 deletions(-)
>
> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index bd0276d5f66b..68bb65b2d34d 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -292,18 +292,14 @@ void __init sparse_mem_maps_populate_node(struct page **map_map,
> }
>
> for (pnum = pnum_begin; pnum < pnum_end; pnum++) {
> - struct mem_section *ms;
> -
> if (!present_section_nr(pnum))
> continue;
>
> map_map[pnum] = sparse_mem_map_populate(pnum, nodeid, NULL);
> if (map_map[pnum])
> continue;
> - ms = __nr_to_section(pnum);
> pr_err("%s: sparsemem memory map backing failed some memory will not be available\n",
> __func__);
> - ms->section_mem_map = 0;
> }
>
> if (vmemmap_buf_start) {
> diff --git a/mm/sparse.c b/mm/sparse.c
> index 6314303130b0..6a706093307d 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -441,17 +441,13 @@ void __init sparse_mem_maps_populate_node(struct page **map_map,
>
> /* fallback */
> for (pnum = pnum_begin; pnum < pnum_end; pnum++) {
> - struct mem_section *ms;
> -
> if (!present_section_nr(pnum))
> continue;
> map_map[pnum] = sparse_mem_map_populate(pnum, nodeid, NULL);
> if (map_map[pnum])
> continue;
> - ms = __nr_to_section(pnum);
> pr_err("%s: sparsemem memory map backing failed some memory will not be available\n",
> __func__);
> - ms->section_mem_map = 0;
> }
> }
> #endif /* !CONFIG_SPARSEMEM_VMEMMAP */
> @@ -479,7 +475,6 @@ static struct page __init *sparse_early_mem_map_alloc(unsigned long pnum)
>
> pr_err("%s: sparsemem memory map backing failed some memory will not be available\n",
> __func__);
> - ms->section_mem_map = 0;
> return NULL;
> }
> #endif
> @@ -583,17 +578,23 @@ void __init sparse_init(void)
> #endif
>
> for_each_present_section_nr(0, pnum) {
> + struct mem_section *ms;
> + ms = __nr_to_section(pnum);
> usemap = usemap_map[pnum];
> - if (!usemap)
> + if (!usemap) {
> + ms->section_mem_map = 0;
> continue;
> + }
>
> #ifdef CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER
> map = map_map[pnum];
> #else
> map = sparse_early_mem_map_alloc(pnum);
> #endif
> - if (!map)
> + if (!map) {
> + ms->section_mem_map = 0;
> continue;
> + }
>
> sparse_init_one_section(__nr_to_section(pnum), pnum, map,
> usemap);
> --
> 2.13.6
>
--
Oscar Salvador
SUSE L3