From: Catalin Marinas <catalin.marinas@arm.com>
To: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, akpm@linux-foundation.org,
will@kernel.org, mark.rutland@arm.com, mhocko@suse.com,
david@redhat.com, cai@lca.pw, logang@deltatee.com,
cpandya@codeaurora.org, arunks@codeaurora.org,
dan.j.williams@intel.com, mgorman@techsingularity.net,
osalvador@suse.de, ard.biesheuvel@arm.com, steve.capper@arm.com,
broonie@kernel.org, valentin.schneider@arm.com,
Robin.Murphy@arm.com, steven.price@arm.com,
suzuki.poulose@arm.com, ira.weiny@intel.com
Subject: Re: [PATCH V8 2/2] arm64/mm: Enable memory hot remove
Date: Mon, 7 Oct 2019 15:17:38 +0100 [thread overview]
Message-ID: <20191007141738.GA93112@E120351.arm.com> (raw)
In-Reply-To: <1569217425-23777-3-git-send-email-anshuman.khandual@arm.com>
On Mon, Sep 23, 2019 at 11:13:45AM +0530, Anshuman Khandual wrote:
> The arch code for hot-remove must tear down portions of the linear map and
> vmemmap corresponding to memory being removed. In both cases the page
> tables mapping these regions must be freed, and when sparse vmemmap is in
> use the memory backing the vmemmap must also be freed.
>
> This patch adds unmap_hotplug_range() and free_empty_tables() helpers which
> can be used to tear down either region and calls it from vmemmap_free() and
> ___remove_pgd_mapping(). The sparse_vmap argument determines whether the
> backing memory will be freed.
Can you change the 'sparse_vmap' name to something more meaningful which
would suggest freeing of the backing memory?
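Just as a suggestion, something along these lines (hypothetical name):

```c
static void unmap_hotplug_range(unsigned long addr, unsigned long end,
				bool free_mapped);
```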
> It makes two distinct passes over the kernel page table. In the first pass
> with unmap_hotplug_range() it unmaps each mapped leaf entry, invalidates
> the applicable TLB entries, and frees the backing memory if required (for
> vmemmap). In the second pass with free_empty_tables() it looks for empty
> page table sections whose backing page table pages can be unmapped, their
> TLB entries invalidated and the pages freed.
>
> While freeing an intermediate level page table page, bail out if any of
> its entries are still valid. This can happen for a partially filled kernel
> page table, either from a previously failed memory hot add attempt or
> while removing an address range which does not span the entire page table
> page range.
>
> The vmemmap region may share levels of table with the vmalloc region.
> There can be a conflict between hot remove freeing page table pages and a
> concurrent vmalloc() walking the kernel page table. This conflict cannot
> be solved simply by taking the init_mm ptl because of the existing locking
> scheme in vmalloc(). So free_empty_tables() implements a floor and ceiling
> method, borrowed from the user page table tear down in free_pgd_range(),
> which skips freeing a page table page if the intermediate address range is
> not aligned or the floor/ceiling limits do not cover the entire page table
> page range.
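For reference, the floor/ceiling check described above can be modelled in
plain userspace C as below. This is a hypothetical model, not the patch's
exact helper: here 'mask' is the span covered by one table page minus 1
(e.g. a 2MiB PMD span gives mask 0x1FFFFF), and range_aligned() is an
illustrative name.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical userspace model of the floor/ceiling check: only report
 * the table page as freeable when the [addr, end) range, rounded out to
 * the span covered by that page, still lies within [floor, ceiling).
 */
static bool range_aligned(unsigned long addr, unsigned long end,
			  unsigned long floor, unsigned long ceiling,
			  unsigned long mask)
{
	unsigned long start = addr & ~mask;	/* round addr down to the span */

	if (start < floor)
		return false;

	if (ceiling) {
		ceiling &= ~mask;		/* round ceiling down */
		if (!ceiling)
			return false;
	}

	/* with ceiling == 0, ceiling - 1 wraps to ULONG_MAX: no upper limit */
	if (end - 1 > ceiling - 1)
		return false;
	return true;
}
```

With no floor/ceiling constraint a fully covered 2MiB span is freeable; a
floor above the span's start, or a ceiling inside it, blocks the free.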
>
> While here, update arch_add_memory() to handle __add_pages() failures by
> unmapping the recently added kernel linear mapping. Now enable memory hot
> remove on arm64 platforms by default with ARCH_ENABLE_MEMORY_HOTREMOVE.
>
> This implementation is broadly inspired by the kernel page table tear
> down procedure on x86 and by the user page table tear down method.
>
> Acked-by: Steve Capper <steve.capper@arm.com>
> Acked-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Given the amount of changes since version 7, do the acks still stand?
[...]
> +static void free_pte_table(pmd_t *pmdp, unsigned long addr, unsigned long end,
> + unsigned long floor, unsigned long ceiling)
> +{
> + struct page *page;
> + pte_t *ptep;
> + int i;
> +
> + if (!pgtable_range_aligned(addr, end, floor, ceiling, PMD_MASK))
> + return;
> +
> + ptep = pte_offset_kernel(pmdp, 0UL);
> + for (i = 0; i < PTRS_PER_PTE; i++) {
> + if (!pte_none(READ_ONCE(ptep[i])))
> + return;
> + }
> +
> + page = pmd_page(READ_ONCE(*pmdp));
Arguably, that's not the pmd page we are freeing here. Even if you get
the same result, pmd_page() is normally used for huge pages pointed at
by the pmd entry. Since you have the ptep already, why not use
virt_to_page(ptep)?
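That is, something like (untested):

```c
	page = virt_to_page(ptep);	/* the pte table page itself */
```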
> + pmd_clear(pmdp);
> + __flush_tlb_kernel_pgtable(addr);
> + free_hotplug_pgtable_page(page);
> +}
> +
> +static void free_pmd_table(pud_t *pudp, unsigned long addr, unsigned long end,
> + unsigned long floor, unsigned long ceiling)
> +{
> + struct page *page;
> + pmd_t *pmdp;
> + int i;
> +
> + if (CONFIG_PGTABLE_LEVELS <= 2)
> + return;
> +
> + if (!pgtable_range_aligned(addr, end, floor, ceiling, PUD_MASK))
> + return;
> +
> + pmdp = pmd_offset(pudp, 0UL);
> + for (i = 0; i < PTRS_PER_PMD; i++) {
> + if (!pmd_none(READ_ONCE(pmdp[i])))
> + return;
> + }
> +
> + page = pud_page(READ_ONCE(*pudp));
Same here, virt_to_page(pmdp).
> + pud_clear(pudp);
> + __flush_tlb_kernel_pgtable(addr);
> + free_hotplug_pgtable_page(page);
> +}
> +
> +static void free_pud_table(pgd_t *pgdp, unsigned long addr, unsigned long end,
> + unsigned long floor, unsigned long ceiling)
> +{
> + struct page *page;
> + pud_t *pudp;
> + int i;
> +
> + if (CONFIG_PGTABLE_LEVELS <= 3)
> + return;
> +
> + if (!pgtable_range_aligned(addr, end, floor, ceiling, PGDIR_MASK))
> + return;
> +
> + pudp = pud_offset(pgdp, 0UL);
> + for (i = 0; i < PTRS_PER_PUD; i++) {
> + if (!pud_none(READ_ONCE(pudp[i])))
> + return;
> + }
> +
> + page = pgd_page(READ_ONCE(*pgdp));
As above.
> + pgd_clear(pgdp);
> + __flush_tlb_kernel_pgtable(addr);
> + free_hotplug_pgtable_page(page);
> +}
> +
> +static void unmap_hotplug_pte_range(pmd_t *pmdp, unsigned long addr,
> + unsigned long end, bool sparse_vmap)
> +{
> + struct page *page;
> + pte_t *ptep, pte;
> +
> + do {
> + ptep = pte_offset_kernel(pmdp, addr);
> + pte = READ_ONCE(*ptep);
> + if (pte_none(pte))
> + continue;
> +
> + WARN_ON(!pte_present(pte));
> + page = sparse_vmap ? pte_page(pte) : NULL;
> + pte_clear(&init_mm, addr, ptep);
> + flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> + if (sparse_vmap)
> + free_hotplug_page_range(page, PAGE_SIZE);
You could only set 'page' if sparse_vmap (or even drop 'page' entirely).
The compiler is probably smart enough to optimise it but using a
pointless ternary operator just makes the code harder to follow.
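For example (untested), dropping 'page' entirely:

```c
		pte_clear(&init_mm, addr, ptep);
		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
		if (sparse_vmap)
			free_hotplug_page_range(pte_page(pte), PAGE_SIZE);
```

pte_page() operates on the saved 'pte' value, so it is still valid after
the entry has been cleared and the local variable is not needed.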
> + } while (addr += PAGE_SIZE, addr < end);
> +}
[...]
> +static void free_empty_pte_table(pmd_t *pmdp, unsigned long addr,
> + unsigned long end)
> +{
> + pte_t *ptep, pte;
> +
> + do {
> + ptep = pte_offset_kernel(pmdp, addr);
> + pte = READ_ONCE(*ptep);
> + WARN_ON(!pte_none(pte));
> + } while (addr += PAGE_SIZE, addr < end);
> +}
> +
> +static void free_empty_pmd_table(pud_t *pudp, unsigned long addr,
> + unsigned long end, unsigned long floor,
> + unsigned long ceiling)
> +{
> + unsigned long next;
> + pmd_t *pmdp, pmd;
> +
> + do {
> + next = pmd_addr_end(addr, end);
> + pmdp = pmd_offset(pudp, addr);
> + pmd = READ_ONCE(*pmdp);
> + if (pmd_none(pmd))
> + continue;
> +
> + WARN_ON(!pmd_present(pmd) || !pmd_table(pmd) || pmd_sect(pmd));
> + free_empty_pte_table(pmdp, addr, next);
> + free_pte_table(pmdp, addr, next, floor, ceiling);
Do we need two closely named functions here? Can you not collapse
free_empty_pte_table() and free_pte_table() into a single one? The same
comment for the pmd/pud variants. I just find this confusing.
> + } while (addr = next, addr < end);
You could make these functions work in two steps: first, as above, invoke
the next level recursively; second, after the do..while loop, check
whether the table is empty and free the pmd page as free_pmd_table() does.
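Roughly like this (untested sketch; assumes free_empty_pte_table() grows
floor/ceiling arguments and does the freeing itself):

```c
static void free_empty_pmd_table(pud_t *pudp, unsigned long addr,
				 unsigned long end, unsigned long floor,
				 unsigned long ceiling)
{
	unsigned long next, start = addr;
	pmd_t *pmdp, pmd;

	do {
		next = pmd_addr_end(addr, end);
		pmdp = pmd_offset(pudp, addr);
		pmd = READ_ONCE(*pmdp);
		if (pmd_none(pmd))
			continue;

		WARN_ON(!pmd_present(pmd) || !pmd_table(pmd) || pmd_sect(pmd));
		/* step 1: recurse, freeing the pte table if it became empty */
		free_empty_pte_table(pmdp, addr, next, floor, ceiling);
	} while (addr = next, addr < end);

	/* step 2: free this pmd table if the whole range is now empty */
	if (!pgtable_range_aligned(start, end, floor, ceiling, PUD_MASK))
		return;
	/* ... check entries are pmd_none(), then clear, flush and free ... */
}
```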
> +}
[...]
--
Catalin
Thread overview: 10+ messages
2019-09-23 5:43 [PATCH V8 0/2] " Anshuman Khandual
2019-09-23 5:43 ` [PATCH V8 1/2] arm64/mm: Hold memory hotplug lock while walking for kernel page table dump Anshuman Khandual
2019-09-23 5:43 ` [PATCH V8 2/2] arm64/mm: Enable memory hot remove Anshuman Khandual
2019-09-23 11:17 ` Matthew Wilcox
2019-09-24 8:41 ` Anshuman Khandual
2019-09-23 17:39 ` kbuild test robot
2019-10-07 14:17 ` Catalin Marinas [this message]
2019-10-08 4:36 ` Anshuman Khandual
2019-10-08 10:55 ` Catalin Marinas
2019-10-08 11:48 ` Anshuman Khandual