linux-mm.kvack.org archive mirror
From: Steven Price <steven.price@arm.com>
To: Christophe Leroy <christophe.leroy@c-s.fr>, linux-mm@kvack.org
Cc: "Mark Rutland" <Mark.Rutland@arm.com>,
	x86@kernel.org, "James Morse" <james.morse@arm.com>,
	"Arnd Bergmann" <arnd@arndb.de>,
	"Ard Biesheuvel" <ard.biesheuvel@linaro.org>,
	"Peter Zijlstra" <peterz@infradead.org>,
	"Catalin Marinas" <catalin.marinas@arm.com>,
	"Dave Hansen" <dave.hansen@linux.intel.com>,
	"Will Deacon" <will.deacon@arm.com>,
	linux-kernel@vger.kernel.org, kvm-ppc@vger.kernel.org,
	"Jérôme Glisse" <jglisse@redhat.com>,
	"Ingo Molnar" <mingo@redhat.com>,
	"Paul Mackerras" <paulus@samba.org>,
	"Andy Lutomirski" <luto@kernel.org>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"Borislav Petkov" <bp@alien8.de>,
	"Thomas Gleixner" <tglx@linutronix.de>,
	linuxppc-dev@lists.ozlabs.org,
	linux-arm-kernel@lists.infradead.org,
	"Liang, Kan" <kan.liang@linux.intel.com>
Subject: Re: [PATCH v6 04/19] powerpc: mm: Add p?d_large() definitions
Date: Thu, 28 Mar 2019 11:00:42 +0000	[thread overview]
Message-ID: <2b7d32ce-f258-1b34-1dbf-3a05ea9a0f6b@arm.com> (raw)
In-Reply-To: <8a2efe07-b99f-3caa-fab9-47e49043bf66@c-s.fr>

On 26/03/2019 16:58, Christophe Leroy wrote:
> 
> 
>> On 26/03/2019 at 17:26, Steven Price wrote:
>> walk_page_range() is going to be allowed to walk page tables other than
>> those of user space. For this it needs to know when it has reached a
>> 'leaf' entry in the page tables. This information is provided by the
>> p?d_large() functions/macros.
>>
>> For powerpc, pmd_large() was already implemented, so hoist it out of
>> the CONFIG_TRANSPARENT_HUGEPAGE condition and implement the other
>> levels.
>>
>> Also, since pmd_large() is now always implemented, we can drop the
>> pmd_is_leaf() function.
> 
> Wouldn't it be better to drop pmd_is_leaf() in a second patch?

Fair point, I'll split this patch.

Thanks for the review,

Steve

> Christophe
> 
>>
>> CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
>> CC: Paul Mackerras <paulus@samba.org>
>> CC: Michael Ellerman <mpe@ellerman.id.au>
>> CC: linuxppc-dev@lists.ozlabs.org
>> CC: kvm-ppc@vger.kernel.org
>> Signed-off-by: Steven Price <steven.price@arm.com>
>> ---
>>   arch/powerpc/include/asm/book3s/64/pgtable.h | 30 ++++++++++++++------
>>   arch/powerpc/kvm/book3s_64_mmu_radix.c       | 12 ++------
>>   2 files changed, 24 insertions(+), 18 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> index 581f91be9dd4..f6d1ac8b832e 100644
>> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
>> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> @@ -897,6 +897,12 @@ static inline int pud_present(pud_t pud)
>>       return !!(pud_raw(pud) & cpu_to_be64(_PAGE_PRESENT));
>>   }
>>  
>> +#define pud_large    pud_large
>> +static inline int pud_large(pud_t pud)
>> +{
>> +    return !!(pud_raw(pud) & cpu_to_be64(_PAGE_PTE));
>> +}
>> +
>>   extern struct page *pud_page(pud_t pud);
>>   extern struct page *pmd_page(pmd_t pmd);
>>   static inline pte_t pud_pte(pud_t pud)
>> @@ -940,6 +946,12 @@ static inline int pgd_present(pgd_t pgd)
>>       return !!(pgd_raw(pgd) & cpu_to_be64(_PAGE_PRESENT));
>>   }
>>  
>> +#define pgd_large    pgd_large
>> +static inline int pgd_large(pgd_t pgd)
>> +{
>> +    return !!(pgd_raw(pgd) & cpu_to_be64(_PAGE_PTE));
>> +}
>> +
>>   static inline pte_t pgd_pte(pgd_t pgd)
>>   {
>>       return __pte_raw(pgd_raw(pgd));
>> @@ -1093,6 +1105,15 @@ static inline bool pmd_access_permitted(pmd_t pmd, bool write)
>>       return pte_access_permitted(pmd_pte(pmd), write);
>>   }
>>  
>> +#define pmd_large    pmd_large
>> +/*
>> + * returns true for pmd migration entries, THP, devmap, hugetlb
>> + */
>> +static inline int pmd_large(pmd_t pmd)
>> +{
>> +    return !!(pmd_raw(pmd) & cpu_to_be64(_PAGE_PTE));
>> +}
>> +
>>   #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>   extern pmd_t pfn_pmd(unsigned long pfn, pgprot_t pgprot);
>>   extern pmd_t mk_pmd(struct page *page, pgprot_t pgprot);
>> @@ -1119,15 +1140,6 @@ pmd_hugepage_update(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp,
>>       return hash__pmd_hugepage_update(mm, addr, pmdp, clr, set);
>>   }
>>  
>> -/*
>> - * returns true for pmd migration entries, THP, devmap, hugetlb
>> - * But compile time dependent on THP config
>> - */
>> -static inline int pmd_large(pmd_t pmd)
>> -{
>> -    return !!(pmd_raw(pmd) & cpu_to_be64(_PAGE_PTE));
>> -}
>> -
>>   static inline pmd_t pmd_mknotpresent(pmd_t pmd)
>>   {
>>       return __pmd(pmd_val(pmd) & ~_PAGE_PRESENT);
>> diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
>> index f55ef071883f..1b57b4e3f819 100644
>> --- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
>> +++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
>> @@ -363,12 +363,6 @@ static void kvmppc_pte_free(pte_t *ptep)
>>       kmem_cache_free(kvm_pte_cache, ptep);
>>   }
>>  
>> -/* Like pmd_huge() and pmd_large(), but works regardless of config options */
>> -static inline int pmd_is_leaf(pmd_t pmd)
>> -{
>> -    return !!(pmd_val(pmd) & _PAGE_PTE);
>> -}
>> -
>>   static pmd_t *kvmppc_pmd_alloc(void)
>>   {
>>       return kmem_cache_alloc(kvm_pmd_cache, GFP_KERNEL);
>> @@ -460,7 +454,7 @@ static void kvmppc_unmap_free_pmd(struct kvm *kvm, pmd_t *pmd, bool full,
>>       for (im = 0; im < PTRS_PER_PMD; ++im, ++p) {
>>           if (!pmd_present(*p))
>>               continue;
>> -        if (pmd_is_leaf(*p)) {
>> +        if (pmd_large(*p)) {
>>               if (full) {
>>                   pmd_clear(p);
>>               } else {
>> @@ -593,7 +587,7 @@ int kvmppc_create_pte(struct kvm *kvm, pgd_t *pgtable, pte_t pte,
>>       else if (level <= 1)
>>           new_pmd = kvmppc_pmd_alloc();
>>  
>> -    if (level == 0 && !(pmd && pmd_present(*pmd) && !pmd_is_leaf(*pmd)))
>> +    if (level == 0 && !(pmd && pmd_present(*pmd) && !pmd_large(*pmd)))
>>           new_ptep = kvmppc_pte_alloc();
>>  
>>      /* Check if we might have been invalidated; let the guest retry if so */
>> @@ -662,7 +656,7 @@ int kvmppc_create_pte(struct kvm *kvm, pgd_t *pgtable, pte_t pte,
>>           new_pmd = NULL;
>>       }
>>       pmd = pmd_offset(pud, gpa);
>> -    if (pmd_is_leaf(*pmd)) {
>> +    if (pmd_large(*pmd)) {
>>           unsigned long lgpa = gpa & PMD_MASK;
>>  
>>        /* Check if we raced and someone else has set the same thing */
>>
> 


Thread overview: 22+ messages
2019-03-26 16:26 [PATCH v6 00/19] Convert x86 & arm64 to use generic page walk Steven Price
2019-03-26 16:26 ` [PATCH v6 01/19] arc: mm: Add p?d_large() definitions Steven Price
2019-03-26 16:26 ` [PATCH v6 02/19] arm64: " Steven Price
2019-03-26 16:26 ` [PATCH v6 03/19] mips: " Steven Price
2019-03-26 16:26 ` [PATCH v6 04/19] powerpc: " Steven Price
2019-03-26 16:58   ` Christophe Leroy
2019-03-28 11:00     ` Steven Price [this message]
2019-03-26 16:26 ` [PATCH v6 05/19] riscv: " Steven Price
2019-03-26 16:26 ` [PATCH v6 06/19] s390: " Steven Price
2019-03-26 16:26 ` [PATCH v6 07/19] sparc: " Steven Price
2019-03-26 16:26 ` [PATCH v6 08/19] x86: " Steven Price
2019-03-26 16:26 ` [PATCH v6 09/19] mm: Add generic p?d_large() macros Steven Price
2019-03-26 16:26 ` [PATCH v6 10/19] mm: pagewalk: Add p4d_entry() and pgd_entry() Steven Price
2019-03-26 16:26 ` [PATCH v6 11/19] mm: pagewalk: Allow walking without vma Steven Price
2019-03-26 16:26 ` [PATCH v6 12/19] mm: pagewalk: Add test_p?d callbacks Steven Price
2019-03-26 16:26 ` [PATCH v6 13/19] arm64: mm: Convert mm/dump.c to use walk_page_range() Steven Price
2019-03-26 16:26 ` [PATCH v6 14/19] x86: mm: Don't display pages which aren't present in debugfs Steven Price
2019-03-26 16:26 ` [PATCH v6 15/19] x86: mm: Point to struct seq_file from struct pg_state Steven Price
2019-03-26 16:26 ` [PATCH v6 16/19] x86: mm+efi: Convert ptdump_walk_pgd_level() to take a mm_struct Steven Price
2019-03-26 16:26 ` [PATCH v6 17/19] x86: mm: Convert ptdump_walk_pgd_level_debugfs() to take an mm_struct Steven Price
2019-03-26 16:26 ` [PATCH v6 18/19] x86: mm: Convert ptdump_walk_pgd_level_core() " Steven Price
2019-03-26 16:26 ` [PATCH v6 19/19] x86: mm: Convert dump_pagetables to use walk_page_range Steven Price
