From: "Ritesh Harjani (IBM)" <ritesh.list@gmail.com>
To: linuxppc-dev@lists.ozlabs.org
Cc: kasan-dev@googlegroups.com, linux-mm@kvack.org,
Marco Elver <elver@google.com>,
Alexander Potapenko <glider@google.com>,
Heiko Carstens <hca@linux.ibm.com>,
Michael Ellerman <mpe@ellerman.id.au>,
Nicholas Piggin <npiggin@gmail.com>,
Madhavan Srinivasan <maddy@linux.ibm.com>,
Christophe Leroy <christophe.leroy@csgroup.eu>,
Hari Bathini <hbathini@linux.ibm.com>,
"Aneesh Kumar K . V" <aneesh.kumar@kernel.org>,
Donet Tom <donettom@linux.vnet.ibm.com>,
Pavithra Prakash <pavrampu@linux.vnet.ibm.com>,
LKML <linux-kernel@vger.kernel.org>,
"Ritesh Harjani (IBM)" <ritesh.list@gmail.com>
Subject: [PATCH v3 07/12] book3s64/hash: Make kernel_map_linear_page() generic
Date: Fri, 18 Oct 2024 22:59:48 +0530
Message-ID: <5b67df7b29e68d7c78d6fc1f42d41137299bac6b.1729271995.git.ritesh.list@gmail.com>
In-Reply-To: <cover.1729271995.git.ritesh.list@gmail.com>
Currently kernel_map_linear_page() assumes it is always working on the
linear_map_hash_slots array. Since later patches will need a separate
linear map array for kfence, make kernel_map_linear_page() take the
linear map array and its lock as function arguments.
This is needed to separate out kfence from the debug_pagealloc
infrastructure.
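For illustration, a later kfence user could then supply its own slot
array and lock (a hypothetical sketch; kfence_linear_map_slots,
kfence_linear_map_lock and kfence_map_page() are placeholder names, not
part of this patch):

	static u8 *kfence_linear_map_slots;	/* placeholder name */
	static DEFINE_RAW_SPINLOCK(kfence_linear_map_lock);	/* placeholder */

	static void kfence_map_page(unsigned long vaddr, unsigned long idx,
				    int enable)
	{
		/* Reuse the now-generic helpers with kfence's own state. */
		if (enable)
			kernel_map_linear_page(vaddr, idx,
					       kfence_linear_map_slots,
					       &kfence_linear_map_lock);
		else
			kernel_unmap_linear_page(vaddr, idx,
						 kfence_linear_map_slots,
						 &kfence_linear_map_lock);
	}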
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
---
arch/powerpc/mm/book3s64/hash_utils.c | 47 ++++++++++++++-------------
1 file changed, 25 insertions(+), 22 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index ab50bb33a390..11975a2f7403 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -272,11 +272,8 @@ void hash__tlbiel_all(unsigned int action)
}
#ifdef CONFIG_DEBUG_PAGEALLOC
-static u8 *linear_map_hash_slots;
-static unsigned long linear_map_hash_count;
-static DEFINE_RAW_SPINLOCK(linear_map_hash_lock);
-
-static void kernel_map_linear_page(unsigned long vaddr, unsigned long lmi)
+static void kernel_map_linear_page(unsigned long vaddr, unsigned long idx,
+ u8 *slots, raw_spinlock_t *lock)
{
unsigned long hash;
unsigned long vsid = get_kernel_vsid(vaddr, mmu_kernel_ssize);
@@ -290,7 +287,7 @@ static void kernel_map_linear_page(unsigned long vaddr, unsigned long lmi)
if (!vsid)
return;
- if (linear_map_hash_slots[lmi] & 0x80)
+ if (slots[idx] & 0x80)
return;
ret = hpte_insert_repeating(hash, vpn, __pa(vaddr), mode,
@@ -298,36 +295,40 @@ static void kernel_map_linear_page(unsigned long vaddr, unsigned long lmi)
mmu_linear_psize, mmu_kernel_ssize);
BUG_ON (ret < 0);
- raw_spin_lock(&linear_map_hash_lock);
- BUG_ON(linear_map_hash_slots[lmi] & 0x80);
- linear_map_hash_slots[lmi] = ret | 0x80;
- raw_spin_unlock(&linear_map_hash_lock);
+ raw_spin_lock(lock);
+ BUG_ON(slots[idx] & 0x80);
+ slots[idx] = ret | 0x80;
+ raw_spin_unlock(lock);
}
-static void kernel_unmap_linear_page(unsigned long vaddr, unsigned long lmi)
+static void kernel_unmap_linear_page(unsigned long vaddr, unsigned long idx,
+ u8 *slots, raw_spinlock_t *lock)
{
- unsigned long hash, hidx, slot;
+ unsigned long hash, hslot, slot;
unsigned long vsid = get_kernel_vsid(vaddr, mmu_kernel_ssize);
unsigned long vpn = hpt_vpn(vaddr, vsid, mmu_kernel_ssize);
hash = hpt_hash(vpn, PAGE_SHIFT, mmu_kernel_ssize);
- raw_spin_lock(&linear_map_hash_lock);
- if (!(linear_map_hash_slots[lmi] & 0x80)) {
- raw_spin_unlock(&linear_map_hash_lock);
+ raw_spin_lock(lock);
+ if (!(slots[idx] & 0x80)) {
+ raw_spin_unlock(lock);
return;
}
- hidx = linear_map_hash_slots[lmi] & 0x7f;
- linear_map_hash_slots[lmi] = 0;
- raw_spin_unlock(&linear_map_hash_lock);
- if (hidx & _PTEIDX_SECONDARY)
+ hslot = slots[idx] & 0x7f;
+ slots[idx] = 0;
+ raw_spin_unlock(lock);
+ if (hslot & _PTEIDX_SECONDARY)
hash = ~hash;
slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
- slot += hidx & _PTEIDX_GROUP_IX;
+ slot += hslot & _PTEIDX_GROUP_IX;
mmu_hash_ops.hpte_invalidate(slot, vpn, mmu_linear_psize,
mmu_linear_psize,
mmu_kernel_ssize, 0);
}
+static u8 *linear_map_hash_slots;
+static unsigned long linear_map_hash_count;
+static DEFINE_RAW_SPINLOCK(linear_map_hash_lock);
static inline void hash_debug_pagealloc_alloc_slots(void)
{
if (!debug_pagealloc_enabled())
@@ -362,9 +363,11 @@ static int hash_debug_pagealloc_map_pages(struct page *page, int numpages,
if (lmi >= linear_map_hash_count)
continue;
if (enable)
- kernel_map_linear_page(vaddr, lmi);
+ kernel_map_linear_page(vaddr, lmi,
+ linear_map_hash_slots, &linear_map_hash_lock);
else
- kernel_unmap_linear_page(vaddr, lmi);
+ kernel_unmap_linear_page(vaddr, lmi,
+ linear_map_hash_slots, &linear_map_hash_lock);
}
local_irq_restore(flags);
return 0;
--
2.46.0