* Re: [PATCH 1/7] 2 1-byte checks more safer for memory_is_poisoned_16
@ 2018-03-19 1:02 Liuwenliang (Abbott Liu)
0 siblings, 0 replies; 3+ messages in thread
From: Liuwenliang (Abbott Liu) @ 2018-03-19 1:02 UTC (permalink / raw)
To: Russell King - ARM Linux
Cc: aryabinin, marc.zyngier, kstewart, gregkh, f.fainelli, akpm,
afzal.mohd.ma, alexander.levin, glider, dvyukov,
christoffer.dall, linux, mawilcox, pombredanne, ard.biesheuvel,
vladimir.murzin, nicolas.pitre, tglx, thgarnie, dhowells,
keescook, arnd, geert, tixy, mark.rutland, james.morse,
zhichao.huang, jinb.park7, labbott, philip, grygorii.strashko,
catalin.marinas, opendmb, kirill.shutemov, linux-arm-kernel,
linux-kernel, kasan-dev, kvmarm, linux-mm
On Sun, Mar 18, 2018 at 09:21:20PM +0800, Russell King wrote:
>On Sun, Mar 18, 2018 at 08:53:36PM +0800, Abbott Liu wrote:
>> Because some architectures' instruction sets (e.g. arm) do not support
>> unaligned accesses well, two 1-byte checks are safer than one 2-byte
>> check. The impact on performance is small because 16-byte accesses are
>> not very common.
>
>This is unnecessary:
>
>1. a load of a 16-bit quantity will work as desired on modern ARMs.
>2. Networking already relies on unaligned loads to work as per x86
> (iow, an unaligned 32-bit load loads the 32-bits at the address
> even if it's not naturally aligned, and that also goes for 16-bit
> accesses.)
>
>If these are rare (which you say above - "not too common") then it's
>much better to leave the code as-is, because it will most likely be
>faster on modern CPUs, and the impact for older generation CPUs is
>likely to be low.
Thanks for your review.
OK, I am going to remove this patch in the next version.
* [PATCH v2 0/7] KASan for arm
@ 2018-03-18 12:53 Abbott Liu
2018-03-18 12:53 ` [PATCH 1/7] 2 1-byte checks more safer for memory_is_poisoned_16 Abbott Liu
0 siblings, 1 reply; 3+ messages in thread
From: Abbott Liu @ 2018-03-18 12:53 UTC (permalink / raw)
To: linux, aryabinin, marc.zyngier, kstewart, gregkh, f.fainelli,
liuwenliang, akpm, afzal.mohd.ma, alexander.levin
Cc: glider, dvyukov, christoffer.dall, linux, mawilcox, pombredanne,
ard.biesheuvel, vladimir.murzin, nicolas.pitre, tglx, thgarnie,
dhowells, keescook, arnd, geert, tixy, mark.rutland, james.morse,
zhichao.huang, jinb.park7, labbott, philip, grygorii.strashko,
catalin.marinas, opendmb, kirill.shutemov, linux-arm-kernel,
linux-kernel, kasan-dev, kvmarm, linux-mm
Changelog:
v2 - v1
- Fixed some compile errors that occur when changing the kernel compression
  mode to lzma/xz/lzo/lz4.
---Reported by: Florian Fainelli <f.fainelli@gmail.com>,
Russell King - ARM Linux <linux@armlinux.org.uk>
- Fixed a compile error, reported by kbuild, caused by some older arm
  instruction sets (armv4t) not supporting movw/movt.
- Changed the pte flags from _L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN to
  pgprot_val(PAGE_KERNEL).
---Reported by: Russell King - ARM Linux <linux@armlinux.org.uk>
- Moved the "Enable KASan" patch to the end of the series.
---Reported by: Florian Fainelli <f.fainelli@gmail.com>,
Russell King - ARM Linux <linux@armlinux.org.uk>
- Moved the definitions of cp15 registers from
arch/arm/include/asm/kvm_hyp.h to arch/arm/include/asm/cp15.h.
---Asked by: Mark Rutland <mark.rutland@arm.com>
- Merged the following commits into the commit
  "Define the virtual space of KASan's shadow region":
  1) Define the virtual space of KASan's shadow region;
  2) Avoid cleaning the KASan shadow area's mapping table;
  3) Add KASan layout;
- Merged the following commits into the commit
  "Initialize the mapping of KASan shadow memory":
  1) Initialize the mapping of KASan shadow memory;
  2) Add support for arm LPAE;
  3) Don't need to map the shadow of KASan's shadow memory;
  ---Reported by: Russell King - ARM Linux <linux@armlinux.org.uk>
  4) Change the mapping of kasan_zero_page to read-only.
Hi all,
These patches add arch specific code for kernel address sanitizer
(see Documentation/kasan.txt).
1/8 of the kernel address space is reserved for shadow memory. There was
no hole big enough for this, so the virtual addresses for the shadow were
stolen from user space.
At the early boot stage the whole shadow region is populated with just
one physical page (kasan_zero_page). Later, this page is reused as a
read-only zero shadow for memory that KASan currently doesn't track
(vmalloc).
After the physical memory has been mapped, pages for the shadow memory
are allocated and mapped.
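For reference, the shadow address for a kernel address follows the generic
translation helper in include/linux/kasan.h (quoted below for context, one
shadow byte per 8 bytes of memory); the arm-specific part of this series
only has to pick a KASAN_SHADOW_OFFSET that makes the result land in the
region stolen above:

  static inline void *kasan_mem_to_shadow(const void *addr)
  {
  	/* KASAN_SHADOW_SCALE_SHIFT is 3: one shadow byte covers 8 bytes. */
  	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
  		+ KASAN_SHADOW_OFFSET;
  }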
KASan's stack instrumentation significantly increases stack consumption,
so CONFIG_KASAN doubles THREAD_SIZE.
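Roughly, in arch/arm/include/asm/thread_info.h this amounts to bumping the
stack order under KASAN; the exact values below are illustrative rather
than necessarily the ones used by this series:

  #ifdef CONFIG_KASAN
  #define THREAD_SIZE_ORDER	2	/* 16K stacks: KASan roughly doubles stack usage */
  #else
  #define THREAD_SIZE_ORDER	1	/* default 8K stacks on arm */
  #endif
  #define THREAD_SIZE		(PAGE_SIZE << THREAD_SIZE_ORDER)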
Functions like memset/memmove/memcpy do a lot of memory accesses.
If a bad pointer is passed to one of these functions, it is important
to catch this. Compiler instrumentation cannot do this since these
functions are written in assembly.
KASan replaces the memory functions with manually instrumented variants.
The original functions are declared as weak symbols so that the strong
definitions in mm/kasan/kasan.c can replace them. The original functions
also have aliases with a '__' prefix in the name, so the non-instrumented
variants can still be called when needed.
Some files are built without kasan instrumentation (e.g. mm/slub.c).
For such files the original mem* functions are replaced (via #define)
with the prefixed variants, which disables the memory access checks.
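A minimal sketch of that arrangement (the check_memory_region() calls
follow the existing KASan code in mm/kasan/kasan.c, and the string.h guard
mirrors what other KASan ports do; it is illustrative, not the literal arm
diff):

  /* mm/kasan/kasan.c: strong, instrumented definition overrides the
   * weak assembly memcpy; __memcpy is the non-instrumented alias. */
  void *memcpy(void *dest, const void *src, size_t len)
  {
  	check_memory_region((unsigned long)src, len, false, _RET_IP_);
  	check_memory_region((unsigned long)dest, len, true, _RET_IP_);

  	return __memcpy(dest, src, len);
  }

  /* arch/arm/include/asm/string.h: files built without instrumentation
   * are routed straight to the '__' variants. */
  #if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
  #define memcpy(dst, src, len) __memcpy(dst, src, len)
  #endif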
On the arm LPAE architecture, the mapping table of KASan shadow memory
(if PAGE_OFFSET is 0xc0000000, the KASan shadow memory's virtual space is
0xb6e00000~0xbf000000) can't be filled in the do_translation_fault
function, because kasan instrumentation may cause do_translation_fault
itself to access KASan shadow memory. Accessing KASan shadow memory from
within do_translation_fault could then recurse endlessly. So the mapping
table of KASan shadow memory needs to be copied in the pgd_alloc function.
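The copy itself is small; roughly (the helper name and bounds below are
illustrative, the real change lives in arch/arm/mm/pgd.c):

  #ifdef CONFIG_KASAN
  /* Copy the kernel's shadow-region pgd entries into a newly allocated
   * pgd so that shadow accesses never need a translation fault. */
  static void kasan_copy_shadow_pgd(pgd_t *new_pgd)
  {
  	unsigned long start = KASAN_SHADOW_START;
  	unsigned long end = KASAN_SHADOW_END;

  	memcpy(new_pgd + pgd_index(start), pgd_offset_k(start),
  	       sizeof(pgd_t) * (pgd_index(end) - pgd_index(start)));
  }
  #endif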
Most of the code comes from:
https://github.com/aryabinin/linux/commit/0b54f17e70ff50a902c4af05bb92716eb95acefe
These patches have been tested on vexpress-ca15 and vexpress-ca9.
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Tested-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Abbott Liu (3):
2 1-byte checks more safer for memory_is_poisoned_16
Add TTBR operator for kasan_init
Define the virtual space of KASan's shadow region
Andrey Ryabinin (4):
Disable instrumentation for some code
Replace memory function for kasan
Initialize the mapping of KASan shadow memory
Enable KASan for arm
arch/arm/Kconfig | 1 +
arch/arm/boot/compressed/Makefile | 1 +
arch/arm/boot/compressed/decompress.c | 2 +
arch/arm/boot/compressed/libfdt_env.h | 2 +
arch/arm/include/asm/cp15.h | 104 ++++++++++++
arch/arm/include/asm/kasan.h | 23 +++
arch/arm/include/asm/kasan_def.h | 52 ++++++
arch/arm/include/asm/kvm_hyp.h | 52 ------
arch/arm/include/asm/memory.h | 5 +
arch/arm/include/asm/pgalloc.h | 7 +-
arch/arm/include/asm/string.h | 17 ++
arch/arm/include/asm/thread_info.h | 4 +
arch/arm/kernel/entry-armv.S | 5 +-
arch/arm/kernel/entry-common.S | 6 +-
arch/arm/kernel/head-common.S | 7 +-
arch/arm/kernel/setup.c | 2 +
arch/arm/kernel/unwind.c | 3 +-
arch/arm/kvm/hyp/cp15-sr.c | 12 +-
arch/arm/kvm/hyp/switch.c | 6 +-
arch/arm/lib/memcpy.S | 3 +
arch/arm/lib/memmove.S | 5 +-
arch/arm/lib/memset.S | 3 +
arch/arm/mm/Makefile | 3 +
arch/arm/mm/init.c | 6 +
arch/arm/mm/kasan_init.c | 290 ++++++++++++++++++++++++++++++++++
arch/arm/mm/mmu.c | 7 +-
arch/arm/mm/pgd.c | 14 ++
arch/arm/vdso/Makefile | 2 +
mm/kasan/kasan.c | 24 ++-
29 files changed, 588 insertions(+), 80 deletions(-)
create mode 100644 arch/arm/include/asm/kasan.h
create mode 100644 arch/arm/include/asm/kasan_def.h
create mode 100644 arch/arm/mm/kasan_init.c
--
2.9.0
* [PATCH 1/7] 2 1-byte checks more safer for memory_is_poisoned_16
2018-03-18 12:53 [PATCH v2 0/7] KASan for arm Abbott Liu
@ 2018-03-18 12:53 ` Abbott Liu
2018-03-18 13:21 ` Russell King - ARM Linux
0 siblings, 1 reply; 3+ messages in thread
From: Abbott Liu @ 2018-03-18 12:53 UTC (permalink / raw)
To: linux, aryabinin, marc.zyngier, kstewart, gregkh, f.fainelli,
liuwenliang, akpm, afzal.mohd.ma, alexander.levin
Cc: glider, dvyukov, christoffer.dall, linux, mawilcox, pombredanne,
ard.biesheuvel, vladimir.murzin, nicolas.pitre, tglx, thgarnie,
dhowells, keescook, arnd, geert, tixy, mark.rutland, james.morse,
zhichao.huang, jinb.park7, labbott, philip, grygorii.strashko,
catalin.marinas, opendmb, kirill.shutemov, linux-arm-kernel,
linux-kernel, kasan-dev, kvmarm, linux-mm
Because some architectures' instruction sets (e.g. arm) do not support
unaligned accesses well, two 1-byte checks are safer than one 2-byte
check. The impact on performance is small because 16-byte accesses are
not very common.
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Russell King - ARM Linux <linux@armlinux.org.uk>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
---
mm/kasan/kasan.c | 19 +++++++++++++------
1 file changed, 13 insertions(+), 6 deletions(-)
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index e13d911..104839a 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -151,13 +151,20 @@ static __always_inline bool memory_is_poisoned_2_4_8(unsigned long addr,
static __always_inline bool memory_is_poisoned_16(unsigned long addr)
{
- u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
-
- /* Unaligned 16-bytes access maps into 3 shadow bytes. */
- if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
- return *shadow_addr || memory_is_poisoned_1(addr + 15);
+ u8 *shadow_addr = (u8 *)kasan_mem_to_shadow((void *)addr);
- return *shadow_addr;
+ if (unlikely(shadow_addr[0] || shadow_addr[1])) {
+ return true;
+ } else if (likely(IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE))) {
+ /*
+	 * If two shadow bytes cover the 16-byte access, we don't
+ * need to do anything more. Otherwise, test the last
+ * shadow byte.
+ */
+ return false;
+ } else {
+ return memory_is_poisoned_1(addr + 15);
+ }
}
static __always_inline unsigned long bytes_is_nonzero(const u8 *start,
--
2.9.0
* Re: [PATCH 1/7] 2 1-byte checks more safer for memory_is_poisoned_16
2018-03-18 12:53 ` [PATCH 1/7] 2 1-byte checks more safer for memory_is_poisoned_16 Abbott Liu
@ 2018-03-18 13:21 ` Russell King - ARM Linux
0 siblings, 0 replies; 3+ messages in thread
From: Russell King - ARM Linux @ 2018-03-18 13:21 UTC (permalink / raw)
To: Abbott Liu
Cc: aryabinin, marc.zyngier, kstewart, gregkh, f.fainelli, akpm,
afzal.mohd.ma, alexander.levin, glider, dvyukov,
christoffer.dall, linux, mawilcox, pombredanne, ard.biesheuvel,
vladimir.murzin, nicolas.pitre, tglx, thgarnie, dhowells,
keescook, arnd, geert, tixy, mark.rutland, james.morse,
zhichao.huang, jinb.park7, labbott, philip, grygorii.strashko,
catalin.marinas, opendmb, kirill.shutemov, linux-arm-kernel,
linux-kernel, kasan-dev, kvmarm, linux-mm
On Sun, Mar 18, 2018 at 08:53:36PM +0800, Abbott Liu wrote:
> Because some architectures' instruction sets (e.g. arm) do not support
> unaligned accesses well, two 1-byte checks are safer than one 2-byte
> check. The impact on performance is small because 16-byte accesses are
> not very common.
This is unnecessary:
1. a load of a 16-bit quantity will work as desired on modern ARMs.
2. Networking already relies on unaligned loads to work as per x86
(iow, an unaligned 32-bit load loads the 32-bits at the address
even if it's not naturally aligned, and that also goes for 16-bit
accesses.)
If these are rare (which you say above - "not too common") then it's
much better to leave the code as-is, because it will most likely be
faster on modern CPUs, and the impact for older generation CPUs is
likely to be low.
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up