From: js1304@gmail.com
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>,
Alexander Potapenko <glider@google.com>,
Dmitry Vyukov <dvyukov@google.com>,
kasan-dev@googlegroups.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
kernel-team@lge.com, Joonsoo Kim <iamjoonsoo.kim@lge.com>
Subject: [PATCH v1 02/11] mm/kasan: don't fetch the next shadow value speculatively
Date: Tue, 16 May 2017 10:16:40 +0900 [thread overview]
Message-ID: <1494897409-14408-3-git-send-email-iamjoonsoo.kim@lge.com> (raw)
In-Reply-To: <1494897409-14408-1-git-send-email-iamjoonsoo.kim@lge.com>
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Fetching the next shadow value speculatively has pros and cons.
If the shadow bytes are zero, we can exit the check with a single branch.
However, the fetch can be an unaligned access, and if the next shadow
value isn't zero, we need to do an additional check. The next shadow
value can be non-zero for various reasons.
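To make the unaligned-access point concrete, here is a minimal
user-space sketch (illustrative only, not the kernel code; the
mem_to_shadow() helper and the example address are made up):

#include <stdint.h>
#include <stdio.h>

/* KASAN maps each 8 bytes of memory to one shadow byte */
static uintptr_t mem_to_shadow(uintptr_t addr)
{
	return addr >> 3;	/* the real code also adds a fixed offset */
}

int main(void)
{
	/* 8-byte aligned, but not 16-byte aligned */
	uintptr_t addr = 0x1008;
	uintptr_t shadow = mem_to_shadow(addr);

	/*
	 * shadow == 0x201 is odd, so a speculative u16 fetch of two
	 * shadow bytes from here is an unaligned load.
	 */
	printf("shadow %#lx, u16-aligned: %s\n", (unsigned long)shadow,
	       (shadow & 1) ? "no" : "yes");
	return 0;
}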
Moreover, a following patch will introduce on-demand shadow memory
allocation/mapping, and this speculative fetch would cause more stale
TLB cases.
So I think the side effects outweigh the benefit. This patch removes
the speculative fetch.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
mm/kasan/kasan.c | 104 +++++++++++++++++++++++--------------------------------
1 file changed, 44 insertions(+), 60 deletions(-)
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 85ee45b0..97d3560 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -136,90 +136,74 @@ static __always_inline bool memory_is_poisoned_1(unsigned long addr)
static __always_inline bool memory_is_poisoned_2(unsigned long addr)
{
- u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
-
- if (unlikely(*shadow_addr)) {
- if (memory_is_poisoned_1(addr + 1))
- return true;
-
- /*
- * If single shadow byte covers 2-byte access, we don't
- * need to do anything more. Otherwise, test the first
- * shadow byte.
- */
- if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
- return false;
+ if (unlikely(memory_is_poisoned_1(addr)))
+ return true;
- return unlikely(*(u8 *)shadow_addr);
- }
+ /*
+ * If single shadow byte covers 2-byte access, we don't
+ * need to do anything more. Otherwise, test the first
+ * shadow byte.
+ */
+ if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0))
+ return false;
- return false;
+ return memory_is_poisoned_1(addr + 1);
}
static __always_inline bool memory_is_poisoned_4(unsigned long addr)
{
- u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
-
- if (unlikely(*shadow_addr)) {
- if (memory_is_poisoned_1(addr + 3))
- return true;
-
- /*
- * If single shadow byte covers 4-byte access, we don't
- * need to do anything more. Otherwise, test the first
- * shadow byte.
- */
- if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
- return false;
+ if (unlikely(memory_is_poisoned_1(addr + 3)))
+ return true;
- return unlikely(*(u8 *)shadow_addr);
- }
+ /*
+ * If single shadow byte covers 4-byte access, we don't
+ * need to do anything more. Otherwise, test the first
+ * shadow byte.
+ */
+ if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3))
+ return false;
- return false;
+ return memory_is_poisoned_1(addr);
}
static __always_inline bool memory_is_poisoned_8(unsigned long addr)
{
- u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
+ u8 *shadow_addr = (u8 *)kasan_mem_to_shadow((void *)addr);
- if (unlikely(*shadow_addr)) {
- if (memory_is_poisoned_1(addr + 7))
- return true;
+ if (unlikely(*shadow_addr))
+ return true;
- /*
- * If single shadow byte covers 8-byte access, we don't
- * need to do anything more. Otherwise, test the first
- * shadow byte.
- */
- if (likely(IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
- return false;
+ /*
+ * If single shadow byte covers 8-byte access, we don't
+ * need to do anything more. Otherwise, test the first
+ * shadow byte.
+ */
+ if (likely(IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
+ return false;
- return unlikely(*(u8 *)shadow_addr);
- }
+ if (unlikely(memory_is_poisoned_1(addr + 7)))
+ return true;
return false;
}
static __always_inline bool memory_is_poisoned_16(unsigned long addr)
{
- u32 *shadow_addr = (u32 *)kasan_mem_to_shadow((void *)addr);
-
- if (unlikely(*shadow_addr)) {
- u16 shadow_first_bytes = *(u16 *)shadow_addr;
+ u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
- if (unlikely(shadow_first_bytes))
- return true;
+ if (unlikely(*shadow_addr))
+ return true;
- /*
- * If two shadow bytes covers 16-byte access, we don't
- * need to do anything more. Otherwise, test the last
- * shadow byte.
- */
- if (likely(IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
- return false;
+ /*
+ * If two shadow bytes covers 16-byte access, we don't
+ * need to do anything more. Otherwise, test the last
+ * shadow byte.
+ */
+ if (likely(IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
+ return false;
- return memory_is_poisoned_1(addr + 15);
- }
+ if (unlikely(memory_is_poisoned_1(addr + 15)))
+ return true;
return false;
}
--
2.7.4