From: Mark Rutland <mark.rutland@arm.com>
To: Daniel Axtens <dja@axtens.net>
Cc: kasan-dev@googlegroups.com, linux-mm@kvack.org, x86@kernel.org,
aryabinin@virtuozzo.com, glider@google.com, luto@kernel.org,
linux-kernel@vger.kernel.org, dvyukov@google.com
Subject: Re: [PATCH v3 1/3] kasan: support backing vmalloc space with real shadow memory
Date: Fri, 9 Aug 2019 13:37:46 +0100 [thread overview]
Message-ID: <20190809123745.GG48423@lakrids.cambridge.arm.com> (raw)
In-Reply-To: <20190808135037.GA47131@lakrids.cambridge.arm.com>
On Thu, Aug 08, 2019 at 02:50:37PM +0100, Mark Rutland wrote:
> From looking at this for a while, there are a few more things we should
> sort out:
> * We can use the split pmd locks (used by both x86 and arm64) to
> minimize contention on the init_mm ptl. As apply_to_page_range()
> doesn't pass the corresponding pmd in, we'll have to re-walk the table
> in the callback, but I suspect that's better than having all vmalloc
> operations contend on the same ptl.
Just to point out: I was wrong about this. We don't initialise the split
pmd locks for the kernel page tables, so we have to use the init_mm ptl.
I've fixed that up in my kasan/vmalloc branch as below, which works for
me on arm64 (with another patch to prevent arm64 from using early shadow
for the vmalloc area).
Thanks,
Mark.
----
static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
				      void *unused)
{
	unsigned long page;
	pte_t pte;

	if (likely(!pte_none(*ptep)))
		return 0;

	page = __get_free_page(GFP_KERNEL);
	if (!page)
		return -ENOMEM;

	memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
	pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);

	/*
	 * Ensure poisoning is visible before the shadow is made visible
	 * to other CPUs.
	 */
	smp_wmb();

	spin_lock(&init_mm.page_table_lock);
	if (likely(pte_none(*ptep))) {
		set_pte_at(&init_mm, addr, ptep, pte);
		page = 0;
	}
	spin_unlock(&init_mm.page_table_lock);

	if (page)
		free_page(page);

	return 0;
}
int kasan_populate_vmalloc(unsigned long requested_size, struct vm_struct *area)
{
	unsigned long shadow_start, shadow_end;
	int ret;

	shadow_start = (unsigned long)kasan_mem_to_shadow(area->addr);
	shadow_start = ALIGN_DOWN(shadow_start, PAGE_SIZE);
	shadow_end = (unsigned long)kasan_mem_to_shadow(area->addr + area->size);
	shadow_end = ALIGN(shadow_end, PAGE_SIZE);

	ret = apply_to_page_range(&init_mm, shadow_start,
				  shadow_end - shadow_start,
				  kasan_populate_vmalloc_pte, NULL);
	if (ret)
		return ret;

	kasan_unpoison_shadow(area->addr, requested_size);

	/*
	 * We have to poison the remainder of the allocation each time, not
	 * just when the shadow page is first allocated, because vmalloc may
	 * reuse addresses, and an early large allocation would cause us to
	 * miss OOBs in future smaller allocations.
	 *
	 * The alternative is to poison the shadow on vfree()/vunmap(). We
	 * don't, because unmapping the virtual addresses should be
	 * sufficient to find most UAFs.
	 */
	requested_size = round_up(requested_size, KASAN_SHADOW_SCALE_SIZE);
	kasan_poison_shadow(area->addr + requested_size,
			    area->size - requested_size,
			    KASAN_VMALLOC_INVALID);

	return 0;
}
Thread overview: 10+ messages
2019-07-31 7:15 [PATCH v3 0/3] " Daniel Axtens
2019-07-31 7:15 ` [PATCH v3 1/3] " Daniel Axtens
2019-08-08 13:50 ` Mark Rutland
2019-08-08 17:43 ` Mark Rutland
2019-08-09 9:54 ` Mark Rutland
2019-08-12 2:53 ` Daniel Axtens
2019-08-09 12:37 ` Mark Rutland [this message]
2019-08-09 11:54 ` Vasily Gorbik
2019-07-31 7:15 ` [PATCH v3 2/3] fork: support VMAP_STACK with KASAN_VMALLOC Daniel Axtens
2019-07-31 7:15 ` [PATCH v3 3/3] x86/kasan: support KASAN_VMALLOC Daniel Axtens