From: Mark Rutland <mark.rutland@arm.com>
To: Yury Norov <yury.norov@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
Nicholas Piggin <npiggin@gmail.com>,
Ding Tianhong <dingtianhong@huawei.com>,
Anshuman Khandual <anshuman.khandual@arm.com>,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [RFC PATCH] arm64: don't vmap() invalid page
Date: Wed, 19 Jan 2022 09:55:14 +0000
Message-ID: <20220119095514.GA42995@C02TD0UTHF1T.local>
In-Reply-To: <20220118185354.464517-1-yury.norov@gmail.com>
On Tue, Jan 18, 2022 at 10:53:54AM -0800, Yury Norov wrote:
> vmap() takes struct page *pages as one of its arguments, and a user may
> provide an invalid pointer, which would lead to a DABT at address translation
> later. Currently, the kernel checks the pages against NULL. In my case,
> however, the address was not NULL, and was big enough that the hardware
> generated an Address Size Abort.
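For the mechanics being described above, a minimal hypothetical sketch (the
caller, the bogus pointer value, and the function name are invented for
illustration; the real call site is not shown in this thread):

	#include <linux/vmalloc.h>
	#include <linux/mm.h>

	/*
	 * A struct page pointer that was never obtained from the page
	 * allocator, but is non-NULL, gets past the existing
	 * WARN_ON(!page) check in vmap_pages_pte_range().  The resulting
	 * PTE can encode an output address beyond the implemented PA
	 * range, and the Address Size Abort only fires when the mapping
	 * is translated later.
	 */
	static void *bogus_vmap_sketch(void)
	{
		struct page *pages[1];

		pages[0] = (struct page *)0xdead000000000100UL; /* bogus, non-NULL */
		return vmap(pages, 1, VM_MAP, PAGE_KERNEL);
	}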
Can you give an example of when this might happen? It sounds like you're
actually hitting this, so a backtrace would be nice.
I'm a bit confused as to why we'd try to vmap() pages that we didn't
have a legitimate struct page for -- where did these addresses come
from?
It sounds like this is going wrong at a higher level, and we're passing
entirely bogus struct page pointers around. This seems like the sort of
thing DEBUG_VIRTUAL or similar should check when we initially generate
the struct page pointer.
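A rough sketch of the kind of early check meant here; the helper name is made
up, and this is not the actual DEBUG_VIRTUAL implementation (which instead adds
sanity checks to the virt-to-phys conversion helpers):

	#include <linux/mm.h>
	#include <linux/bug.h>

	/* Validate the pointer where it is first derived, rather than deep
	 * inside the vmap() page-table walk. */
	static inline struct page *checked_virt_to_page(const void *addr)
	{
		if (WARN_ON(!virt_addr_valid(addr)))	/* not a linear-map pointer */
			return NULL;	/* NULL is caught by the existing vmap() check */
		return virt_to_page(addr);
	}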
Thanks,
Mark.
> Interestingly, this abort happens even if copy_from_kernel_nofault() is used,
> which is quite inconvenient for debugging purposes.
>
> This patch adds an arch_vmap_page_valid() helper into the vmap() path, so that
> architectures may add arch-specific checks on the pointer passed into vmap().
>
> For arm64, if the page passed to vmap() corresponds to a physical address
> greater than the maximum possible value described by the TCR_EL1.IPS field,
> the subsequent table walk would generate an Address Size Abort. Instead of
> creating the invalid mapping, the kernel will return -ERANGE in such a situation.
>
> Signed-off-by: Yury Norov <yury.norov@gmail.com>
> ---
> arch/arm64/include/asm/vmalloc.h | 41 ++++++++++++++++++++++++++++++++
> include/linux/vmalloc.h          |  7 ++++++
> mm/vmalloc.c                     |  8 +++++--
> 3 files changed, 54 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/vmalloc.h b/arch/arm64/include/asm/vmalloc.h
> index b9185503feae..e9d43ee019ad 100644
> --- a/arch/arm64/include/asm/vmalloc.h
> +++ b/arch/arm64/include/asm/vmalloc.h
> @@ -4,6 +4,47 @@
> #include <asm/page.h>
> #include <asm/pgtable.h>
>
> +static inline u64 pa_size(u64 ips)
> +{
> +	switch (ips) {
> +	case 0b000:
> +		return 1UL << 32;
> +	case 0b001:
> +		return 1UL << 36;
> +	case 0b010:
> +		return 1UL << 40;
> +	case 0b011:
> +		return 1UL << 42;
> +	case 0b100:
> +		return 1UL << 44;
> +	case 0b101:
> +		return 1UL << 48;
> +	case 0b110:
> +		return 1UL << 52;
> +	/* All other values */
> +	default:
> +		return 1UL << 52;
> +	}
> +}
> +
> +#define arch_vmap_page_valid arch_vmap_page_valid
> +static inline int arch_vmap_page_valid(struct page *page)
> +{
> +	u64 tcr, ips, paddr_size;
> +
> +	if (!page)
> +		return -ENOMEM;
> +
> +	tcr = read_sysreg_s(SYS_TCR_EL1);
> +	ips = (tcr & TCR_IPS_MASK) >> TCR_IPS_SHIFT;
> +
> +	paddr_size = pa_size(ips);
> +	if (page_to_phys(page) >= paddr_size)
> +		return -ERANGE;
> +
> +	return 0;
> +}
> +
> #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
>
> #define arch_vmap_pud_supported arch_vmap_pud_supported
> diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> index 6e022cc712e6..08b567d8bafc 100644
> --- a/include/linux/vmalloc.h
> +++ b/include/linux/vmalloc.h
> @@ -119,6 +119,13 @@ static inline int arch_vmap_pte_supported_shift(unsigned long size)
> }
> #endif
>
> +#ifndef arch_vmap_page_valid
> +static inline int arch_vmap_page_valid(struct page *page)
> +{
> +	return page ? 0 : -ENOMEM;
> +}
> +#endif
> +
> /*
> * Highlevel APIs for driver use
> */
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index d2a00ad4e1dd..ee0384405cdd 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -472,11 +472,15 @@ static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
> 		return -ENOMEM;
> 	do {
> 		struct page *page = pages[*nr];
> +		int ret;
>
> 		if (WARN_ON(!pte_none(*pte)))
> 			return -EBUSY;
> -		if (WARN_ON(!page))
> -			return -ENOMEM;
> +
> +		ret = arch_vmap_page_valid(page);
> +		if (WARN_ON(ret))
> +			return ret;
> +
> 		set_pte_at(&init_mm, addr, pte, mk_pte(page, prot));
> 		(*nr)++;
> 	} while (pte++, addr += PAGE_SIZE, addr != end);
> --
> 2.30.2
>
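As a concrete worked example of the check the patch adds: on a system where
TCR_EL1.IPS reads 0b010, the implemented physical address space is 40 bits, so
pa_size() returns 1UL << 40, and a page whose page_to_phys() is at or above
1 << 40 makes arch_vmap_page_valid() return -ERANGE -- the PTE is never written
and vmap() fails instead of leaving a mapping that aborts on a later table
walk. (The IPS value here is just an illustrative choice; the decoding follows
the table in pa_size() above.)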
Thread overview: 4+ messages in thread
2022-01-18 18:53 [RFC PATCH] arm64: don't vmap() invalid page Yury Norov
2022-01-18 18:56 ` Matthew Wilcox
2022-01-18 19:04   ` Yury Norov
2022-01-19  9:55 ` Mark Rutland [this message]