From: Yisheng Xie <xieyisheng1@huawei.com>
To: Tycho Andersen <tycho@docker.com>
Cc: Juerg Haefliger <juerg.haefliger@canonical.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
kernel-hardening@lists.openwall.com,
Marco Benatto <marco.antonio.780@gmail.com>,
x86@kernel.org
Subject: Re: [PATCH v6 03/11] mm, x86: Add support for eXclusive Page Frame Ownership (XPFO)
Date: Thu, 14 Sep 2017 14:15:25 +0800
Message-ID: <689b2f87-46c5-b4be-67e0-a6639c420191@huawei.com>
In-Reply-To: <20170912181303.aqjj5ri3mhscw63t@docker>

Hi Tycho,

On 2017/9/13 2:13, Tycho Andersen wrote:
> Hi Yisheng,
>
>> On Tue, Sep 12, 2017 at 04:05:22PM +0800, Yisheng Xie wrote:
>>> IMO, before a page is allocated it is in the buddy system, which means it is free
>>> and has no mapping other than the direct map. Then, if the page is allocated
>>> to userspace, XPFO should unmap it from the direct map; otherwise ret2dir may work
>>> in this window before it is freed. Or maybe I'm still missing something.
>>
>> I agree that it seems broken. I'm just not sure why the test doesn't
>> fail. It's certainly worth understanding.
>
> Ok, so I think what's going on is that the page *is* mapped and unmapped by the
> kernel as Juerg described, but only in certain cases. See prep_new_page(),
> which has the following:
>
> 	if (!free_pages_prezeroed() && (gfp_flags & __GFP_ZERO))
> 		for (i = 0; i < (1 << order); i++)
> 			clear_highpage(page + i);
>
> clear_highpage() maps and unmaps the pages, so that's why xpfo works with this
> set.
Oh, I really missed this point, since we need to zero the memory before userspace gets it.
Thanks a lot for figuring this out.
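
Just to make sure I understand the path: clear_highpage() in include/linux/highmem.h
is, as far as I can see, roughly the following (the comments are only my reading of
how it interacts with this series):

	static inline void clear_highpage(struct page *page)
	{
		/* kmap_atomic() goes through xpfo_kmap(), which maps the
		 * page back into the kernel if it had been unmapped */
		void *kaddr = kmap_atomic(page);

		/* zero the page before userspace can see it */
		clear_page(kaddr);

		/* kunmap_atomic() goes through xpfo_kunmap(), which removes
		 * the direct-map entry again for user pages */
		kunmap_atomic(kaddr);
	}

So by the time the page reaches userspace, the kunmap has already dropped it from the
direct map, which would explain why XPFO_READ_USER passes in the __GFP_ZERO case even
without an explicit unmap in xpfo_alloc_pages().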
>
> I tried with CONFIG_PAGE_POISONING_ZERO=y and page_poison=y, and the
> XPFO_READ_USER test does not fail, i.e. the read succeeds. So, I think we need
> to include this zeroing condition in xpfo_alloc_pages(), something like the
> patch below. Unfortunately, this fails to boot for me, probably for an
> unrelated reason that I'll look into.
Yes, it seems this case needs to be fixed, and I am also a little puzzled about why the boot fails.
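
One more data point, if I am reading mm/page_alloc.c right: free_pages_prezeroed()
is just

	static inline bool free_pages_prezeroed(void)
	{
		/* pages coming out of the buddy allocator are already zeroed
		 * when poisoning with the zero pattern is enabled */
		return IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) &&
			page_poisoning_enabled();
	}

so with CONFIG_PAGE_POISONING_ZERO=y and page_poison=y the clear_highpage() loop in
prep_new_page() is skipped entirely, nothing ever kmaps/kunmaps the page, and the read
in XPFO_READ_USER can succeed, which matches what you saw. Passing the needs_zero
condition down to xpfo_alloc_pages() as in your draft looks like the right direction
to me.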
Thanks
Yisheng Xie
>
> Thanks a lot!
>
> Tycho
>
>
> From bfc21a6438cf8c56741af94cac939f1b0f63752c Mon Sep 17 00:00:00 2001
> From: Tycho Andersen <tycho@docker.com>
> Date: Tue, 12 Sep 2017 12:06:41 -0600
> Subject: [PATCH] draft of unmapping patch
>
> Signed-off-by: Tycho Andersen <tycho@docker.com>
> ---
> include/linux/xpfo.h | 5 +++--
> mm/compaction.c | 2 +-
> mm/internal.h | 2 +-
> mm/page_alloc.c | 10 ++++++----
> mm/xpfo.c | 10 ++++++++--
> 5 files changed, 19 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/xpfo.h b/include/linux/xpfo.h
> index b24be9ac4a2d..c991bf7f051d 100644
> --- a/include/linux/xpfo.h
> +++ b/include/linux/xpfo.h
> @@ -29,7 +29,7 @@ void xpfo_flush_kernel_tlb(struct page *page, int order);
>
> void xpfo_kmap(void *kaddr, struct page *page);
> void xpfo_kunmap(void *kaddr, struct page *page);
> -void xpfo_alloc_pages(struct page *page, int order, gfp_t gfp);
> +void xpfo_alloc_pages(struct page *page, int order, gfp_t gfp, bool will_map);
> void xpfo_free_pages(struct page *page, int order);
>
> bool xpfo_page_is_unmapped(struct page *page);
> @@ -49,7 +49,8 @@ void xpfo_temp_unmap(const void *addr, size_t size, void **mapping,
>
> static inline void xpfo_kmap(void *kaddr, struct page *page) { }
> static inline void xpfo_kunmap(void *kaddr, struct page *page) { }
> -static inline void xpfo_alloc_pages(struct page *page, int order, gfp_t gfp) { }
> +static inline void xpfo_alloc_pages(struct page *page, int order, gfp_t gfp,
> + bool will_map) { }
> static inline void xpfo_free_pages(struct page *page, int order) { }
>
> static inline bool xpfo_page_is_unmapped(struct page *page) { return false; }
> diff --git a/mm/compaction.c b/mm/compaction.c
> index fb548e4c7bd4..9a222258e65c 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -76,7 +76,7 @@ static void map_pages(struct list_head *list)
> order = page_private(page);
> nr_pages = 1 << order;
>
> - post_alloc_hook(page, order, __GFP_MOVABLE);
> + post_alloc_hook(page, order, __GFP_MOVABLE, false);
> if (order)
> split_page(page, order);
>
> diff --git a/mm/internal.h b/mm/internal.h
> index 4ef49fc55e58..1a0331ec2b2d 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -165,7 +165,7 @@ extern void __free_pages_bootmem(struct page *page, unsigned long pfn,
> unsigned int order);
> extern void prep_compound_page(struct page *page, unsigned int order);
> extern void post_alloc_hook(struct page *page, unsigned int order,
> - gfp_t gfp_flags);
> + gfp_t gfp_flags, bool will_map);
> extern int user_min_free_kbytes;
>
> #if defined CONFIG_COMPACTION || defined CONFIG_CMA
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 09fdf1bad21f..f73809847c58 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1750,7 +1750,7 @@ static bool check_new_pages(struct page *page, unsigned int order)
> }
>
> inline void post_alloc_hook(struct page *page, unsigned int order,
> - gfp_t gfp_flags)
> + gfp_t gfp_flags, bool will_map)
> {
> set_page_private(page, 0);
> set_page_refcounted(page);
> @@ -1759,18 +1759,20 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
> kernel_map_pages(page, 1 << order, 1);
> kernel_poison_pages(page, 1 << order, 1);
> kasan_alloc_pages(page, order);
> - xpfo_alloc_pages(page, order, gfp_flags);
> + xpfo_alloc_pages(page, order, gfp_flags, will_map);
> set_page_owner(page, order, gfp_flags);
> }
>
> +extern bool xpfo_test;
> static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
> unsigned int alloc_flags)
> {
> int i;
> + bool needs_zero = !free_pages_prezeroed() && (gfp_flags & __GFP_ZERO);
>
> - post_alloc_hook(page, order, gfp_flags);
> + post_alloc_hook(page, order, gfp_flags, needs_zero);
>
> - if (!free_pages_prezeroed() && (gfp_flags & __GFP_ZERO))
> + if (needs_zero)
> for (i = 0; i < (1 << order); i++)
> clear_highpage(page + i);
>
> diff --git a/mm/xpfo.c b/mm/xpfo.c
> index ca5d4d1838f9..dd25e24213fe 100644
> --- a/mm/xpfo.c
> +++ b/mm/xpfo.c
> @@ -86,7 +86,7 @@ static inline struct xpfo *lookup_xpfo(struct page *page)
> return (void *)page_ext + page_xpfo_ops.offset;
> }
>
> -void xpfo_alloc_pages(struct page *page, int order, gfp_t gfp)
> +void xpfo_alloc_pages(struct page *page, int order, gfp_t gfp, bool will_map)
> {
> int i, flush_tlb = 0;
> struct xpfo *xpfo;
> @@ -116,8 +116,14 @@ void xpfo_alloc_pages(struct page *page, int order, gfp_t gfp)
> * Tag the page as a user page and flush the TLB if it
> * was previously allocated to the kernel.
> */
> - if (!test_and_set_bit(XPFO_PAGE_USER, &xpfo->flags))
> + bool was_user = !test_and_set_bit(XPFO_PAGE_USER,
> + &xpfo->flags);
> +
> + if (was_user || !will_map) {
> + set_kpte(page_address(page + i), page + i,
> + __pgprot(0));
> flush_tlb = 1;
> + }
> } else {
> /* Tag the page as a non-user (kernel) page */
> clear_bit(XPFO_PAGE_USER, &xpfo->flags);
>