* [PATCH v2] mm, slab: Check GFP_SLAB_BUG_MASK before alloc_pages in kmalloc_order
From: Long Li @ 2020-06-30 14:51 UTC (permalink / raw)
To: willy, cl, penberg, rientjes, iamjoonsoo.kim, akpm; +Cc: linux-mm, linux-kernel
In an ARM32 environment with highmem enabled, calling kmalloc() for a
large allocation with __GFP_HIGHMEM set goes through the kmalloc_order()
path. The __GFP_HIGHMEM flag causes alloc_pages() to allocate a highmem
page, which cannot be directly converted to a virtual address, so
kmalloc_order() returns NULL even though the page has been allocated,
leaking it.
With this change, GFP_SLAB_BUG_MASK is checked before allocating pages,
mirroring the check in new_slab().
Signed-off-by: Long Li <lonuxli.64@gmail.com>
---
Changes in v2:
- patch is rebased against "[PATCH] mm: Free unused pages in
kmalloc_order()" [1]
- check GFP_SLAB_BUG_MASK and generate warnings before alloc_pages
in kmalloc_order()
[1] https://lkml.org/lkml/2020/6/27/16
mm/slab_common.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index a143a8c8f874..3548f4f8374b 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -27,6 +27,7 @@
 #include <trace/events/kmem.h>
 
 #include "slab.h"
+#include "internal.h"
 
 enum slab_state slab_state;
 LIST_HEAD(slab_caches);
@@ -815,6 +816,15 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	void *ret = NULL;
 	struct page *page;
 
+	if (unlikely(flags & GFP_SLAB_BUG_MASK)) {
+		gfp_t invalid_mask = flags & GFP_SLAB_BUG_MASK;
+
+		flags &= ~GFP_SLAB_BUG_MASK;
+		pr_warn("Unexpected gfp: %#x (%pGg). Fixing up to gfp: %#x (%pGg). Fix your code!\n",
+			invalid_mask, &invalid_mask, flags, &flags);
+		dump_stack();
+	}
+
 	flags |= __GFP_COMP;
 	page = alloc_pages(flags, order);
 	if (likely(page)) {
--
2.17.1
* Re: [PATCH v2] mm, slab: Check GFP_SLAB_BUG_MASK before alloc_pages in kmalloc_order
From: Matthew Wilcox @ 2020-06-30 17:13 UTC (permalink / raw)
To: Long Li
Cc: cl, penberg, rientjes, iamjoonsoo.kim, akpm, linux-mm, linux-kernel
On Tue, Jun 30, 2020 at 02:51:55PM +0000, Long Li wrote:
> In an ARM32 environment with highmem enabled, calling kmalloc() for a
> large allocation with __GFP_HIGHMEM set goes through the kmalloc_order()
> path. The __GFP_HIGHMEM flag causes alloc_pages() to allocate a highmem
> page, which cannot be directly converted to a virtual address, so
> kmalloc_order() returns NULL even though the page has been allocated,
> leaking it.
>
> With this change, GFP_SLAB_BUG_MASK is checked before allocating pages,
> mirroring the check in new_slab().
>
> Signed-off-by: Long Li <lonuxli.64@gmail.com>
> ---
>
> Changes in v2:
> - patch is rebased against "[PATCH] mm: Free unused pages in
> kmalloc_order()" [1]
> - check GFP_SLAB_BUG_MASK and generate warnings before alloc_pages
> in kmalloc_order()
>
> [1] https://lkml.org/lkml/2020/6/27/16
>
> mm/slab_common.c | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index a143a8c8f874..3548f4f8374b 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -27,6 +27,7 @@
> #include <trace/events/kmem.h>
>
> #include "slab.h"
> +#include "internal.h"
>
> enum slab_state slab_state;
> LIST_HEAD(slab_caches);
> @@ -815,6 +816,15 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
>  	void *ret = NULL;
>  	struct page *page;
> 
> +	if (unlikely(flags & GFP_SLAB_BUG_MASK)) {
> +		gfp_t invalid_mask = flags & GFP_SLAB_BUG_MASK;
> +
> +		flags &= ~GFP_SLAB_BUG_MASK;
> +		pr_warn("Unexpected gfp: %#x (%pGg). Fixing up to gfp: %#x (%pGg). Fix your code!\n",
> +			invalid_mask, &invalid_mask, flags, &flags);
> +		dump_stack();
> +	}
> +
>  	flags |= __GFP_COMP;
Oh, this is really good! I hadn't actually looked at how slab/slub handle
GFP_SLAB_BUG_MASK. If you don't mind though, I would suggest that this
code should all be in one place. Perhaps:
gfp_t kmalloc_invalid_flags(gfp_t flags)
{
	gfp_t invalid_mask = flags & GFP_SLAB_BUG_MASK;

	flags &= ~GFP_SLAB_BUG_MASK;
	pr_warn("Unexpected gfp: %#x (%pGg). Fixing up to gfp: %#x (%pGg). Fix your code!\n",
		invalid_mask, &invalid_mask, flags, &flags);
	dump_stack();
	return flags;
}
and then call it from the three places?
Also, the changelog could do with a bit of work. Perhaps:
kmalloc cannot allocate memory from HIGHMEM. Allocating large amounts of
memory currently bypasses the check and will simply leak the memory when
page_address() returns NULL. To fix this, factor the GFP_SLAB_BUG_MASK
check out of slab & slub, and call it from kmalloc_order() as well.