* [PATCH v4 00/17] common kmalloc v4
@ 2022-08-17 10:18 Hyeonggon Yoo
  2022-08-17 10:18 ` [PATCH v4 01/17] mm/slab: move NUMA-related code to __do_cache_alloc() Hyeonggon Yoo
                   ` (17 more replies)
  0 siblings, 18 replies; 30+ messages in thread
From: Hyeonggon Yoo @ 2022-08-17 10:18 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Roman Gushchin
  Cc: Hyeonggon Yoo, linux-mm, linux-kernel

v3: https://lore.kernel.org/lkml/20220712133946.307181-1-42.hyeyoo@gmail.com/

Hello, this is common kmalloc v4.
Please review and consider applying.

Changes from v3 are shown as a range-diff below.
(The range-diff marks new patch 13 as added but does not include its diff.)

v3 -> v4:

	- Rebased to commit 3cc40a443a04d5 (after 6.0-rc1)

	- Added Reviewed-by: tags from Vlastimil Babka. Thanks!

	- Addressed comments from Vlastimil Babka:

		- uninline __kmalloc_large_node_notrace()

		- Adjust s->size to SLOB_UNITS(s->size) * SLOB_UNIT in
		  SLOB (see the illustration right after this list)

		- do not pass __GFP_COMP to the tracepoint in
		  kmalloc_large() and friends

		- defer evaluation of 'accounted' to TP_printk() in
		  trace_kmalloc()

		- rename kmem_cache_alloc[_node]_trace() to
		  kmalloc[_node]_trace(), move it to mm/slab_common.c,
		  and use __assume_kmalloc_alignment instead of
		  __assume_slab_alignment (new patch 13)

		- replace the word 'definition' with 'declaration' in the
		  changelog of patch 16

	- Addressed comment from Christoph Lameter:
		- replace WARN_ON() with BUG_ON() in __ksize()

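For the SLOB item above (the illustration referenced in the list): SLOB hands
out memory in whole SLOB units, so reporting the rounded-up size as bytes_alloc
matches what is actually allocated. The stand-alone snippet below only
illustrates the arithmetic; the 4-byte unit is a made-up value for the example
and SLOB_UNITS() here simply mirrors the kernel's round-up definition:

	#include <stdio.h>

	#define SLOB_UNIT		4	/* made-up unit size, for illustration only */
	#define SLOB_UNITS(size)	(((size) + SLOB_UNIT - 1) / SLOB_UNIT)

	int main(void)
	{
		size_t requested = 10;
		size_t allocated = SLOB_UNITS(requested) * SLOB_UNIT;

		/* prints: requested 10 bytes, actually allocated 12 bytes */
		printf("requested %zu bytes, actually allocated %zu bytes\n",
		       requested, allocated);
		return 0;
	}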

Any feedback or suggestions would be appreciated!

Thanks!

Hyeonggon Yoo (17):
  mm/slab: move NUMA-related code to __do_cache_alloc()
  mm/slab: cleanup slab_alloc() and slab_alloc_node()
  mm/slab_common: remove CONFIG_NUMA ifdefs for common kmalloc functions
  mm/slab_common: cleanup kmalloc_track_caller()
  mm/sl[au]b: factor out __do_kmalloc_node()
  mm/slab_common: fold kmalloc_order_trace() into kmalloc_large()
  mm/slub: move kmalloc_large_node() to slab_common.c
  mm/slab_common: kmalloc_node: pass large requests to page allocator
  mm/slab_common: cleanup kmalloc_large()
  mm/slab: kmalloc: pass requests larger than order-1 page to page
    allocator
  mm/sl[au]b: introduce common alloc/free functions without tracepoint
  mm/sl[au]b: generalize kmalloc subsystem
  mm/sl[au]b: cleanup kmem_cache_alloc[_node]_trace()
  mm/slab_common: unify NUMA and UMA version of tracepoints
  mm/slab_common: drop kmem_alloc & avoid dereferencing fields when not
    using
  mm/slab_common: move declaration of __ksize() to mm/slab.h
  mm/sl[au]b: check if large object is valid in __ksize()

 include/linux/slab.h        | 144 ++++++-----------
 include/trace/events/kmem.h |  74 +++------
 mm/slab.c                   | 305 +++++++++---------------------------
 mm/slab.h                   |  10 ++
 mm/slab_common.c            | 191 +++++++++++++++++++---
 mm/slob.c                   |  31 ++--
 mm/slub.c                   | 234 ++-------------------------
 7 files changed, 338 insertions(+), 651 deletions(-)
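
For readers who want the end state at a glance, the common kmalloc path that
the series converges on looks roughly like the sketch below. It is a paraphrase
assembled from the diffs in the range-diff that follows (the function names,
the KMALLOC_MAX_CACHE_SIZE cutoff and the tracepoint arguments are taken from
those diffs), not a copy of the final code:

	static void *__kmalloc_large_node(size_t size, gfp_t flags, int node);

	static __always_inline
	void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
	{
		struct kmem_cache *s;
		void *ret;

		/* Requests larger than an order-1 page go straight to the page allocator. */
		if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
			ret = __kmalloc_large_node(size, flags, node);
			trace_kmalloc(_RET_IP_, ret, size,
				      PAGE_SIZE << get_order(size), flags, node);
			return ret;
		}

		/* Otherwise look up the matching kmalloc cache and allocate from it. */
		s = kmalloc_slab(size, flags);
		if (unlikely(ZERO_OR_NULL_PTR(s)))
			return s;

		ret = __kmem_cache_alloc_node(s, flags, node, size, caller);
		ret = kasan_kmalloc(s, ret, size, flags);
		trace_kmalloc(_RET_IP_, ret, size, s->size, flags, node);
		return ret;
	}

With this in place SLAB and SLUB share a single kmalloc entry path in
mm/slab_common.c (SLOB keeps its own simplified variant), which is what lets
so much duplicated code be dropped from mm/slab.c and mm/slub.c.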


===== range-diff =====

git range-diff	slab-common-v3r0~16...slab-common-v3r0
		slab-common-v4r0~17...slab-common-v4r0:
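
(For reference, the notation below follows git range-diff's standard output:
'n: <v3 commit> = m: <v4 commit>' means the patch is identical in both ranges,
'!' means the patch changed and an interdiff of the two versions follows, and
'-: ------------ > m: <v4 commit>' marks a patch that exists only in the new
range, i.e. one that is new in v4.)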

 1:  c1ba6a2f28b4 !  1:  0f84e3cefd1a mm/slab: move NUMA-related code to __do_cache_alloc()
    @@ mm/slab.c: static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t fl
      	bool init = false;
      
     @@ mm/slab.c: slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_
    + 		goto out_hooks;
      
    - 	cache_alloc_debugcheck_before(cachep, flags);
      	local_irq_save(save_flags);
     -
     -	if (nodeid == NUMA_NO_NODE)
    @@ mm/slab.c: slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, s
      	return ____cache_alloc(cachep, flags);
      }
     @@ mm/slab.c: slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
    + 		goto out;
      
    - 	cache_alloc_debugcheck_before(cachep, flags);
      	local_irq_save(save_flags);
     -	objp = __do_cache_alloc(cachep, flags);
     +	objp = __do_cache_alloc(cachep, flags, NUMA_NO_NODE);
 2:  75e053d9e62f !  2:  ed66aae2655d mm/slab: cleanup slab_alloc() and slab_alloc_node()
    @@ mm/slab.c: static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t fl
     -	if (unlikely(ptr))
     -		goto out_hooks;
     -
    --	cache_alloc_debugcheck_before(cachep, flags);
     -	local_irq_save(save_flags);
     -	ptr = __do_cache_alloc(cachep, flags, nodeid);
     -	local_irq_restore(save_flags);
    @@ mm/slab.c: __do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid _
      	unsigned long save_flags;
      	void *objp;
     @@ mm/slab.c: slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
    + 		goto out;
      
    - 	cache_alloc_debugcheck_before(cachep, flags);
      	local_irq_save(save_flags);
     -	objp = __do_cache_alloc(cachep, flags, NUMA_NO_NODE);
     +	objp = __do_cache_alloc(cachep, flags, nodeid);
 3:  7db354b38ca6 =  3:  c84d648e0440 mm/slab_common: remove CONFIG_NUMA ifdefs for common kmalloc functions
 4:  cdb433c0c7eb =  4:  967dd62b2f55 mm/slab_common: cleanup kmalloc_track_caller()
 5:  46100ebddd00 !  5:  11b3a686bf31 mm/sl[au]b: factor out __do_kmalloc_node()
    @@ Commit message
         __kmalloc_node_track_caller().
     
         Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
    +    Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
     
      ## mm/slab.c ##
     @@ mm/slab.c: void __kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
 6:  efc756f837fa !  6:  f30428f4af3d mm/slab_common: fold kmalloc_order_trace() into kmalloc_large()
    @@ mm/slab_common.c: gfp_t kmalloc_fix_flags(gfp_t flags)
      
      	if (unlikely(flags & GFP_SLAB_BUG_MASK))
      		flags = kmalloc_fix_flags(flags);
    + 
    +-	flags |= __GFP_COMP;
    +-	page = alloc_pages(flags, order);
    ++	page = alloc_pages(flags | __GFP_COMP, order);
    + 	if (likely(page)) {
    + 		ret = page_address(page);
    + 		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
     @@ mm/slab_common.c: void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
      	ret = kasan_kmalloc_large(ret, size, flags);
      	/* As ret might get tagged, call kmemleak hook after KASAN. */
 7:  9e137d787056 =  7:  bd1a17ffce8c mm/slub: move kmalloc_large_node() to slab_common.c
 8:  e48d0b2adad4 !  8:  4a83cf5171f2 mm/slab_common: kmalloc_node: pass large requests to page allocator
    @@ Commit message
         __kmalloc_node_track_caller() when large objects are allocated.
     
         Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
    +    Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
     
      ## include/linux/slab.h ##
     @@ include/linux/slab.h: static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
    @@ mm/slab_common.c: void *kmalloc_large(size_t size, gfp_t flags)
      EXPORT_SYMBOL(kmalloc_large);
      
     -void *kmalloc_large_node(size_t size, gfp_t flags, int node)
    -+static __always_inline
    -+void *__kmalloc_large_node_notrace(size_t size, gfp_t flags, int node)
    ++void *kmalloc_large_node_notrace(size_t size, gfp_t flags, int node)
      {
      	struct page *page;
      	void *ptr = NULL;
    @@ mm/slab_common.c: void *kmalloc_large_node(size_t size, gfp_t flags, int node)
      	return ptr;
      }
     +
    -+void *kmalloc_large_node_notrace(size_t size, gfp_t flags, int node)
    -+{
    -+	return __kmalloc_large_node_notrace(size, flags, node);
    -+}
    -+
     +void *kmalloc_large_node(size_t size, gfp_t flags, int node)
     +{
    -+	void *ret = __kmalloc_large_node_notrace(size, flags, node);
    ++	void *ret = kmalloc_large_node_notrace(size, flags, node);
     +
     +	trace_kmalloc_node(_RET_IP_, ret, NULL, size,
     +			   PAGE_SIZE << get_order(size), flags, node);
 9:  7e813b9c9b0b !  9:  a94e5405bbc5 mm/slab_common: cleanup kmalloc_large()
    @@ Commit message
         mm/slab_common: cleanup kmalloc_large()
     
         Now that kmalloc_large() and kmalloc_large_node() do mostly same job,
    -    make kmalloc_large() wrapper of __kmalloc_large_node_notrace().
    +    make kmalloc_large() wrapper of kmalloc_large_node_notrace().
     
         In the meantime, add missing flag fix code in
    -    __kmalloc_large_node_notrace().
    +    kmalloc_large_node_notrace().
     
         Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
    +    Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
     
      ## mm/slab_common.c ##
     @@ mm/slab_common.c: gfp_t kmalloc_fix_flags(gfp_t flags)
    @@ mm/slab_common.c: gfp_t kmalloc_fix_flags(gfp_t flags)
     -	if (unlikely(flags & GFP_SLAB_BUG_MASK))
     -		flags = kmalloc_fix_flags(flags);
     -
    --	flags |= __GFP_COMP;
    --	page = alloc_pages(flags, order);
    +-	page = alloc_pages(flags | __GFP_COMP, order);
     -	if (likely(page)) {
     -		ret = page_address(page);
     -		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
    @@ mm/slab_common.c: gfp_t kmalloc_fix_flags(gfp_t flags)
     -}
     -EXPORT_SYMBOL(kmalloc_large);
      
    - static __always_inline
    - void *__kmalloc_large_node_notrace(size_t size, gfp_t flags, int node)
    -@@ mm/slab_common.c: void *__kmalloc_large_node_notrace(size_t size, gfp_t flags, int node)
    + void *kmalloc_large_node_notrace(size_t size, gfp_t flags, int node)
    + {
    +@@ mm/slab_common.c: void *kmalloc_large_node_notrace(size_t size, gfp_t flags, int node)
      	void *ptr = NULL;
      	unsigned int order = get_order(size);
      
    @@ mm/slab_common.c: void *__kmalloc_large_node_notrace(size_t size, gfp_t flags, i
      	flags |= __GFP_COMP;
      	page = alloc_pages_node(node, flags, order);
      	if (page) {
    -@@ mm/slab_common.c: void *__kmalloc_large_node_notrace(size_t size, gfp_t flags, int node)
    +@@ mm/slab_common.c: void *kmalloc_large_node_notrace(size_t size, gfp_t flags, int node)
      	return ptr;
      }
      
     +void *kmalloc_large(size_t size, gfp_t flags)
     +{
    -+	void *ret = __kmalloc_large_node_notrace(size, flags, NUMA_NO_NODE);
    ++	void *ret = kmalloc_large_node_notrace(size, flags, NUMA_NO_NODE);
     +
     +	trace_kmalloc(_RET_IP_, ret, NULL, size,
     +		      PAGE_SIZE << get_order(size), flags);
    @@ mm/slab_common.c: void *__kmalloc_large_node_notrace(size_t size, gfp_t flags, i
     +}
     +EXPORT_SYMBOL(kmalloc_large);
     +
    - void *kmalloc_large_node_notrace(size_t size, gfp_t flags, int node)
    + void *kmalloc_large_node(size_t size, gfp_t flags, int node)
      {
    - 	return __kmalloc_large_node_notrace(size, flags, node);
    + 	void *ret = kmalloc_large_node_notrace(size, flags, node);
10:  6ad83caba0a7 ! 10:  c46928674558 mm/slab: kmalloc: pass requests larger than order-1 page to page allocator
    @@ Commit message
         maintenance of common code.
     
         Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
    +    Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
     
      ## include/linux/slab.h ##
     @@ include/linux/slab.h: static inline unsigned int arch_slab_minalign(void)
11:  e5b712dc374c ! 11:  3215ee05c450 mm/sl[au]b: introduce common alloc/free functions without tracepoint
    @@ Commit message
         functions that does not have tracepoint.
     
         Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
    +    Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
     
      ## mm/slab.c ##
     @@ mm/slab.c: void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
12:  e44cd126a340 ! 12:  4d70e7590d3a mm/sl[au]b: generalize kmalloc subsystem
    @@ Commit message
         kfree(), __ksize(), __kmalloc(), __kmalloc_node() and move them
         to slab_common.c.
     
    +    In the meantime, rename kmalloc_large_node_notrace()
    +    to __kmalloc_large_node() and make it static as it's now only called in
    +    slab_common.c.
    +
         Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
    +    Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
     
      ## mm/slab.c ##
     @@ mm/slab.c: void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
    @@ mm/slab.c: void __check_heap_object(const void *ptr, unsigned long n,
     -}
     -EXPORT_SYMBOL(__ksize);
     
    + ## mm/slab.h ##
    +@@ mm/slab.h: void create_kmalloc_caches(slab_flags_t);
    + /* Find the kmalloc slab corresponding for a certain size */
    + struct kmem_cache *kmalloc_slab(size_t, gfp_t);
    + 
    +-void *kmalloc_large_node_notrace(size_t size, gfp_t flags, int node);
    + void *__kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags,
    + 			      int node, size_t orig_size,
    + 			      unsigned long caller);
    +
      ## mm/slab_common.c ##
     @@ mm/slab_common.c: void free_large_kmalloc(struct folio *folio, void *object)
      			      -(PAGE_SIZE << order));
      	__free_pages(folio_page(folio, 0), order);
      }
     +
    ++static void *__kmalloc_large_node(size_t size, gfp_t flags, int node);
     +static __always_inline
     +void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
     +{
    @@ mm/slab_common.c: void free_large_kmalloc(struct folio *folio, void *object)
     +	void *ret;
     +
     +	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
    -+		ret = kmalloc_large_node_notrace(size, flags, node);
    ++		ret = __kmalloc_large_node(size, flags, node);
     +		trace_kmalloc_node(caller, ret, NULL,
     +				   size, PAGE_SIZE << get_order(size),
     +				   flags, node);
    @@ mm/slab_common.c: void free_large_kmalloc(struct folio *folio, void *object)
      #endif /* !CONFIG_SLOB */
      
      gfp_t kmalloc_fix_flags(gfp_t flags)
    +@@ mm/slab_common.c: gfp_t kmalloc_fix_flags(gfp_t flags)
    +  * know the allocation order to free the pages properly in kfree.
    +  */
    + 
    +-void *kmalloc_large_node_notrace(size_t size, gfp_t flags, int node)
    ++void *__kmalloc_large_node(size_t size, gfp_t flags, int node)
    + {
    + 	struct page *page;
    + 	void *ptr = NULL;
    +@@ mm/slab_common.c: void *kmalloc_large_node_notrace(size_t size, gfp_t flags, int node)
    + 
    + void *kmalloc_large(size_t size, gfp_t flags)
    + {
    +-	void *ret = kmalloc_large_node_notrace(size, flags, NUMA_NO_NODE);
    ++	void *ret = __kmalloc_large_node(size, flags, NUMA_NO_NODE);
    + 
    + 	trace_kmalloc(_RET_IP_, ret, NULL, size,
    + 		      PAGE_SIZE << get_order(size), flags);
    +@@ mm/slab_common.c: EXPORT_SYMBOL(kmalloc_large);
    + 
    + void *kmalloc_large_node(size_t size, gfp_t flags, int node)
    + {
    +-	void *ret = kmalloc_large_node_notrace(size, flags, node);
    ++	void *ret = __kmalloc_large_node(size, flags, node);
    + 
    + 	trace_kmalloc_node(_RET_IP_, ret, NULL, size,
    + 			   PAGE_SIZE << get_order(size), flags, node);
     
      ## mm/slub.c ##
     @@ mm/slub.c: static int __init setup_slub_min_objects(char *str)
 -:  ------------ > 13:  da6880a20924 mm/sl[au]b: cleanup kmem_cache_alloc[_node]_trace()
13:  a137bfbdb06b ! 14:  ef7a0f0d58db mm/slab_common: unify NUMA and UMA version of tracepoints
    @@ Commit message
         event classes does not makes sense at all.
     
         Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
    +    Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
     
      ## include/trace/events/kmem.h ##
     @@
    @@ mm/slab.c: void *__kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_l
      
      	return ret;
      }
    -@@ mm/slab.c: kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
    - 
    - 	ret = kasan_kmalloc(cachep, ret, size, flags);
    - 	trace_kmalloc(_RET_IP_, ret, cachep,
    --		      size, cachep->size, flags);
    -+		      size, cachep->size, flags, NUMA_NO_NODE);
    - 	return ret;
    - }
    - EXPORT_SYMBOL(kmem_cache_alloc_trace);
     @@ mm/slab.c: void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
      {
      	void *ret = slab_alloc_node(cachep, NULL, flags, nodeid, cachep->object_size, _RET_IP_);
    @@ mm/slab.c: void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, i
     -	trace_kmem_cache_alloc_node(_RET_IP_, ret, cachep,
     -				    cachep->object_size, cachep->size,
     -				    flags, nodeid);
    -+	trace_kmem_cache_alloc(_RET_IP_, ret, cachep,
    -+			       cachep->object_size, cachep->size,
    -+			       flags, nodeid);
    - 
    - 	return ret;
    - }
    -@@ mm/slab.c: void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
    - 	ret = slab_alloc_node(cachep, NULL, flags, nodeid, size, _RET_IP_);
    ++	trace_kmem_cache_alloc(_RET_IP_, ret, cachep, cachep->object_size,
    ++			       cachep->size, flags, nodeid);
      
    - 	ret = kasan_kmalloc(cachep, ret, size, flags);
    --	trace_kmalloc_node(_RET_IP_, ret, cachep,
    --			   size, cachep->size,
    --			   flags, nodeid);
    -+	trace_kmalloc(_RET_IP_, ret, cachep,
    -+		      size, cachep->size,
    -+		      flags, nodeid);
      	return ret;
      }
    - EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
     
      ## mm/slab_common.c ##
     @@ mm/slab_common.c: void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller
      
      	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
    - 		ret = kmalloc_large_node_notrace(size, flags, node);
    + 		ret = __kmalloc_large_node(size, flags, node);
     -		trace_kmalloc_node(caller, ret, NULL,
     -				   size, PAGE_SIZE << get_order(size),
     -				   flags, node);
    -+		trace_kmalloc(_RET_IP_, ret, NULL,
    -+			      size, PAGE_SIZE << get_order(size),
    -+			      flags, node);
    ++		trace_kmalloc(_RET_IP_, ret, NULL, size,
    ++			      PAGE_SIZE << get_order(size), flags, node);
      		return ret;
      	}
      
    @@ mm/slab_common.c: void *__do_kmalloc_node(size_t size, gfp_t flags, int node, un
      	ret = kasan_kmalloc(s, ret, size, flags);
     -	trace_kmalloc_node(caller, ret, s, size,
     -			   s->size, flags, node);
    -+	trace_kmalloc(_RET_IP_, ret, s, size,
    -+		      s->size, flags, node);
    ++	trace_kmalloc(_RET_IP_, ret, s, size, s->size, flags, node);
      	return ret;
      }
      
    +@@ mm/slab_common.c: void *kmalloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
    + 	void *ret = __kmem_cache_alloc_node(s, gfpflags, NUMA_NO_NODE,
    + 					    size, _RET_IP_);
    + 
    +-	trace_kmalloc_node(_RET_IP_, ret, s, size, s->size,
    +-			   gfpflags, NUMA_NO_NODE);
    ++	trace_kmalloc(_RET_IP_, ret, s, size, s->size, gfpflags, NUMA_NO_NODE);
    + 
    + 	ret = kasan_kmalloc(s, ret, size, gfpflags);
    + 	return ret;
    +@@ mm/slab_common.c: void *kmalloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
    + {
    + 	void *ret = __kmem_cache_alloc_node(s, gfpflags, node, size, _RET_IP_);
    + 
    +-	trace_kmalloc_node(_RET_IP_, ret, s, size, s->size, gfpflags, node);
    ++	trace_kmalloc(_RET_IP_, ret, s, size, s->size, gfpflags, node);
    + 
    + 	ret = kasan_kmalloc(s, ret, size, gfpflags);
    + 	return ret;
     @@ mm/slab_common.c: void *kmalloc_large(size_t size, gfp_t flags)
    - 	void *ret = __kmalloc_large_node_notrace(size, flags, NUMA_NO_NODE);
    + 	void *ret = __kmalloc_large_node(size, flags, NUMA_NO_NODE);
      
      	trace_kmalloc(_RET_IP_, ret, NULL, size,
     -		      PAGE_SIZE << get_order(size), flags);
    @@ mm/slab_common.c: void *kmalloc_large(size_t size, gfp_t flags)
      EXPORT_SYMBOL(kmalloc_large);
     @@ mm/slab_common.c: void *kmalloc_large_node(size_t size, gfp_t flags, int node)
      {
    - 	void *ret = __kmalloc_large_node_notrace(size, flags, node);
    + 	void *ret = __kmalloc_large_node(size, flags, node);
      
     -	trace_kmalloc_node(_RET_IP_, ret, NULL, size,
     -			   PAGE_SIZE << get_order(size), flags, node);
    @@ mm/slob.c: __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long cal
      
     -		trace_kmalloc_node(caller, ret, NULL,
     -				   size, size + minalign, gfp, node);
    -+		trace_kmalloc(caller, ret, NULL,
    -+			      size, size + minalign, gfp, node);
    ++		trace_kmalloc(caller, ret, NULL, size,
    ++			      size + minalign, gfp, node);
      	} else {
      		unsigned int order = get_order(size);
      
    @@ mm/slob.c: __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long cal
      
     -		trace_kmalloc_node(caller, ret, NULL,
     -				   size, PAGE_SIZE << order, gfp, node);
    -+		trace_kmalloc(caller, ret, NULL,
    -+			      size, PAGE_SIZE << order, gfp, node);
    ++		trace_kmalloc(caller, ret, NULL, size,
    ++			      PAGE_SIZE << order, gfp, node);
      	}
      
      	kmemleak_alloc(ret, size, 1, gfp);
    @@ mm/slub.c: void *__kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *l
     -				s->size, gfpflags);
     +				s->size, gfpflags, NUMA_NO_NODE);
      
    - 	return ret;
    - }
    -@@ mm/slub.c: void *__kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags,
    - void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
    - {
    - 	void *ret = slab_alloc(s, NULL, gfpflags, _RET_IP_, size);
    --	trace_kmalloc(_RET_IP_, ret, s, size, s->size, gfpflags);
    -+	trace_kmalloc(_RET_IP_, ret, s, size, s->size, gfpflags, NUMA_NO_NODE);
    - 	ret = kasan_kmalloc(s, ret, size, gfpflags);
      	return ret;
      }
     @@ mm/slub.c: void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
    @@ mm/slub.c: void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int
      
     -	trace_kmem_cache_alloc_node(_RET_IP_, ret, s,
     -				    s->object_size, s->size, gfpflags, node);
    -+	trace_kmem_cache_alloc(_RET_IP_, ret, s,
    -+			       s->object_size, s->size, gfpflags, node);
    ++	trace_kmem_cache_alloc(_RET_IP_, ret, s, s->object_size,
    ++			       s->size, gfpflags, node);
      
      	return ret;
      }
    -@@ mm/slub.c: void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
    - {
    - 	void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, size);
    - 
    --	trace_kmalloc_node(_RET_IP_, ret, s,
    --			   size, s->size, gfpflags, node);
    -+	trace_kmalloc(_RET_IP_, ret, s,
    -+		      size, s->size, gfpflags, node);
    - 
    - 	ret = kasan_kmalloc(s, ret, size, gfpflags);
    - 	return ret;
14:  6d2b911a8274 ! 15:  998f51e44ff8 mm/slab_common: drop kmem_alloc & avoid dereferencing fields when not using
    @@ Commit message
              gfp flag is enough to know if it's accounted or not.
            - Avoid dereferencing s->object_size and s->size when not using kmem_cache_alloc event.
            - Avoid dereferencing s->name in when not using kmem_cache_free event.
    +       - Adjust s->size to SLOB_UNITS(s->size) * SLOB_UNIT in SLOB
     
    +    Cc: Vasily Averin <vasily.averin@linux.dev>
         Suggested-by: Vlastimil Babka <vbabka@suse.cz>
         Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
    +    Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
     
      ## include/trace/events/kmem.h ##
     @@
    @@ include/trace/events/kmem.h: DECLARE_EVENT_CLASS(kmem_alloc,
      		__entry->gfp_flags	= (__force unsigned long)gfp_flags;
      		__entry->node		= node;
      		__entry->accounted	= IS_ENABLED(CONFIG_MEMCG_KMEM) ?
    + 					  ((gfp_flags & __GFP_ACCOUNT) ||
    +-					  (s && s->flags & SLAB_ACCOUNT)) : false;
    ++					  (s->flags & SLAB_ACCOUNT)) : false;
    + 	),
    + 
    + 	TP_printk("call_site=%pS ptr=%p bytes_req=%zu bytes_alloc=%zu gfp_flags=%s node=%d accounted=%s",
     @@ include/trace/events/kmem.h: DECLARE_EVENT_CLASS(kmem_alloc,
      		__entry->accounted ? "true" : "false")
      );
    @@ include/trace/events/kmem.h: DECLARE_EVENT_CLASS(kmem_alloc,
     +		__field(	size_t,		bytes_alloc	)
     +		__field(	unsigned long,	gfp_flags	)
     +		__field(	int,		node		)
    -+		__field(	bool,		accounted	)
     +	),
      
     -	TP_PROTO(unsigned long call_site, const void *ptr,
    @@ include/trace/events/kmem.h: DECLARE_EVENT_CLASS(kmem_alloc,
     +		__entry->bytes_alloc	= bytes_alloc;
     +		__entry->gfp_flags	= (__force unsigned long)gfp_flags;
     +		__entry->node		= node;
    -+		__entry->accounted	= IS_ENABLED(CONFIG_MEMCG_KMEM) ?
    -+					  (gfp_flags & __GFP_ACCOUNT) : false;
     +	),
      
     -	TP_ARGS(call_site, ptr, s, bytes_req, bytes_alloc, gfp_flags, node)
    @@ include/trace/events/kmem.h: DECLARE_EVENT_CLASS(kmem_alloc,
     +		__entry->bytes_alloc,
     +		show_gfp_flags(__entry->gfp_flags),
     +		__entry->node,
    -+		__entry->accounted ? "true" : "false")
    ++		(IS_ENABLED(CONFIG_MEMCG_KMEM) &&
    ++		 (__entry->gfp_flags & __GFP_ACCOUNT)) ? "true" : "false")
      );
      
      TRACE_EVENT(kfree,
    @@ mm/slab.c: void *__kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_l
      
      	return ret;
      }
    -@@ mm/slab.c: kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
    - 	ret = slab_alloc(cachep, NULL, flags, size, _RET_IP_);
    - 
    - 	ret = kasan_kmalloc(cachep, ret, size, flags);
    --	trace_kmalloc(_RET_IP_, ret, cachep,
    --		      size, cachep->size, flags, NUMA_NO_NODE);
    -+	trace_kmalloc(_RET_IP_, ret,
    -+		      size, cachep->size,
    -+		      flags, NUMA_NO_NODE);
    - 	return ret;
    - }
    - EXPORT_SYMBOL(kmem_cache_alloc_trace);
     @@ mm/slab.c: void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
      {
      	void *ret = slab_alloc_node(cachep, NULL, flags, nodeid, cachep->object_size, _RET_IP_);
      
    --	trace_kmem_cache_alloc(_RET_IP_, ret, cachep,
    --			       cachep->object_size, cachep->size,
    --			       flags, nodeid);
    +-	trace_kmem_cache_alloc(_RET_IP_, ret, cachep, cachep->object_size,
    +-			       cachep->size, flags, nodeid);
     +	trace_kmem_cache_alloc(_RET_IP_, ret, cachep, flags, nodeid);
      
      	return ret;
      }
    -@@ mm/slab.c: void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
    - 	ret = slab_alloc_node(cachep, NULL, flags, nodeid, size, _RET_IP_);
    - 
    - 	ret = kasan_kmalloc(cachep, ret, size, flags);
    --	trace_kmalloc(_RET_IP_, ret, cachep,
    -+	trace_kmalloc(_RET_IP_, ret,
    - 		      size, cachep->size,
    - 		      flags, nodeid);
    - 	return ret;
     @@ mm/slab.c: void kmem_cache_free(struct kmem_cache *cachep, void *objp)
      	if (!cachep)
      		return;
    @@ mm/slab_common.c
     @@ mm/slab_common.c: void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller
      
      	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
    - 		ret = kmalloc_large_node_notrace(size, flags, node);
    --		trace_kmalloc(_RET_IP_, ret, NULL,
    -+		trace_kmalloc(_RET_IP_, ret,
    - 			      size, PAGE_SIZE << get_order(size),
    - 			      flags, node);
    + 		ret = __kmalloc_large_node(size, flags, node);
    +-		trace_kmalloc(_RET_IP_, ret, NULL, size,
    ++		trace_kmalloc(_RET_IP_, ret, size,
    + 			      PAGE_SIZE << get_order(size), flags, node);
      		return ret;
    + 	}
     @@ mm/slab_common.c: void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller
      
      	ret = __kmem_cache_alloc_node(s, flags, node, size, caller);
      	ret = kasan_kmalloc(s, ret, size, flags);
    --	trace_kmalloc(_RET_IP_, ret, s, size,
    --		      s->size, flags, node);
    -+	trace_kmalloc(_RET_IP_, ret,
    -+		      size, s->size,
    -+		      flags, node);
    +-	trace_kmalloc(_RET_IP_, ret, s, size, s->size, flags, node);
    ++	trace_kmalloc(_RET_IP_, ret, size, s->size, flags, node);
      	return ret;
      }
      
    +@@ mm/slab_common.c: void *kmalloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
    + 	void *ret = __kmem_cache_alloc_node(s, gfpflags, NUMA_NO_NODE,
    + 					    size, _RET_IP_);
    + 
    +-	trace_kmalloc(_RET_IP_, ret, s, size, s->size, gfpflags, NUMA_NO_NODE);
    ++	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags, NUMA_NO_NODE);
    + 
    + 	ret = kasan_kmalloc(s, ret, size, gfpflags);
    + 	return ret;
    +@@ mm/slab_common.c: void *kmalloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
    + {
    + 	void *ret = __kmem_cache_alloc_node(s, gfpflags, node, size, _RET_IP_);
    + 
    +-	trace_kmalloc(_RET_IP_, ret, s, size, s->size, gfpflags, node);
    ++	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags, node);
    + 
    + 	ret = kasan_kmalloc(s, ret, size, gfpflags);
    + 	return ret;
     @@ mm/slab_common.c: void *kmalloc_large(size_t size, gfp_t flags)
      {
    - 	void *ret = __kmalloc_large_node_notrace(size, flags, NUMA_NO_NODE);
    + 	void *ret = __kmalloc_large_node(size, flags, NUMA_NO_NODE);
      
     -	trace_kmalloc(_RET_IP_, ret, NULL, size,
     -		      PAGE_SIZE << get_order(size), flags, NUMA_NO_NODE);
    -+	trace_kmalloc(_RET_IP_, ret,
    -+		      size, PAGE_SIZE << get_order(size),
    ++	trace_kmalloc(_RET_IP_, ret, size, PAGE_SIZE << get_order(size),
     +		      flags, NUMA_NO_NODE);
      	return ret;
      }
      EXPORT_SYMBOL(kmalloc_large);
     @@ mm/slab_common.c: void *kmalloc_large_node(size_t size, gfp_t flags, int node)
      {
    - 	void *ret = __kmalloc_large_node_notrace(size, flags, node);
    + 	void *ret = __kmalloc_large_node(size, flags, node);
      
     -	trace_kmalloc(_RET_IP_, ret, NULL, size,
     -		      PAGE_SIZE << get_order(size), flags, node);
    -+	trace_kmalloc(_RET_IP_, ret,
    -+		      size, PAGE_SIZE << get_order(size),
    ++	trace_kmalloc(_RET_IP_, ret, size, PAGE_SIZE << get_order(size),
     +		      flags, node);
      	return ret;
      }
    @@ mm/slob.c: __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long cal
      		*m = size;
      		ret = (void *)m + minalign;
      
    --		trace_kmalloc(caller, ret, NULL,
    --			      size, size + minalign, gfp, node);
    +-		trace_kmalloc(caller, ret, NULL, size,
    +-			      size + minalign, gfp, node);
     +		trace_kmalloc(caller, ret, size, size + minalign, gfp, node);
      	} else {
      		unsigned int order = get_order(size);
    @@ mm/slob.c: __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long cal
      			gfp |= __GFP_COMP;
      		ret = slob_new_pages(gfp, order, node);
      
    --		trace_kmalloc(caller, ret, NULL,
    --			      size, PAGE_SIZE << order, gfp, node);
    +-		trace_kmalloc(caller, ret, NULL, size,
    +-			      PAGE_SIZE << order, gfp, node);
     +		trace_kmalloc(caller, ret, size, PAGE_SIZE << order, gfp, node);
      	}
      
      	kmemleak_alloc(ret, size, 1, gfp);
    +@@ mm/slob.c: int __kmem_cache_create(struct kmem_cache *c, slab_flags_t flags)
    + 		/* leave room for rcu footer at the end of object */
    + 		c->size += sizeof(struct slob_rcu);
    + 	}
    ++
    ++	/* Actual size allocated */
    ++	c->size = SLOB_UNITS(c->size) * SLOB_UNIT;
    + 	c->flags = flags;
    + 	return 0;
    + }
     @@ mm/slob.c: static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
      
      	if (c->size < PAGE_SIZE) {
    @@ mm/slub.c: void *__kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *l
     -				s->size, gfpflags, NUMA_NO_NODE);
     +	trace_kmem_cache_alloc(_RET_IP_, ret, s, gfpflags, NUMA_NO_NODE);
      
    - 	return ret;
    - }
    -@@ mm/slub.c: void *__kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags,
    - void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
    - {
    - 	void *ret = slab_alloc(s, NULL, gfpflags, _RET_IP_, size);
    --	trace_kmalloc(_RET_IP_, ret, s, size, s->size, gfpflags, NUMA_NO_NODE);
    -+	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags, NUMA_NO_NODE);
    - 	ret = kasan_kmalloc(s, ret, size, gfpflags);
      	return ret;
      }
     @@ mm/slub.c: void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
      {
      	void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, s->object_size);
      
    --	trace_kmem_cache_alloc(_RET_IP_, ret, s,
    --			       s->object_size, s->size, gfpflags, node);
    +-	trace_kmem_cache_alloc(_RET_IP_, ret, s, s->object_size,
    +-			       s->size, gfpflags, node);
     +	trace_kmem_cache_alloc(_RET_IP_, ret, s, gfpflags, node);
      
      	return ret;
      }
    -@@ mm/slub.c: void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
    - {
    - 	void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, size);
    - 
    --	trace_kmalloc(_RET_IP_, ret, s,
    --		      size, s->size, gfpflags, node);
    -+	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags, node);
    - 
    - 	ret = kasan_kmalloc(s, ret, size, gfpflags);
    - 	return ret;
     @@ mm/slub.c: void kmem_cache_free(struct kmem_cache *s, void *x)
      	s = cache_from_obj(s, x);
      	if (!s)
15:  566fdd67515d ! 16:  cd1a424103f5 mm/slab_common: move definition of __ksize() to mm/slab.h
    @@ Metadata
     Author: Hyeonggon Yoo <42.hyeyoo@gmail.com>
     
      ## Commit message ##
    -    mm/slab_common: move definition of __ksize() to mm/slab.h
    +    mm/slab_common: move declaration of __ksize() to mm/slab.h
     
         __ksize() is only called by KASAN. Remove export symbol and move
    -    definition to mm/slab.h as we don't want to grow its callers.
    +    declaration to mm/slab.h as we don't want to grow its callers.
     
         Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
         Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
    @@ mm/slab_common.c: size_t __ksize(const void *object)
      	return slab_ksize(folio_slab(folio)->slab_cache);
      }
     -EXPORT_SYMBOL(__ksize);
    - #endif /* !CONFIG_SLOB */
      
    - gfp_t kmalloc_fix_flags(gfp_t flags)
    + #ifdef CONFIG_TRACING
    + void *kmalloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
     
      ## mm/slob.c ##
     @@ mm/slob.c: size_t __ksize(const void *block)
16:  2c99a66a9307 ! 17:  11bd80a065e4 mm/sl[au]b: check if large object is valid in __ksize()
    @@ Metadata
      ## Commit message ##
         mm/sl[au]b: check if large object is valid in __ksize()
     
    -    __ksize() returns size of objects allocated from slab allocator.
    -    When invalid object is passed to __ksize(), returning zero
    -    prevents further memory corruption and makes caller be able to
    -    check if there is an error.
    -
         If address of large object is not beginning of folio or size of
    -    the folio is too small, it must be invalid. Return zero in such cases.
    +    the folio is too small, it must be invalid. BUG() in such cases.
     
    +    Cc: Marco Elver <elver@google.com>
         Suggested-by: Vlastimil Babka <vbabka@suse.cz>
         Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
    +    Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
     
      ## mm/slab_common.c ##
     @@ mm/slab_common.c: size_t __ksize(const void *object)
    @@ mm/slab_common.c: size_t __ksize(const void *object)
      
     -	if (unlikely(!folio_test_slab(folio)))
     +	if (unlikely(!folio_test_slab(folio))) {
    -+		if (WARN_ON(object != folio_address(folio) ||
    -+				folio_size(folio) <= KMALLOC_MAX_CACHE_SIZE))
    -+			return 0;
    ++		BUG_ON(folio_size(folio) <= KMALLOC_MAX_CACHE_SIZE);
    ++		BUG_ON(object != folio_address(folio));
      		return folio_size(folio);
     +	}
      
-- 
2.32.0


* Re: [PATCH] mm/slab: use kmalloc_node() for off slab freelist_idx_t array allocation
@ 2022-10-15 11:47 Guenter Roeck
  0 siblings, 0 replies; 30+ messages in thread
From: Guenter Roeck @ 2022-10-15 11:47 UTC (permalink / raw)
  To: Hyeonggon Yoo
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Roman Gushchin, linux-mm,
	linux-kernel

On Sat, Oct 15, 2022 at 01:34:29PM +0900, Hyeonggon Yoo wrote:
> After commit d6a71648dbc0 ("mm/slab: kmalloc: pass requests larger than
> order-1 page to page allocator"), SLAB passes large ( > PAGE_SIZE * 2)
> requests to buddy like SLUB does.
> 
> SLAB has been using kmalloc caches to allocate freelist_idx_t array for
> off slab caches. But after the commit, freelist_size can be bigger than
> KMALLOC_MAX_CACHE_SIZE.
> 
> Instead of keeping a pointer to the kmalloc cache, use kmalloc_node() and
> only check whether the kmalloc cache is off-slab during
> calculate_slab_order(). If freelist_size > KMALLOC_MAX_CACHE_SIZE, the
> looping condition cannot occur because the freelist_idx_t array is
> allocated directly from the buddy allocator.
> 
> Reported-by: Guenter Roeck <linux@roeck-us.net>
> Fixes: d6a71648dbc0 ("mm/slab: kmalloc: pass requests larger than order-1 page to page allocator")
> Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> ---
> 
> @Guenter:
> 	This fixes the issue on my emulation.
> 	Can you please test this on your environment?

Yes, that fixes the problem for me.

Tested-by: Guenter Roeck <linux@roeck-us.net>

Thanks,
Guenter

> 
>  include/linux/slab_def.h |  1 -
>  mm/slab.c                | 37 +++++++++++++++++++------------------
>  2 files changed, 19 insertions(+), 19 deletions(-)
> 
> diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
> index e24c9aff6fed..f0ffad6a3365 100644
> --- a/include/linux/slab_def.h
> +++ b/include/linux/slab_def.h
> @@ -33,7 +33,6 @@ struct kmem_cache {
>  
>  	size_t colour;			/* cache colouring range */
>  	unsigned int colour_off;	/* colour offset */
> -	struct kmem_cache *freelist_cache;
>  	unsigned int freelist_size;
>  
>  	/* constructor func */
> diff --git a/mm/slab.c b/mm/slab.c
> index a5486ff8362a..d1f6e2c64c2e 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -1619,7 +1619,7 @@ static void slab_destroy(struct kmem_cache *cachep, struct slab *slab)
>  	 * although actual page can be freed in rcu context
>  	 */
>  	if (OFF_SLAB(cachep))
> -		kmem_cache_free(cachep->freelist_cache, freelist);
> +		kfree(freelist);
>  }
>  
>  /*
> @@ -1671,21 +1671,27 @@ static size_t calculate_slab_order(struct kmem_cache *cachep,
>  		if (flags & CFLGS_OFF_SLAB) {
>  			struct kmem_cache *freelist_cache;
>  			size_t freelist_size;
> +			size_t freelist_cache_size;
>  
>  			freelist_size = num * sizeof(freelist_idx_t);
> -			freelist_cache = kmalloc_slab(freelist_size, 0u);
> -			if (!freelist_cache)
> -				continue;
> -
> -			/*
> -			 * Needed to avoid possible looping condition
> -			 * in cache_grow_begin()
> -			 */
> -			if (OFF_SLAB(freelist_cache))
> -				continue;
> +			if (freelist_size > KMALLOC_MAX_CACHE_SIZE) {
> +				freelist_cache_size = PAGE_SIZE << get_order(freelist_size);
> +			} else {
> +				freelist_cache = kmalloc_slab(freelist_size, 0u);
> +				if (!freelist_cache)
> +					continue;
> +				freelist_cache_size = freelist_cache->size;
> +
> +				/*
> +				 * Needed to avoid possible looping condition
> +				 * in cache_grow_begin()
> +				 */
> +				if (OFF_SLAB(freelist_cache))
> +					continue;
> +			}
>  
>  			/* check if off slab has enough benefit */
> -			if (freelist_cache->size > cachep->size / 2)
> +			if (freelist_cache_size > cachep->size / 2)
>  				continue;
>  		}
>  
> @@ -2061,11 +2067,6 @@ int __kmem_cache_create(struct kmem_cache *cachep, slab_flags_t flags)
>  		cachep->flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
>  #endif
>  
> -	if (OFF_SLAB(cachep)) {
> -		cachep->freelist_cache =
> -			kmalloc_slab(cachep->freelist_size, 0u);
> -	}
> -
>  	err = setup_cpu_cache(cachep, gfp);
>  	if (err) {
>  		__kmem_cache_release(cachep);
> @@ -2292,7 +2293,7 @@ static void *alloc_slabmgmt(struct kmem_cache *cachep,
>  		freelist = NULL;
>  	else if (OFF_SLAB(cachep)) {
>  		/* Slab management obj is off-slab. */
> -		freelist = kmem_cache_alloc_node(cachep->freelist_cache,
> +		freelist = kmalloc_node(cachep->freelist_size,
>  					      local_flags, nodeid);
>  	} else {
>  		/* We will use last bytes at the slab for freelist */
> -- 
> 2.32.0



Thread overview: 30+ messages
2022-08-17 10:18 [PATCH v4 00/17] common kmalloc v4 Hyeonggon Yoo
2022-08-17 10:18 ` [PATCH v4 01/17] mm/slab: move NUMA-related code to __do_cache_alloc() Hyeonggon Yoo
2022-08-17 10:18 ` [PATCH v4 02/17] mm/slab: cleanup slab_alloc() and slab_alloc_node() Hyeonggon Yoo
2022-08-17 10:18 ` [PATCH v4 03/17] mm/slab_common: remove CONFIG_NUMA ifdefs for common kmalloc functions Hyeonggon Yoo
2022-08-17 10:18 ` [PATCH v4 04/17] mm/slab_common: cleanup kmalloc_track_caller() Hyeonggon Yoo
2022-08-17 10:18 ` [PATCH v4 05/17] mm/sl[au]b: factor out __do_kmalloc_node() Hyeonggon Yoo
2022-08-17 10:18 ` [PATCH v4 06/17] mm/slab_common: fold kmalloc_order_trace() into kmalloc_large() Hyeonggon Yoo
2022-08-17 10:18 ` [PATCH v4 07/17] mm/slub: move kmalloc_large_node() to slab_common.c Hyeonggon Yoo
2022-08-17 10:18 ` [PATCH v4 08/17] mm/slab_common: kmalloc_node: pass large requests to page allocator Hyeonggon Yoo
2022-08-17 10:18 ` [PATCH v4 09/17] mm/slab_common: cleanup kmalloc_large() Hyeonggon Yoo
2022-08-17 10:18 ` [PATCH v4 10/17] mm/slab: kmalloc: pass requests larger than order-1 page to page allocator Hyeonggon Yoo
2022-10-14 20:58   ` Guenter Roeck
2022-10-14 23:48     ` Hyeonggon Yoo
2022-10-15 19:39       ` Vlastimil Babka
2022-10-16  9:10         ` Hyeonggon Yoo
2022-10-15  4:34     ` [PATCH] mm/slab: use kmalloc_node() for off slab freelist_idx_t array allocation Hyeonggon Yoo
2022-08-17 10:18 ` [PATCH v4 11/17] mm/sl[au]b: introduce common alloc/free functions without tracepoint Hyeonggon Yoo
2022-08-17 10:18 ` [PATCH v4 12/17] mm/sl[au]b: generalize kmalloc subsystem Hyeonggon Yoo
2022-08-17 10:18 ` [PATCH v4 13/17] mm/sl[au]b: cleanup kmem_cache_alloc[_node]_trace() Hyeonggon Yoo
2022-08-23 15:04   ` Vlastimil Babka
2022-08-24  3:54     ` Hyeonggon Yoo
2022-08-17 10:18 ` [PATCH v4 14/17] mm/slab_common: unify NUMA and UMA version of tracepoints Hyeonggon Yoo
2022-08-17 10:18 ` [PATCH v4 15/17] mm/slab_common: drop kmem_alloc & avoid dereferencing fields when not using Hyeonggon Yoo
2022-08-17 10:18 ` [PATCH v4 16/17] mm/slab_common: move declaration of __ksize() to mm/slab.h Hyeonggon Yoo
2022-08-17 10:18 ` [PATCH v4 17/17] mm/sl[au]b: check if large object is valid in __ksize() Hyeonggon Yoo
2022-08-23 15:12   ` Vlastimil Babka
2022-08-24  3:52     ` Hyeonggon Yoo
2022-08-23 15:16 ` [PATCH v4 00/17] common kmalloc v4 Vlastimil Babka
2022-08-24  3:58   ` Hyeonggon Yoo
2022-10-15 11:47 [PATCH] mm/slab: use kmalloc_node() for off slab freelist_idx_t array allocation Guenter Roeck
