From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: Matthew Wilcox <willy@infradead.org>,
Christoph Lameter <cl@linux.com>,
David Rientjes <rientjes@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Pekka Enberg <penberg@kernel.org>,
linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
patches@lists.linux.dev, Alexander Potapenko <glider@google.com>,
Andrey Konovalov <andreyknvl@gmail.com>,
Andrey Ryabinin <ryabinin.a.a@gmail.com>,
Andy Lutomirski <luto@kernel.org>, Borislav Petkov <bp@alien8.de>,
cgroups@vger.kernel.org,
Dave Hansen <dave.hansen@linux.intel.com>,
David Woodhouse <dwmw2@infradead.org>,
Dmitry Vyukov <dvyukov@google.com>,
"H. Peter Anvin" <hpa@zytor.com>, Ingo Molnar <mingo@redhat.com>,
iommu@lists.linux-foundation.org, Joerg Roedel <joro@8bytes.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Julia Lawall <julia.lawall@inria.fr>,
kasan-dev@googlegroups.com, Lu Baolu <baolu.lu@linux.intel.com>,
Luis Chamberlain <mcgrof@kernel.org>,
Marco Elver <elver@google.com>, Michal Hocko <mhocko@kernel.org>,
Minchan Kim <minchan@kernel.org>, Nitin Gupta <ngupta@vflare.org>,
Peter Zijlstra <peterz@infradead.org>,
Sergey Senozhatsky <senozhatsky@chromium.org>,
Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
Thomas Gleixner <tglx@linutronix.de>,
Vladimir Davydov <vdavydov.dev@gmail.com>,
Will Deacon <will@kernel.org>,
x86@kernel.org
Subject: Re: [PATCH v2 00/33] Separate struct slab from struct page
Date: Thu, 16 Dec 2021 15:00:42 +0000 [thread overview]
Message-ID: <YbtUmi5kkhmlXEB1@ip-172-31-30-232.ap-northeast-1.compute.internal> (raw)
In-Reply-To: <4c3dfdfa-2e19-a9a7-7945-3d75bc87ca05@suse.cz>
On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
> On 12/1/21 19:14, Vlastimil Babka wrote:
> > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> > this cover letter.
> >
> > Series also available in git, based on 5.16-rc3:
> > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
>
> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
Reviewing the whole patch series is taking longer than I thought.
I'll try to review and test the rest of the patches when I have time.
I added Tested-by when the kernel builds okay and kselftests do not
break the kernel on my machine
(with CONFIG_SLAB/SLUB/SLOB depending on the patch).
Let me know if you know a better way to test a patch.
# mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only when enabled
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Comment:
Works with both SLUB_CPU_PARTIAL and !SLUB_CPU_PARTIAL.
btw, do we need the slabs_cpu_partial attribute when we don't use
cpu partials (!SLUB_CPU_PARTIAL)?
# mm/slub: Simplify struct slab slabs field definition
Comment:
This is how struct page looks on the top of v3r3 branch:
struct page {
[...]
struct { /* slab, slob and slub */
union {
struct list_head slab_list;
struct { /* Partial pages */
struct page *next;
#ifdef CONFIG_64BIT
int pages; /* Nr of pages left */
#else
short int pages;
#endif
};
};
[...]
It's not consistent with struct slab.
I think this is because "mm: Remove slab from struct page" was dropped.
Would you update some of the patches?
# mm/sl*b: Differentiate struct slab fields by sl*b implementations
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Works with SL[AUO]B on my machine and makes the code much better.
# mm/slob: Convert SLOB to use struct slab and struct folio
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
It still works fine on SLOB.
# mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
# mm/slub: Convert __free_slab() to use struct slab
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Thanks,
Hyeonggon.
>
> 1: 10b656f9eb1e = 1: 10b656f9eb1e mm: add virt_to_folio() and folio_address()
> 2: 5e6ad846acf1 = 2: 5e6ad846acf1 mm/slab: Dissolve slab_map_pages() in its caller
> 3: 48d4e9407aa0 = 3: 48d4e9407aa0 mm/slub: Make object_err() static
> 4: fe1e19081321 = 4: fe1e19081321 mm: Split slab into its own type
> 5: af7fd46fbb9b = 5: af7fd46fbb9b mm: Add account_slab() and unaccount_slab()
> 6: 7ed088d601d9 = 6: 7ed088d601d9 mm: Convert virt_to_cache() to use struct slab
> 7: 1d41188b9401 = 7: 1d41188b9401 mm: Convert __ksize() to struct slab
> 8: 5d9d1231461f ! 8: 8fd22e0b086e mm: Use struct slab in kmem_obj_info()
> @@ Commit message
> slab type instead of the page type, we make it obvious that this can
> only be called for slabs.
>
> + [ vbabka@suse.cz: also convert the related kmem_valid_obj() to folios ]
> +
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
>
> @@ mm/slab.h: struct kmem_obj_info {
> #endif /* MM_SLAB_H */
>
> ## mm/slab_common.c ##
> +@@ mm/slab_common.c: bool slab_is_available(void)
> + */
> + bool kmem_valid_obj(void *object)
> + {
> +- struct page *page;
> ++ struct folio *folio;
> +
> + /* Some arches consider ZERO_SIZE_PTR to be a valid address. */
> + if (object < (void *)PAGE_SIZE || !virt_addr_valid(object))
> + return false;
> +- page = virt_to_head_page(object);
> +- return PageSlab(page);
> ++ folio = virt_to_folio(object);
> ++ return folio_test_slab(folio);
> + }
> + EXPORT_SYMBOL_GPL(kmem_valid_obj);
> +
> @@ mm/slab_common.c: void kmem_dump_obj(void *object)
> {
> char *cp = IS_ENABLED(CONFIG_MMU) ? "" : "/vmalloc";
> @@ mm/slub.c: int __kmem_cache_shutdown(struct kmem_cache *s)
> objp = base + s->size * objnr;
> kpp->kp_objp = objp;
> - if (WARN_ON_ONCE(objp < base || objp >= base + page->objects * s->size || (objp - base) % s->size) ||
> -+ if (WARN_ON_ONCE(objp < base || objp >= base + slab->objects * s->size || (objp - base) % s->size) ||
> ++ if (WARN_ON_ONCE(objp < base || objp >= base + slab->objects * s->size
> ++ || (objp - base) % s->size) ||
> !(s->flags & SLAB_STORE_USER))
> return;
> #ifdef CONFIG_SLUB_DEBUG
> 9: 3aef771be335 ! 9: c97e73c3b6c2 mm: Convert check_heap_object() to use struct slab
> @@ mm/slab.h: struct kmem_obj_info {
> +#else
> +static inline
> +void __check_heap_object(const void *ptr, unsigned long n,
> -+ const struct slab *slab, bool to_user) { }
> ++ const struct slab *slab, bool to_user)
> ++{
> ++}
> +#endif
> +
> #endif /* MM_SLAB_H */
> 10: 2253e45e6bef = 10: da05e0f7179c mm/slub: Convert detached_freelist to use a struct slab
> 11: f28202bc27ba = 11: 383887e77104 mm/slub: Convert kfree() to use a struct slab
> 12: 31b58b1e914f = 12: c46be093c637 mm/slub: Convert __slab_lock() and __slab_unlock() to struct slab
> 13: 636406a3ad59 = 13: 49dbbf917052 mm/slub: Convert print_page_info() to print_slab_info()
> 14: 3b49efda3b6f = 14: 4bb0c932156a mm/slub: Convert alloc_slab_page() to return a struct slab
> 15: 61a195526d3b ! 15: 4b9761b5cfab mm/slub: Convert __free_slab() to use struct slab
> @@ mm/slub.c: static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int n
>
> - __ClearPageSlabPfmemalloc(page);
> - __ClearPageSlab(page);
> +- /* In union with page->mapping where page allocator expects NULL */
> +- page->slab_cache = NULL;
> + __slab_clear_pfmemalloc(slab);
> + __folio_clear_slab(folio);
> - /* In union with page->mapping where page allocator expects NULL */
> -- page->slab_cache = NULL;
> -+ slab->slab_cache = NULL;
> ++ folio->mapping = NULL;
> if (current->reclaim_state)
> current->reclaim_state->reclaimed_slab += pages;
> - unaccount_slab(page_slab(page), order, s);
> 16: 987c7ed31580 = 16: f384ec918065 mm/slub: Convert pfmemalloc_match() to take a struct slab
> 17: cc742564237e ! 17: 06738ade4e17 mm/slub: Convert most struct page to struct slab by spatch
> @@ Commit message
>
> // Options: --include-headers --no-includes --smpl-spacing include/linux/slub_def.h mm/slub.c
> // Note: needs coccinelle 1.1.1 to avoid breaking whitespace, and ocaml for the
> - // embedded script script
> + // embedded script
>
> // build list of functions to exclude from applying the next rule
> @initialize:ocaml@
> 18: b45acac9aace = 18: 1a4f69a4cced mm/slub: Finish struct page to struct slab conversion
> 19: 76c3eeb39684 ! 19: 1d62d706e884 mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
> @@ mm/slab.c: slab_out_of_memory(struct kmem_cache *cachep, gfp_t gfpflags, int nod
> - __ClearPageSlabPfmemalloc(page);
> - __ClearPageSlab(page);
> - page_mapcount_reset(page);
> +- /* In union with page->mapping where page allocator expects NULL */
> +- page->slab_cache = NULL;
> + BUG_ON(!folio_test_slab(folio));
> + __slab_clear_pfmemalloc(slab);
> + __folio_clear_slab(folio);
> + page_mapcount_reset(folio_page(folio, 0));
> - /* In union with page->mapping where page allocator expects NULL */
> -- page->slab_cache = NULL;
> -+ slab->slab_cache = NULL;
> ++ folio->mapping = NULL;
>
> if (current->reclaim_state)
> current->reclaim_state->reclaimed_slab += 1 << order;
> 20: ed6144dbebce ! 20: fd4c3aabacd3 mm/slab: Convert most struct page to struct slab by spatch
> @@ Commit message
>
> // Options: --include-headers --no-includes --smpl-spacing mm/slab.c
> // Note: needs coccinelle 1.1.1 to avoid breaking whitespace, and ocaml for the
> - // embedded script script
> + // embedded script
>
> // build list of functions for applying the next rule
> @initialize:ocaml@
> 21: 17fb81e601e6 = 21: b59720b2edba mm/slab: Finish struct page to struct slab conversion
> 22: 4e8d1faebc24 ! 22: 65ced071c3e7 mm: Convert struct page to struct slab in functions used by other subsystems
> @@ Commit message
> ,...)
>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> + Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
> Cc: Julia Lawall <julia.lawall@inria.fr>
> Cc: Luis Chamberlain <mcgrof@kernel.org>
> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
> 23: eefa12e18a92 = 23: c9c8dee01e5d mm/memcg: Convert slab objcgs from struct page to struct slab
> 24: fa5ba4107ce2 ! 24: def731137335 mm/slob: Convert SLOB to use struct slab
> @@ Metadata
> Author: Matthew Wilcox (Oracle) <willy@infradead.org>
>
> ## Commit message ##
> - mm/slob: Convert SLOB to use struct slab
> + mm/slob: Convert SLOB to use struct slab and struct folio
>
> - Use struct slab throughout the slob allocator.
> + Use struct slab throughout the slob allocator. Where non-slab page can appear
> + use struct folio instead of struct page.
>
> [ vbabka@suse.cz: don't introduce wrappers for PageSlobFree in mm/slab.h just
> for the single callers being wrappers in mm/slob.c ]
>
> + [ Hyeonggon Yoo <42.hyeyoo@gmail.com>: fix NULL pointer deference ]
> +
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
>
> ## mm/slob.c ##
> +@@
> + * If kmalloc is asked for objects of PAGE_SIZE or larger, it calls
> + * alloc_pages() directly, allocating compound pages so the page order
> + * does not have to be separately tracked.
> +- * These objects are detected in kfree() because PageSlab()
> ++ * These objects are detected in kfree() because folio_test_slab()
> + * is false for them.
> + *
> + * SLAB is emulated on top of SLOB by simply calling constructors and
> @@ mm/slob.c: static LIST_HEAD(free_slob_large);
> /*
> * slob_page_free: true for pages on free_slob_pages list.
> @@ mm/slob.c: static void *slob_page_alloc(struct page *sp, size_t size, int align,
> int align_offset)
> {
> - struct page *sp;
> ++ struct folio *folio;
> + struct slab *sp;
> struct list_head *slob_list;
> slob_t *b = NULL;
> @@ mm/slob.c: static void *slob_alloc(size_t size, gfp_t gfp, int align, int node,
> return NULL;
> - sp = virt_to_page(b);
> - __SetPageSlab(sp);
> -+ sp = virt_to_slab(b);
> -+ __SetPageSlab(slab_page(sp));
> ++ folio = virt_to_folio(b);
> ++ __folio_set_slab(folio);
> ++ sp = folio_slab(folio);
>
> spin_lock_irqsave(&slob_lock, flags);
> sp->units = SLOB_UNITS(PAGE_SIZE);
> @@ mm/slob.c: static void slob_free(void *block, int size)
> spin_unlock_irqrestore(&slob_lock, flags);
> - __ClearPageSlab(sp);
> - page_mapcount_reset(sp);
> -+ __ClearPageSlab(slab_page(sp));
> ++ __folio_clear_slab(slab_folio(sp));
> + page_mapcount_reset(slab_page(sp));
> slob_free_pages(b, 0);
> return;
> }
> +@@ mm/slob.c: EXPORT_SYMBOL(__kmalloc_node_track_caller);
> +
> + void kfree(const void *block)
> + {
> +- struct page *sp;
> ++ struct folio *sp;
> +
> + trace_kfree(_RET_IP_, block);
> +
> +@@ mm/slob.c: void kfree(const void *block)
> + return;
> + kmemleak_free(block);
> +
> +- sp = virt_to_page(block);
> +- if (PageSlab(sp)) {
> ++ sp = virt_to_folio(block);
> ++ if (folio_test_slab(sp)) {
> + int align = max_t(size_t, ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
> + unsigned int *m = (unsigned int *)(block - align);
> + slob_free(m, *m + align);
> + } else {
> +- unsigned int order = compound_order(sp);
> +- mod_node_page_state(page_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
> ++ unsigned int order = folio_order(sp);
> ++
> ++ mod_node_page_state(folio_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
> + -(PAGE_SIZE << order));
> +- __free_pages(sp, order);
> ++ __free_pages(folio_page(sp, 0), order);
> +
> + }
> + }
> 25: aa4f573a4c96 ! 25: 466b9fb1f6e5 mm/kasan: Convert to struct folio and struct slab
> @@ Commit message
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> + Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
> Cc: Alexander Potapenko <glider@google.com>
> Cc: Andrey Konovalov <andreyknvl@gmail.com>
> 26: 67b7966d2fb6 = 26: b8159ae8e5cd mm/kfence: Convert kfence_guarded_alloc() to struct slab
> 31: d64dfe49c1e7 ! 27: 4525180926f9 mm/sl*b: Differentiate struct slab fields by sl*b implementations
> @@ Commit message
> possible.
>
> This should also prevent accidental use of fields that don't exist in given
> - implementation. Before this patch virt_to_cache() and and cache_from_obj() was
> - visible for SLOB (albeit not used), although it relies on the slab_cache field
> + implementation. Before this patch virt_to_cache() and cache_from_obj() were
> + visible for SLOB (albeit not used), although they rely on the slab_cache field
> that isn't set by SLOB. With this patch it's now a compile error, so these
> functions are now hidden behind #ifndef CONFIG_SLOB.
>
> @@ mm/kfence/core.c: static void *kfence_guarded_alloc(struct kmem_cache *cache, si
> - slab->s_mem = addr;
> +#if defined(CONFIG_SLUB)
> + slab->objects = 1;
> -+#elif defined (CONFIG_SLAB)
> ++#elif defined(CONFIG_SLAB)
> + slab->s_mem = addr;
> +#endif
>
> @@ mm/slab.h
> +
> +#if defined(CONFIG_SLAB)
> +
> -+ union {
> -+ struct list_head slab_list;
> + union {
> + struct list_head slab_list;
> +- struct { /* Partial pages */
> + struct rcu_head rcu_head;
> + };
> + struct kmem_cache *slab_cache;
> + void *freelist; /* array of free object indexes */
> -+ void * s_mem; /* first object */
> ++ void *s_mem; /* first object */
> + unsigned int active;
> +
> +#elif defined(CONFIG_SLUB)
> +
> - union {
> - struct list_head slab_list;
> -- struct { /* Partial pages */
> ++ union {
> ++ struct list_head slab_list;
> + struct rcu_head rcu_head;
> + struct {
> struct slab *next;
> @@ mm/slab.h: struct slab {
> +#elif defined(CONFIG_SLOB)
> +
> + struct list_head slab_list;
> -+ void * __unused_1;
> ++ void *__unused_1;
> + void *freelist; /* first free block */
> -+ void * __unused_2;
> ++ void *__unused_2;
> + int units;
> +
> +#else
> @@ mm/slab.h: struct slab {
> #ifdef CONFIG_MEMCG
> unsigned long memcg_data;
> @@ mm/slab.h: struct slab {
> - static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))
> SLAB_MATCH(flags, __page_flags);
> SLAB_MATCH(compound_head, slab_list); /* Ensure bit 0 is clear */
> + SLAB_MATCH(slab_list, slab_list);
> +#ifndef CONFIG_SLOB
> SLAB_MATCH(rcu_head, rcu_head);
> + SLAB_MATCH(slab_cache, slab_cache);
> ++#endif
> ++#ifdef CONFIG_SLAB
> + SLAB_MATCH(s_mem, s_mem);
> + SLAB_MATCH(active, active);
> +#endif
> SLAB_MATCH(_refcount, __page_refcount);
> #ifdef CONFIG_MEMCG
> 32: 0abf87bae67e = 28: 94b78948d53f mm/slub: Simplify struct slab slabs field definition
> 33: 813c304f18e4 = 29: f5261e6375f0 mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only when enabled
> 27: ebce4b5b5ced ! 30: 1414e8c87de6 zsmalloc: Stop using slab fields in struct page
> @@ Commit message
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> - Cc: Minchan Kim <minchan@kernel.org>
> + Acked-by: Minchan Kim <minchan@kernel.org>
> Cc: Nitin Gupta <ngupta@vflare.org>
> Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
>
> 28: f124425ae7de = 31: 8a3cda6b38eb bootmem: Use page->index instead of page->freelist
> 29: 82da48c73b2e < -: ------------ iommu: Use put_pages_list
> 30: 181e16dfefbb < -: ------------ mm: Remove slab from struct page
> -: ------------ > 32: 91e069ba116b mm/slob: Remove unnecessary page_mapcount_reset() function call