From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 15/62] mm/slub: Convert kmem_cache_cpu to struct slab
Date: Mon, 4 Oct 2021 14:46:03 +0100
Message-Id: <20211004134650.4031813-16-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>

To avoid converting from page to slab, we have to convert all these
functions at once.  Adds a little type-safety.
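
The pattern applied throughout is that the per-cpu cache now holds a
struct slab *, and callees that still take a struct page * are reached
through slab_page().  Below is a minimal, self-contained sketch of that
pattern, not kernel code: the types are simplified stand-ins, slab_page()
here only mimics the real helper introduced elsewhere in this series, and
page_inuse() is a hypothetical not-yet-converted callee.

#include <stdio.h>

/* Simplified stand-in types for this sketch only. */
struct page { int inuse; };
struct slab { struct page page; };	/* stands in for the real struct slab */

/* Stand-in for slab_page(): recover the underlying struct page. */
static struct page *slab_page(struct slab *slab)
{
	return &slab->page;
}

/* Hypothetical callee that has not been converted and still takes a page. */
static int page_inuse(struct page *page)
{
	return page->inuse;
}

struct kmem_cache_cpu {
	struct slab *slab;	/* after this patch: a slab, not a page */
};

int main(void)
{
	struct slab s = { .page = { .inuse = 3 } };
	struct kmem_cache_cpu c = { .slab = &s };

	/* Converted code passes c.slab; unconverted callees get slab_page(c.slab). */
	printf("%d\n", page_inuse(slab_page(c.slab)));
	return 0;
}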

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/slub_def.h |   4 +-
 mm/slub.c                | 208 +++++++++++++++++++--------------------
 2 files changed, 106 insertions(+), 106 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 85499f0586b0..3cc64e9f988c 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -48,9 +48,9 @@ enum stat_item {
 struct kmem_cache_cpu {
 	void **freelist;	/* Pointer to next available object */
 	unsigned long tid;	/* Globally unique transaction id */
-	struct page *page;	/* The slab from which we are allocating */
+	struct slab *slab;	/* The slab from which we are allocating */
 #ifdef CONFIG_SLUB_CPU_PARTIAL
-	struct page *partial;	/* Partially allocated frozen slabs */
+	struct slab *partial;	/* Partially allocated frozen slabs */
 #endif
 	local_lock_t lock;	/* Protects the fields above */
 #ifdef CONFIG_SLUB_STATS
diff --git a/mm/slub.c b/mm/slub.c
index 41c4ccd67d95..d849b644d0ed 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2084,9 +2084,9 @@ static inline void *acquire_slab(struct kmem_cache *s,
 }
 
 #ifdef CONFIG_SLUB_CPU_PARTIAL
-static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain);
+static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain);
 #else
-static inline void put_cpu_partial(struct kmem_cache *s, struct page *page,
+static inline void put_cpu_partial(struct kmem_cache *s, struct slab *slab,
 				   int drain) { }
 #endif
 static inline bool pfmemalloc_match(struct page *page, gfp_t gfpflags);
@@ -2095,9 +2095,9 @@ static inline bool pfmemalloc_match(struct page *page, gfp_t gfpflags);
  * Try to allocate a partial slab from a specific node.
  */
 static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
-			      struct page **ret_page, gfp_t gfpflags)
+			      struct slab **ret_slab, gfp_t gfpflags)
 {
-	struct page *page, *page2;
+	struct slab *slab, *slab2;
 	void *object = NULL;
 	unsigned int available = 0;
 	unsigned long flags;
@@ -2113,23 +2113,23 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 		return NULL;
 
 	spin_lock_irqsave(&n->list_lock, flags);
-	list_for_each_entry_safe(page, page2, &n->partial, slab_list) {
+	list_for_each_entry_safe(slab, slab2, &n->partial, slab_list) {
 		void *t;
 
-		if (!pfmemalloc_match(page, gfpflags))
+		if (!pfmemalloc_match(slab_page(slab), gfpflags))
 			continue;
 
-		t = acquire_slab(s, n, page, object == NULL, &objects);
+		t = acquire_slab(s, n, slab_page(slab), object == NULL, &objects);
 		if (!t)
 			break;
 
 		available += objects;
 		if (!object) {
-			*ret_page = page;
+			*ret_slab = slab;
 			stat(s, ALLOC_FROM_PARTIAL);
 			object = t;
 		} else {
-			put_cpu_partial(s, page, 0);
+			put_cpu_partial(s, slab, 0);
 			stat(s, CPU_PARTIAL_NODE);
 		}
 		if (!kmem_cache_has_cpu_partial(s)
@@ -2142,10 +2142,10 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 }
 
 /*
- * Get a page from somewhere. Search in increasing NUMA distances.
+ * Get a slab from somewhere. Search in increasing NUMA distances.
  */
 static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
-			     struct page **ret_page)
+			     struct slab **ret_slab)
 {
 #ifdef CONFIG_NUMA
 	struct zonelist *zonelist;
@@ -2187,7 +2187,7 @@ static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
 
 			if (n && cpuset_zone_allowed(zone, flags) &&
 					n->nr_partial > s->min_partial) {
-				object = get_partial_node(s, n, ret_page, flags);
+				object = get_partial_node(s, n, ret_slab, flags);
 				if (object) {
 					/*
 					 * Don't check read_mems_allowed_retry()
@@ -2206,10 +2206,10 @@ static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
 }
 
 /*
- * Get a partial page, lock it and return it.
+ * Get a partial slab, lock it and return it.
  */
 static void *get_partial(struct kmem_cache *s, gfp_t flags, int node,
-			 struct page **ret_page)
+			 struct slab **ret_slab)
 {
 	void *object;
 	int searchnode = node;
@@ -2217,11 +2217,11 @@ static void *get_partial(struct kmem_cache *s, gfp_t flags, int node,
 	if (node == NUMA_NO_NODE)
 		searchnode = numa_mem_id();
 
-	object = get_partial_node(s, get_node(s, searchnode), ret_page, flags);
+	object = get_partial_node(s, get_node(s, searchnode), ret_slab, flags);
 	if (object || node != NUMA_NO_NODE)
 		return object;
 
-	return get_any_partial(s, flags, ret_page);
+	return get_any_partial(s, flags, ret_slab);
 }
 
 #ifdef CONFIG_PREEMPTION
@@ -2506,7 +2506,7 @@ static void unfreeze_partials(struct kmem_cache *s)
 	unsigned long flags;
 
 	local_lock_irqsave(&s->cpu_slab->lock, flags);
-	partial_page = this_cpu_read(s->cpu_slab->partial);
+	partial_page = slab_page(this_cpu_read(s->cpu_slab->partial));
 	this_cpu_write(s->cpu_slab->partial, NULL);
 	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 
@@ -2519,7 +2519,7 @@ static void unfreeze_partials_cpu(struct kmem_cache *s,
 {
 	struct page *partial_page;
 
-	partial_page = slub_percpu_partial(c);
+	partial_page = slab_page(slub_percpu_partial(c));
 	c->partial = NULL;
 
 	if (partial_page)
@@ -2527,52 +2527,52 @@ static void unfreeze_partials_cpu(struct kmem_cache *s,
 }
 
 /*
- * Put a page that was just frozen (in __slab_free|get_partial_node) into a
- * partial page slot if available.
+ * Put a slab that was just frozen (in __slab_free|get_partial_node) into a
+ * partial slab slot if available.
  *
  * If we did not find a slot then simply move all the partials to the
  * per node partial list.
  */
-static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
+static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain)
 {
-	struct page *oldpage;
-	struct page *page_to_unfreeze = NULL;
+	struct slab *oldslab;
+	struct slab *slab_to_unfreeze = NULL;
 	unsigned long flags;
-	int pages = 0;
+	int slabs = 0;
 	int pobjects = 0;
 
 	local_lock_irqsave(&s->cpu_slab->lock, flags);
 
-	oldpage = this_cpu_read(s->cpu_slab->partial);
+	oldslab = this_cpu_read(s->cpu_slab->partial);
 
-	if (oldpage) {
-		if (drain && oldpage->pobjects > slub_cpu_partial(s)) {
+	if (oldslab) {
+		if (drain && oldslab->pobjects > slub_cpu_partial(s)) {
 			/*
 			 * Partial array is full. Move the existing set to the
 			 * per node partial list. Postpone the actual unfreezing
 			 * outside of the critical section.
 			 */
-			page_to_unfreeze = oldpage;
-			oldpage = NULL;
+			slab_to_unfreeze = oldslab;
+			oldslab = NULL;
 		} else {
-			pobjects = oldpage->pobjects;
-			pages = oldpage->pages;
+			pobjects = oldslab->pobjects;
+			slabs = oldslab->slabs;
 		}
 	}
 
-	pages++;
-	pobjects += page->objects - page->inuse;
+	slabs++;
+	pobjects += slab->objects - slab->inuse;
 
-	page->pages = pages;
-	page->pobjects = pobjects;
-	page->next = oldpage;
+	slab->slabs = slabs;
+	slab->pobjects = pobjects;
+	slab->next = oldslab;
 
-	this_cpu_write(s->cpu_slab->partial, page);
+	this_cpu_write(s->cpu_slab->partial, slab);
 
 	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 
-	if (page_to_unfreeze) {
-		__unfreeze_partials(s, page_to_unfreeze);
+	if (slab_to_unfreeze) {
+		__unfreeze_partials(s, slab_page(slab_to_unfreeze));
 		stat(s, CPU_PARTIAL_DRAIN);
 	}
 }
@@ -2593,10 +2593,10 @@ static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c)
 
 	local_lock_irqsave(&s->cpu_slab->lock, flags);
 
-	page = c->page;
+	page = slab_page(c->slab);
 	freelist = c->freelist;
 
-	c->page = NULL;
+	c->slab = NULL;
 	c->freelist = NULL;
 	c->tid = next_tid(c->tid);
 
@@ -2612,9 +2612,9 @@ static inline void __flush_cpu_slab(struct kmem_cache *s, int cpu)
 {
 	struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);
 	void *freelist = c->freelist;
-	struct page *page = c->page;
+	struct page *page = slab_page(c->slab);
 
-	c->page = NULL;
+	c->slab = NULL;
 	c->freelist = NULL;
 	c->tid = next_tid(c->tid);
 
@@ -2648,7 +2648,7 @@ static void flush_cpu_slab(struct work_struct *w)
 	s = sfw->s;
 	c = this_cpu_ptr(s->cpu_slab);
 
-	if (c->page)
+	if (c->slab)
 		flush_slab(s, c);
 
 	unfreeze_partials(s);
@@ -2658,7 +2658,7 @@ static bool has_cpu_slab(int cpu, struct kmem_cache *s)
 {
 	struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);
 
-	return c->page || slub_percpu_partial(c);
+	return c->slab || slub_percpu_partial(c);
 }
 
 static DEFINE_MUTEX(flush_lock);
@@ -2872,15 +2872,15 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 			  unsigned long addr, struct kmem_cache_cpu *c)
 {
 	void *freelist;
-	struct page *page;
+	struct slab *slab;
 	unsigned long flags;
 
 	stat(s, ALLOC_SLOWPATH);
 
-reread_page:
+reread_slab:
 
-	page = READ_ONCE(c->page);
-	if (!page) {
+	slab = READ_ONCE(c->slab);
+	if (!slab) {
 		/*
 		 * if the node is not online or has no normal memory, just
 		 * ignore the node constraint
@@ -2892,7 +2892,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	}
 redo:
 
-	if (unlikely(!node_match(page, node))) {
+	if (unlikely(!node_match(slab_page(slab), node))) {
 		/*
 		 * same as above but node_match() being false already
 		 * implies node != NUMA_NO_NODE
@@ -2907,27 +2907,27 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	}
 
 	/*
-	 * By rights, we should be searching for a slab page that was
-	 * PFMEMALLOC but right now, we are losing the pfmemalloc
+	 * By rights, we should be searching for a slab that was
+	 * PFMEMALLOC but right now, we lose the pfmemalloc
 	 * information when the page leaves the per-cpu allocator
 	 */
-	if (unlikely(!pfmemalloc_match_unsafe(page, gfpflags)))
+	if (unlikely(!pfmemalloc_match_unsafe(slab_page(slab), gfpflags)))
 		goto deactivate_slab;
 
-	/* must check again c->page in case we got preempted and it changed */
+	/* must check again c->slab in case we got preempted and it changed */
 	local_lock_irqsave(&s->cpu_slab->lock, flags);
-	if (unlikely(page != c->page)) {
+	if (unlikely(slab != c->slab)) {
 		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
-		goto reread_page;
+		goto reread_slab;
 	}
 	freelist = c->freelist;
 	if (freelist)
 		goto load_freelist;
 
-	freelist = get_freelist(s, page);
+	freelist = get_freelist(s, slab_page(slab));
 
 	if (!freelist) {
-		c->page = NULL;
+		c->slab = NULL;
 		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 		stat(s, DEACTIVATE_BYPASS);
 		goto new_slab;
@@ -2941,10 +2941,10 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 
 	/*
 	 * freelist is pointing to the list of objects to be used.
-	 * page is pointing to the page from which the objects are obtained.
-	 * That page must be frozen for per cpu allocations to work.
+	 * slab is pointing to the slab from which the objects are obtained.
+	 * That slab must be frozen for per cpu allocations to work.
 	 */
-	VM_BUG_ON(!c->page->frozen);
+	VM_BUG_ON(!c->slab->frozen);
 	c->freelist = get_freepointer(s, freelist);
 	c->tid = next_tid(c->tid);
 	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
@@ -2953,23 +2953,23 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 deactivate_slab:
 
 	local_lock_irqsave(&s->cpu_slab->lock, flags);
-	if (page != c->page) {
+	if (slab != c->slab) {
 		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
-		goto reread_page;
+		goto reread_slab;
 	}
 	freelist = c->freelist;
-	c->page = NULL;
+	c->slab = NULL;
 	c->freelist = NULL;
 	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
-	deactivate_slab(s, page, freelist);
+	deactivate_slab(s, slab_page(slab), freelist);
 
 new_slab:
 
 	if (slub_percpu_partial(c)) {
 		local_lock_irqsave(&s->cpu_slab->lock, flags);
-		if (unlikely(c->page)) {
+		if (unlikely(c->slab)) {
 			local_unlock_irqrestore(&s->cpu_slab->lock, flags);
-			goto reread_page;
+			goto reread_slab;
 		}
 		if (unlikely(!slub_percpu_partial(c))) {
 			local_unlock_irqrestore(&s->cpu_slab->lock, flags);
@@ -2977,8 +2977,8 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 			goto new_objects;
 		}
 
-		page = c->page = slub_percpu_partial(c);
-		slub_set_percpu_partial(c, page);
+		slab = c->slab = slub_percpu_partial(c);
+		slub_set_percpu_partial(c, slab);
 		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 		stat(s, CPU_PARTIAL_ALLOC);
 		goto redo;
@@ -2986,32 +2986,32 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 
 new_objects:
 
-	freelist = get_partial(s, gfpflags, node, &page);
+	freelist = get_partial(s, gfpflags, node, &slab);
 	if (freelist)
-		goto check_new_page;
+		goto check_new_slab;
 
 	slub_put_cpu_ptr(s->cpu_slab);
-	page = slab_page(new_slab(s, gfpflags, node));
+	slab = new_slab(s, gfpflags, node);
 	c = slub_get_cpu_ptr(s->cpu_slab);
 
-	if (unlikely(!page)) {
+	if (unlikely(!slab)) {
 		slab_out_of_memory(s, gfpflags, node);
 		return NULL;
 	}
 
 	/*
-	 * No other reference to the page yet so we can
+	 * No other reference to the slab yet so we can
 	 * muck around with it freely without cmpxchg
 	 */
-	freelist = page->freelist;
-	page->freelist = NULL;
+	freelist = slab->freelist;
+	slab->freelist = NULL;
 
 	stat(s, ALLOC_SLAB);
 
-check_new_page:
+check_new_slab:
 
 	if (kmem_cache_debug(s)) {
-		if (!alloc_debug_processing(s, page, freelist, addr)) {
+		if (!alloc_debug_processing(s, slab_page(slab), freelist, addr)) {
 			/* Slab failed checks. Next slab needed */
 			goto new_slab;
 		} else {
@@ -3023,39 +3023,39 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 		}
 	}
 
-	if (unlikely(!pfmemalloc_match(page, gfpflags)))
+	if (unlikely(!pfmemalloc_match(slab_page(slab), gfpflags)))
 		/*
 		 * For !pfmemalloc_match() case we don't load freelist so that
 		 * we don't make further mismatched allocations easier.
 		 */
 		goto return_single;
 
-retry_load_page:
+retry_load_slab:
 
 	local_lock_irqsave(&s->cpu_slab->lock, flags);
-	if (unlikely(c->page)) {
+	if (unlikely(c->slab)) {
 		void *flush_freelist = c->freelist;
-		struct page *flush_page = c->page;
+		struct slab *flush_slab = c->slab;
 
-		c->page = NULL;
+		c->slab = NULL;
 		c->freelist = NULL;
 		c->tid = next_tid(c->tid);
 
 		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 
-		deactivate_slab(s, flush_page, flush_freelist);
+		deactivate_slab(s, slab_page(flush_slab), flush_freelist);
 
 		stat(s, CPUSLAB_FLUSH);
 
-		goto retry_load_page;
+		goto retry_load_slab;
 	}
-	c->page = page;
+	c->slab = slab;
 
 	goto load_freelist;
 
return_single:
 
-	deactivate_slab(s, page, get_freepointer(s, freelist));
+	deactivate_slab(s, slab_page(slab), get_freepointer(s, freelist));
 	return freelist;
 }
 
@@ -3159,7 +3159,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 	 */
 
 	object = c->freelist;
-	page = c->page;
+	page = slab_page(c->slab);
 	/*
 	 * We cannot use the lockless fastpath on PREEMPT_RT because if a
 	 * slowpath has taken the local_lock_irqsave(), it is not protected
@@ -3351,7 +3351,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 			 * If we just froze the slab then put it onto the
 			 * per cpu partial list.
 			 */
-			put_cpu_partial(s, slab_page(slab), 1);
+			put_cpu_partial(s, slab, 1);
 			stat(s, CPU_PARTIAL_FREE);
 		}
 
@@ -3427,7 +3427,7 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
 	/* Same with comment on barrier() in slab_alloc_node() */
 	barrier();
 
-	if (likely(slab_page(slab) == c->page)) {
+	if (likely(slab == c->slab)) {
 #ifndef CONFIG_PREEMPT_RT
 		void **freelist = READ_ONCE(c->freelist);
 
@@ -3453,7 +3453,7 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
 
 		local_lock(&s->cpu_slab->lock);
 		c = this_cpu_ptr(s->cpu_slab);
-		if (unlikely(slab_page(slab) != c->page)) {
+		if (unlikely(slab != c->slab)) {
 			local_unlock(&s->cpu_slab->lock);
 			goto redo;
 		}
@@ -5221,7 +5221,7 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
 			int node;
 			struct page *page;
 
-			page = READ_ONCE(c->page);
+			page = slab_page(READ_ONCE(c->slab));
 			if (!page)
 				continue;
 
@@ -5236,7 +5236,7 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
 			total += x;
 			nodes[node] += x;
 
-			page = slub_percpu_partial_read_once(c);
+			page = slab_page(slub_percpu_partial_read_once(c));
 			if (page) {
 				node = page_to_nid(page);
 				if (flags & SO_TOTAL)
@@ -5441,31 +5441,31 @@ SLAB_ATTR_RO(objects_partial);
 static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf)
 {
 	int objects = 0;
-	int pages = 0;
+	int slabs = 0;
 	int cpu;
 	int len = 0;
 
 	for_each_online_cpu(cpu) {
-		struct page *page;
+		struct slab *slab;
 
-		page = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu));
+		slab = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu));
 
-		if (page) {
-			pages += page->pages;
-			objects += page->pobjects;
+		if (slab) {
+			slabs += slab->slabs;
+			objects += slab->pobjects;
 		}
 	}
 
-	len += sysfs_emit_at(buf, len, "%d(%d)", objects, pages);
+	len += sysfs_emit_at(buf, len, "%d(%d)", objects, slabs);
 
 #ifdef CONFIG_SMP
 	for_each_online_cpu(cpu) {
-		struct page *page;
+		struct slab *slab;
 
-		page = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu));
-		if (page)
+		slab = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu));
+		if (slab)
 			len += sysfs_emit_at(buf, len, " C%d=%d(%d)",
-					     cpu, page->pobjects, page->pages);
+					     cpu, slab->pobjects, slab->slabs);
 	}
 #endif
 	len += sysfs_emit_at(buf, len, "\n");
-- 
2.32.0