From: Vlastimil Babka <vbabka@suse.cz>
To: Matthew Wilcox, linux-mm@kvack.org, Christoph Lameter, David Rientjes,
 Joonsoo Kim, Pekka Enberg
Cc: Vlastimil Babka, Julia Lawall, Luis Chamberlain
Subject: [RFC PATCH 16/32] mm/slub: Convert most struct page to struct slab by spatch
Date: Tue, 16 Nov 2021 01:16:12 +0100
Message-Id: <20211116001628.24216-17-vbabka@suse.cz>
X-Mailer: git-send-email 2.33.1
In-Reply-To: <20211116001628.24216-1-vbabka@suse.cz>
References: <20211116001628.24216-1-vbabka@suse.cz>

The majority of the conversion from struct page to struct slab in SLUB
internals can be delegated to a coccinelle semantic patch. This includes
renaming of variables with 'page' in the name to 'slab', and similar.

Big thanks to Julia Lawall and Luis Chamberlain for help with coccinelle.

// Options: --include-headers --no-includes --smpl-spacing include/linux/slub_def.h mm/slub.c
// Note: needs coccinelle 1.1.1 to avoid breaking whitespace, and ocaml for the
// embedded script
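// A possible invocation of the script (sketch only; the .cocci file name is
// hypothetical, the options are the ones recorded above, and --in-place may be
// dropped to get a diff on stdout instead of modifying the files in the tree):
//
//   spatch --sp-file slub-page-to-slab.cocci --include-headers --no-includes \
//          --smpl-spacing --in-place include/linux/slub_def.h mm/slub.c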
// build list of functions to exclude from applying the next rule
@initialize:ocaml@
@@

let ok_function p =
  not (List.mem (List.hd p).current_element ["nearest_obj";"obj_to_index";"objs_per_slab_page";"__slab_lock";"__slab_unlock";"free_nonslab_page";"kmalloc_large_node"])

// convert the type from struct page to struct slab in all functions except the
// list from previous rule
// this also affects struct kmem_cache_cpu, but that's ok
@@
position p : script:ocaml() { ok_function p };
@@

- struct page@p
+ struct slab

// in struct kmem_cache_cpu, change the name from page to slab
// the type was already converted by the previous rule
@@
@@

struct kmem_cache_cpu {
...
-struct slab *page;
+struct slab *slab;
...
}

// there are many places that use c->page which is now c->slab after the
// previous rule
@@
struct kmem_cache_cpu *c;
@@

-c->page
+c->slab

@@
@@

struct kmem_cache {
...
- unsigned int cpu_partial_pages;
+ unsigned int cpu_partial_slabs;
...
}

@@
struct kmem_cache *s;
@@

- s->cpu_partial_pages
+ s->cpu_partial_slabs

@@
@@

static void
- setup_page_debug(
+ setup_slab_debug(
 ...)
 {...}

@@
@@

- setup_page_debug(
+ setup_slab_debug(
 ...);

// for all functions (with exceptions), change any "struct slab *page"
// parameter to "struct slab *slab" in the signature, and generally all
// occurrences of "page" to "slab" in the body - with some special cases.
@@
identifier fn !~ "free_nonslab_page|obj_to_index|objs_per_slab_page|nearest_obj";
@@

 fn(...,
-   struct slab *page
+   struct slab *slab
    ,...)
 {
<...
- page
+ slab
...>
 }

// similar to previous but the param is called partial_page
@@
identifier fn;
@@

 fn(...,
-   struct slab *partial_page
+   struct slab *partial_slab
    ,...)
 {
<...
- partial_page
+ partial_slab
...>
 }

// similar to previous but for functions that take a pointer to struct page ptr
@@
identifier fn;
@@

 fn(...,
-   struct slab **ret_page
+   struct slab **ret_slab
    ,...)
 {
<...
- ret_page
+ ret_slab
...>
 }

// functions converted by previous rules that were temporarily called using
// slab_page(E) so we want to remove the wrapper now that they accept struct
// slab ptr directly
@@
identifier fn =~ "slab_free|do_slab_free";
expression E;
@@

 fn(...,
- slab_page(E)
+ E
  ,...)

// similar to previous but for another pattern
@@
identifier fn =~ "slab_pad_check|check_object";
@@

 fn(...,
- folio_page(folio, 0)
+ slab
  ,...)

// functions that were returning struct page ptr and now will return struct
// slab ptr, including slab_page() wrapper removal
@@
identifier fn =~ "allocate_slab|new_slab";
expression E;
@@

 static
-struct slab *
+struct slab *
 fn(...)
 {
<...
- slab_page(E)
+ E
...>
 }

// rename any former struct page * declarations
@@
@@

struct slab *
(
- page
+ slab
|
- partial_page
+ partial_slab
|
- oldpage
+ oldslab
)
;

// this has to be separate from previous rule as page and page2 appear at the
// same line
@@
@@

struct slab *
-page2
+slab2
;

// similar but with initial assignment
@@
expression E;
@@

struct slab *
(
- page
+ slab
|
- flush_page
+ flush_slab
|
- discard_page
+ slab_to_discard
|
- page_to_unfreeze
+ slab_to_unfreeze
)
 = E;

// convert most of struct page to struct slab usage inside functions (with
// exceptions), including specific variable renames
@@
identifier fn !~ "nearest_obj|obj_to_index|objs_per_slab_page|__slab_(un)*lock|__free_slab|free_nonslab_page|kmalloc_large_node";
expression E;
@@

 fn(...)
 {
<...
(
- int pages;
+ int slabs;
|
- int pages = E;
+ int slabs = E;
|
- page
+ slab
|
- flush_page
+ flush_slab
|
- partial_page
+ partial_slab
|
- oldpage->pages
+ oldslab->slabs
|
- oldpage
+ oldslab
|
- unsigned int nr_pages;
+ unsigned int nr_slabs;
|
- nr_pages
+ nr_slabs
|
- unsigned int partial_pages = E;
+ unsigned int partial_slabs = E;
|
- partial_pages
+ partial_slabs
)
...>
 }

// this has to be split out from the previous rule so that lines containing
// multiple matching changes will be fully converted
@@
identifier fn !~ "nearest_obj|obj_to_index|objs_per_slab_page|__slab_(un)*lock|__free_slab|free_nonslab_page|kmalloc_large_node";
@@

 fn(...)
 {
<...
(
- slab->pages
+ slab->slabs
|
- pages
+ slabs
|
- page2
+ slab2
|
- discard_page
+ slab_to_discard
|
- page_to_unfreeze
+ slab_to_unfreeze
)
...>
 }

// after we simply changed all occurrences of page to slab, some usages need
// adjustment for slab-specific functions, or use slab_page() wrapper
@@
identifier fn !~ "nearest_obj|obj_to_index|objs_per_slab_page|__slab_(un)*lock|__free_slab|free_nonslab_page|kmalloc_large_node";
@@

 fn(...)
 {
<...
(
- page_slab(slab)
+ slab
|
- kasan_poison_slab(slab)
+ kasan_poison_slab(slab_page(slab))
|
- page_address(slab)
+ slab_address(slab)
|
- page_size(slab)
+ slab_size(slab)
|
- PageSlab(slab)
+ folio_test_slab(slab_folio(slab))
|
- page_to_nid(slab)
+ slab_nid(slab)
|
- compound_order(slab)
+ slab_order(slab)
)
...>
 }

Signed-off-by: Vlastimil Babka
Cc: Julia Lawall
Cc: Luis Chamberlain
---
 include/linux/slub_def.h |   6 +-
 mm/slub.c                | 872 +++++++++++++++++++--------------------
 2 files changed, 439 insertions(+), 439 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 1ef68d4de9c0..00d99afe1c0e 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -48,9 +48,9 @@ enum stat_item {
 struct kmem_cache_cpu {
 	void **freelist;	/* Pointer to next available object */
 	unsigned long tid;	/* Globally unique transaction id */
-	struct page *page;	/* The slab from which we are allocating */
+	struct slab *slab;	/* The slab from which we are allocating */
 #ifdef CONFIG_SLUB_CPU_PARTIAL
-	struct page *partial;	/* Partially allocated frozen slabs */
+	struct slab *partial;	/* Partially allocated frozen slabs */
 #endif
 	local_lock_t lock;	/* Protects the fields above */
 #ifdef CONFIG_SLUB_STATS
@@ -100,7 +100,7 @@ struct kmem_cache {
 	/* Number of per cpu partial objects to keep around */
 	unsigned int cpu_partial;
 	/* Number of per cpu partial pages to keep around */
-	unsigned int cpu_partial_pages;
+	unsigned int cpu_partial_slabs;
 #endif
 	struct kmem_cache_order_objects oo;
 
diff --git a/mm/slub.c b/mm/slub.c
index fd76b736021b..cc5ce18fe679 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -417,7 +417,7 @@ static inline unsigned int oo_objects(struct kmem_cache_order_objects x)
 #ifdef CONFIG_SLUB_CPU_PARTIAL
 static void slub_set_cpu_partial(struct kmem_cache *s, unsigned int nr_objects)
 {
-	unsigned int nr_pages;
+	unsigned int nr_slabs;
 
 	s->cpu_partial = nr_objects;
 
@@ -427,8 +427,8 @@ static void slub_set_cpu_partial(struct kmem_cache *s, unsigned int nr_objects)
 	 * growth of the list. For simplicity we assume that the pages will
 	 * be half-full.
 	 */
-	nr_pages = DIV_ROUND_UP(nr_objects * 2, oo_objects(s->oo));
-	s->cpu_partial_pages = nr_pages;
+	nr_slabs = DIV_ROUND_UP(nr_objects * 2, oo_objects(s->oo));
+	s->cpu_partial_slabs = nr_slabs;
 }
 #else
 static inline void
@@ -456,16 +456,16 @@ static __always_inline void __slab_unlock(struct slab *slab)
 	__bit_spin_unlock(PG_locked, &page->flags);
 }
 
-static __always_inline void slab_lock(struct page *page, unsigned long *flags)
+static __always_inline void slab_lock(struct slab *slab, unsigned long *flags)
 {
 	if (IS_ENABLED(CONFIG_PREEMPT_RT))
 		local_irq_save(*flags);
-	__slab_lock(page_slab(page));
+	__slab_lock(slab);
 }
 
-static __always_inline void slab_unlock(struct page *page, unsigned long *flags)
+static __always_inline void slab_unlock(struct slab *slab, unsigned long *flags)
 {
-	__slab_unlock(page_slab(page));
+	__slab_unlock(slab);
 	if (IS_ENABLED(CONFIG_PREEMPT_RT))
 		local_irq_restore(*flags);
 }
@@ -475,7 +475,7 @@ static __always_inline void slab_unlock(struct page *page, unsigned long *flags)
  * by an _irqsave() lock variant. Except on PREEMPT_RT where locks are different
  * so we disable interrupts as part of slab_[un]lock().
*/ -static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct pa= ge *page, +static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct sl= ab *slab, void *freelist_old, unsigned long counters_old, void *freelist_new, unsigned long counters_new, const char *n) @@ -485,7 +485,7 @@ static inline bool __cmpxchg_double_slab(struct kmem_= cache *s, struct page *page #if defined(CONFIG_HAVE_CMPXCHG_DOUBLE) && \ defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE) if (s->flags & __CMPXCHG_DOUBLE) { - if (cmpxchg_double(&page->freelist, &page->counters, + if (cmpxchg_double(&slab->freelist, &slab->counters, freelist_old, counters_old, freelist_new, counters_new)) return true; @@ -495,15 +495,15 @@ static inline bool __cmpxchg_double_slab(struct kme= m_cache *s, struct page *page /* init to 0 to prevent spurious warnings */ unsigned long flags =3D 0; =20 - slab_lock(page, &flags); - if (page->freelist =3D=3D freelist_old && - page->counters =3D=3D counters_old) { - page->freelist =3D freelist_new; - page->counters =3D counters_new; - slab_unlock(page, &flags); + slab_lock(slab, &flags); + if (slab->freelist =3D=3D freelist_old && + slab->counters =3D=3D counters_old) { + slab->freelist =3D freelist_new; + slab->counters =3D counters_new; + slab_unlock(slab, &flags); return true; } - slab_unlock(page, &flags); + slab_unlock(slab, &flags); } =20 cpu_relax(); @@ -516,7 +516,7 @@ static inline bool __cmpxchg_double_slab(struct kmem_= cache *s, struct page *page return false; } =20 -static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page= *page, +static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct slab= *slab, void *freelist_old, unsigned long counters_old, void *freelist_new, unsigned long counters_new, const char *n) @@ -524,7 +524,7 @@ static inline bool cmpxchg_double_slab(struct kmem_ca= che *s, struct page *page, #if defined(CONFIG_HAVE_CMPXCHG_DOUBLE) && \ defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE) if (s->flags & __CMPXCHG_DOUBLE) { - if (cmpxchg_double(&page->freelist, &page->counters, + if (cmpxchg_double(&slab->freelist, &slab->counters, freelist_old, counters_old, freelist_new, counters_new)) return true; @@ -534,16 +534,16 @@ static inline bool cmpxchg_double_slab(struct kmem_= cache *s, struct page *page, unsigned long flags; =20 local_irq_save(flags); - __slab_lock(page_slab(page)); - if (page->freelist =3D=3D freelist_old && - page->counters =3D=3D counters_old) { - page->freelist =3D freelist_new; - page->counters =3D counters_new; - __slab_unlock(page_slab(page)); + __slab_lock(slab); + if (slab->freelist =3D=3D freelist_old && + slab->counters =3D=3D counters_old) { + slab->freelist =3D freelist_new; + slab->counters =3D counters_new; + __slab_unlock(slab); local_irq_restore(flags); return true; } - __slab_unlock(page_slab(page)); + __slab_unlock(slab); local_irq_restore(flags); } =20 @@ -562,14 +562,14 @@ static unsigned long object_map[BITS_TO_LONGS(MAX_O= BJS_PER_PAGE)]; static DEFINE_RAW_SPINLOCK(object_map_lock); =20 static void __fill_map(unsigned long *obj_map, struct kmem_cache *s, - struct page *page) + struct slab *slab) { - void *addr =3D page_address(page); + void *addr =3D slab_address(slab); void *p; =20 - bitmap_zero(obj_map, page->objects); + bitmap_zero(obj_map, slab->objects); =20 - for (p =3D page->freelist; p; p =3D get_freepointer(s, p)) + for (p =3D slab->freelist; p; p =3D get_freepointer(s, p)) set_bit(__obj_to_index(s, addr, p), obj_map); } =20 @@ -599,14 +599,14 @@ static inline bool slab_add_kunit_errors(void) { re= 
turn false; } * Node listlock must be held to guarantee that the page does * not vanish from under us. */ -static unsigned long *get_map(struct kmem_cache *s, struct page *page) +static unsigned long *get_map(struct kmem_cache *s, struct slab *slab) __acquires(&object_map_lock) { VM_BUG_ON(!irqs_disabled()); =20 raw_spin_lock(&object_map_lock); =20 - __fill_map(object_map, s, page); + __fill_map(object_map, s, slab); =20 return object_map; } @@ -667,17 +667,17 @@ static inline void metadata_access_disable(void) =20 /* Verify that a pointer has an address that is valid within a slab page= */ static inline int check_valid_pointer(struct kmem_cache *s, - struct page *page, void *object) + struct slab *slab, void *object) { void *base; =20 if (!object) return 1; =20 - base =3D page_address(page); + base =3D slab_address(slab); object =3D kasan_reset_tag(object); object =3D restore_red_left(s, object); - if (object < base || object >=3D base + page->objects * s->size || + if (object < base || object >=3D base + slab->objects * s->size || (object - base) % s->size) { return 0; } @@ -827,14 +827,14 @@ static void slab_fix(struct kmem_cache *s, char *fm= t, ...) va_end(args); } =20 -static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p= ) +static void print_trailer(struct kmem_cache *s, struct slab *slab, u8 *p= ) { unsigned int off; /* Offset of last byte */ - u8 *addr =3D page_address(page); + u8 *addr =3D slab_address(slab); =20 print_tracking(s, p); =20 - print_slab_info(page_slab(page)); + print_slab_info(slab); =20 pr_err("Object 0x%p @offset=3D%tu fp=3D0x%p\n\n", p, p - addr, get_freepointer(s, p)); @@ -866,23 +866,23 @@ static void print_trailer(struct kmem_cache *s, str= uct page *page, u8 *p) dump_stack(); } =20 -static void object_err(struct kmem_cache *s, struct page *page, +static void object_err(struct kmem_cache *s, struct slab *slab, u8 *object, char *reason) { if (slab_add_kunit_errors()) return; =20 slab_bug(s, "%s", reason); - print_trailer(s, page, object); + print_trailer(s, slab, object); add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE); } =20 -static bool freelist_corrupted(struct kmem_cache *s, struct page *page, +static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab, void **freelist, void *nextfree) { if ((s->flags & SLAB_CONSISTENCY_CHECKS) && - !check_valid_pointer(s, page, nextfree) && freelist) { - object_err(s, page, *freelist, "Freechain corrupt"); + !check_valid_pointer(s, slab, nextfree) && freelist) { + object_err(s, slab, *freelist, "Freechain corrupt"); *freelist =3D NULL; slab_fix(s, "Isolate corrupted freechain"); return true; @@ -891,7 +891,7 @@ static bool freelist_corrupted(struct kmem_cache *s, = struct page *page, return false; } =20 -static __printf(3, 4) void slab_err(struct kmem_cache *s, struct page *p= age, +static __printf(3, 4) void slab_err(struct kmem_cache *s, struct slab *s= lab, const char *fmt, ...) 
{ va_list args; @@ -904,7 +904,7 @@ static __printf(3, 4) void slab_err(struct kmem_cache= *s, struct page *page, vsnprintf(buf, sizeof(buf), fmt, args); va_end(args); slab_bug(s, "%s", buf); - print_slab_info(page_slab(page)); + print_slab_info(slab); dump_stack(); add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE); } @@ -932,13 +932,13 @@ static void restore_bytes(struct kmem_cache *s, cha= r *message, u8 data, memset(from, data, to - from); } =20 -static int check_bytes_and_report(struct kmem_cache *s, struct page *pag= e, +static int check_bytes_and_report(struct kmem_cache *s, struct slab *sla= b, u8 *object, char *what, u8 *start, unsigned int value, unsigned int bytes) { u8 *fault; u8 *end; - u8 *addr =3D page_address(page); + u8 *addr =3D slab_address(slab); =20 metadata_access_enable(); fault =3D memchr_inv(kasan_reset_tag(start), value, bytes); @@ -957,7 +957,7 @@ static int check_bytes_and_report(struct kmem_cache *= s, struct page *page, pr_err("0x%p-0x%p @offset=3D%tu. First byte 0x%x instead of 0x%x\n", fault, end - 1, fault - addr, fault[0], value); - print_trailer(s, page, object); + print_trailer(s, slab, object); add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE); =20 skip_bug_print: @@ -1003,7 +1003,7 @@ static int check_bytes_and_report(struct kmem_cache= *s, struct page *page, * may be used with merged slabcaches. */ =20 -static int check_pad_bytes(struct kmem_cache *s, struct page *page, u8 *= p) +static int check_pad_bytes(struct kmem_cache *s, struct slab *slab, u8 *= p) { unsigned long off =3D get_info_end(s); /* The end of info */ =20 @@ -1016,12 +1016,12 @@ static int check_pad_bytes(struct kmem_cache *s, = struct page *page, u8 *p) if (size_from_object(s) =3D=3D off) return 1; =20 - return check_bytes_and_report(s, page, p, "Object padding", + return check_bytes_and_report(s, slab, p, "Object padding", p + off, POISON_INUSE, size_from_object(s) - off); } =20 /* Check the pad bytes at the end of a slab page */ -static int slab_pad_check(struct kmem_cache *s, struct page *page) +static int slab_pad_check(struct kmem_cache *s, struct slab *slab) { u8 *start; u8 *fault; @@ -1033,8 +1033,8 @@ static int slab_pad_check(struct kmem_cache *s, str= uct page *page) if (!(s->flags & SLAB_POISON)) return 1; =20 - start =3D page_address(page); - length =3D page_size(page); + start =3D slab_address(slab); + length =3D slab_size(slab); end =3D start + length; remainder =3D length % s->size; if (!remainder) @@ -1049,7 +1049,7 @@ static int slab_pad_check(struct kmem_cache *s, str= uct page *page) while (end > fault && end[-1] =3D=3D POISON_INUSE) end--; =20 - slab_err(s, page, "Padding overwritten. 0x%p-0x%p @offset=3D%tu", + slab_err(s, slab, "Padding overwritten. 
0x%p-0x%p @offset=3D%tu", fault, end - 1, fault - start); print_section(KERN_ERR, "Padding ", pad, remainder); =20 @@ -1057,23 +1057,23 @@ static int slab_pad_check(struct kmem_cache *s, s= truct page *page) return 0; } =20 -static int check_object(struct kmem_cache *s, struct page *page, +static int check_object(struct kmem_cache *s, struct slab *slab, void *object, u8 val) { u8 *p =3D object; u8 *endobject =3D object + s->object_size; =20 if (s->flags & SLAB_RED_ZONE) { - if (!check_bytes_and_report(s, page, object, "Left Redzone", + if (!check_bytes_and_report(s, slab, object, "Left Redzone", object - s->red_left_pad, val, s->red_left_pad)) return 0; =20 - if (!check_bytes_and_report(s, page, object, "Right Redzone", + if (!check_bytes_and_report(s, slab, object, "Right Redzone", endobject, val, s->inuse - s->object_size)) return 0; } else { if ((s->flags & SLAB_POISON) && s->object_size < s->inuse) { - check_bytes_and_report(s, page, p, "Alignment padding", + check_bytes_and_report(s, slab, p, "Alignment padding", endobject, POISON_INUSE, s->inuse - s->object_size); } @@ -1081,15 +1081,15 @@ static int check_object(struct kmem_cache *s, str= uct page *page, =20 if (s->flags & SLAB_POISON) { if (val !=3D SLUB_RED_ACTIVE && (s->flags & __OBJECT_POISON) && - (!check_bytes_and_report(s, page, p, "Poison", p, + (!check_bytes_and_report(s, slab, p, "Poison", p, POISON_FREE, s->object_size - 1) || - !check_bytes_and_report(s, page, p, "End Poison", + !check_bytes_and_report(s, slab, p, "End Poison", p + s->object_size - 1, POISON_END, 1))) return 0; /* * check_pad_bytes cleans up on its own. */ - check_pad_bytes(s, page, p); + check_pad_bytes(s, slab, p); } =20 if (!freeptr_outside_object(s) && val =3D=3D SLUB_RED_ACTIVE) @@ -1100,8 +1100,8 @@ static int check_object(struct kmem_cache *s, struc= t page *page, return 1; =20 /* Check free pointer validity */ - if (!check_valid_pointer(s, page, get_freepointer(s, p))) { - object_err(s, page, p, "Freepointer corrupt"); + if (!check_valid_pointer(s, slab, get_freepointer(s, p))) { + object_err(s, slab, p, "Freepointer corrupt"); /* * No choice but to zap it and thus lose the remainder * of the free objects in this slab. May cause @@ -1113,28 +1113,28 @@ static int check_object(struct kmem_cache *s, str= uct page *page, return 1; } =20 -static int check_slab(struct kmem_cache *s, struct page *page) +static int check_slab(struct kmem_cache *s, struct slab *slab) { int maxobj; =20 - if (!PageSlab(page)) { - slab_err(s, page, "Not a valid slab page"); + if (!folio_test_slab(slab_folio(slab))) { + slab_err(s, slab, "Not a valid slab page"); return 0; } =20 - maxobj =3D order_objects(compound_order(page), s->size); - if (page->objects > maxobj) { - slab_err(s, page, "objects %u > max %u", - page->objects, maxobj); + maxobj =3D order_objects(slab_order(slab), s->size); + if (slab->objects > maxobj) { + slab_err(s, slab, "objects %u > max %u", + slab->objects, maxobj); return 0; } - if (page->inuse > page->objects) { - slab_err(s, page, "inuse %u > max %u", - page->inuse, page->objects); + if (slab->inuse > slab->objects) { + slab_err(s, slab, "inuse %u > max %u", + slab->inuse, slab->objects); return 0; } /* Slab_pad_check fixes things up after itself */ - slab_pad_check(s, page); + slab_pad_check(s, slab); return 1; } =20 @@ -1142,26 +1142,26 @@ static int check_slab(struct kmem_cache *s, struc= t page *page) * Determine if a certain object on a page is on the freelist. Must hold= the * slab lock to guarantee that the chains are in a consistent state. 
*/ -static int on_freelist(struct kmem_cache *s, struct page *page, void *se= arch) +static int on_freelist(struct kmem_cache *s, struct slab *slab, void *se= arch) { int nr =3D 0; void *fp; void *object =3D NULL; int max_objects; =20 - fp =3D page->freelist; - while (fp && nr <=3D page->objects) { + fp =3D slab->freelist; + while (fp && nr <=3D slab->objects) { if (fp =3D=3D search) return 1; - if (!check_valid_pointer(s, page, fp)) { + if (!check_valid_pointer(s, slab, fp)) { if (object) { - object_err(s, page, object, + object_err(s, slab, object, "Freechain corrupt"); set_freepointer(s, object, NULL); } else { - slab_err(s, page, "Freepointer corrupt"); - page->freelist =3D NULL; - page->inuse =3D page->objects; + slab_err(s, slab, "Freepointer corrupt"); + slab->freelist =3D NULL; + slab->inuse =3D slab->objects; slab_fix(s, "Freelist cleared"); return 0; } @@ -1172,34 +1172,34 @@ static int on_freelist(struct kmem_cache *s, stru= ct page *page, void *search) nr++; } =20 - max_objects =3D order_objects(compound_order(page), s->size); + max_objects =3D order_objects(slab_order(slab), s->size); if (max_objects > MAX_OBJS_PER_PAGE) max_objects =3D MAX_OBJS_PER_PAGE; =20 - if (page->objects !=3D max_objects) { - slab_err(s, page, "Wrong number of objects. Found %d but should be %d"= , - page->objects, max_objects); - page->objects =3D max_objects; + if (slab->objects !=3D max_objects) { + slab_err(s, slab, "Wrong number of objects. Found %d but should be %d"= , + slab->objects, max_objects); + slab->objects =3D max_objects; slab_fix(s, "Number of objects adjusted"); } - if (page->inuse !=3D page->objects - nr) { - slab_err(s, page, "Wrong object count. Counter is %d but counted were = %d", - page->inuse, page->objects - nr); - page->inuse =3D page->objects - nr; + if (slab->inuse !=3D slab->objects - nr) { + slab_err(s, slab, "Wrong object count. Counter is %d but counted were = %d", + slab->inuse, slab->objects - nr); + slab->inuse =3D slab->objects - nr; slab_fix(s, "Object count adjusted"); } return search =3D=3D NULL; } =20 -static void trace(struct kmem_cache *s, struct page *page, void *object, +static void trace(struct kmem_cache *s, struct slab *slab, void *object, int alloc) { if (s->flags & SLAB_TRACE) { pr_info("TRACE %s %s 0x%p inuse=3D%d fp=3D0x%p\n", s->name, alloc ? "alloc" : "free", - object, page->inuse, - page->freelist); + object, slab->inuse, + slab->freelist); =20 if (!alloc) print_section(KERN_INFO, "Object ", (void *)object, @@ -1213,22 +1213,22 @@ static void trace(struct kmem_cache *s, struct pa= ge *page, void *object, * Tracking of fully allocated slabs for debugging purposes. 
*/ static void add_full(struct kmem_cache *s, - struct kmem_cache_node *n, struct page *page) + struct kmem_cache_node *n, struct slab *slab) { if (!(s->flags & SLAB_STORE_USER)) return; =20 lockdep_assert_held(&n->list_lock); - list_add(&page->slab_list, &n->full); + list_add(&slab->slab_list, &n->full); } =20 -static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n,= struct page *page) +static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n,= struct slab *slab) { if (!(s->flags & SLAB_STORE_USER)) return; =20 lockdep_assert_held(&n->list_lock); - list_del(&page->slab_list); + list_del(&slab->slab_list); } =20 /* Tracking of the number of slabs for debugging purposes */ @@ -1268,7 +1268,7 @@ static inline void dec_slabs_node(struct kmem_cache= *s, int node, int objects) } =20 /* Object debug checks for alloc/free paths */ -static void setup_object_debug(struct kmem_cache *s, struct page *page, +static void setup_object_debug(struct kmem_cache *s, struct slab *slab, void *object) { if (!kmem_cache_debug_flags(s, SLAB_STORE_USER|SLAB_RED_ZONE|__OBJECT_P= OISON)) @@ -1279,89 +1279,89 @@ static void setup_object_debug(struct kmem_cache = *s, struct page *page, } =20 static -void setup_page_debug(struct kmem_cache *s, struct page *page, void *add= r) +void setup_slab_debug(struct kmem_cache *s, struct slab *slab, void *add= r) { if (!kmem_cache_debug_flags(s, SLAB_POISON)) return; =20 metadata_access_enable(); - memset(kasan_reset_tag(addr), POISON_INUSE, page_size(page)); + memset(kasan_reset_tag(addr), POISON_INUSE, slab_size(slab)); metadata_access_disable(); } =20 static inline int alloc_consistency_checks(struct kmem_cache *s, - struct page *page, void *object) + struct slab *slab, void *object) { - if (!check_slab(s, page)) + if (!check_slab(s, slab)) return 0; =20 - if (!check_valid_pointer(s, page, object)) { - object_err(s, page, object, "Freelist Pointer check fails"); + if (!check_valid_pointer(s, slab, object)) { + object_err(s, slab, object, "Freelist Pointer check fails"); return 0; } =20 - if (!check_object(s, page, object, SLUB_RED_INACTIVE)) + if (!check_object(s, slab, object, SLUB_RED_INACTIVE)) return 0; =20 return 1; } =20 static noinline int alloc_debug_processing(struct kmem_cache *s, - struct page *page, + struct slab *slab, void *object, unsigned long addr) { if (s->flags & SLAB_CONSISTENCY_CHECKS) { - if (!alloc_consistency_checks(s, page, object)) + if (!alloc_consistency_checks(s, slab, object)) goto bad; } =20 /* Success perform special debug activities for allocs */ if (s->flags & SLAB_STORE_USER) set_track(s, object, TRACK_ALLOC, addr); - trace(s, page, object, 1); + trace(s, slab, object, 1); init_object(s, object, SLUB_RED_ACTIVE); return 1; =20 bad: - if (PageSlab(page)) { + if (folio_test_slab(slab_folio(slab))) { /* * If this is a slab page then lets do the best we can * to avoid issues in the future. Marking all objects * as used avoids touching the remaining objects. 
*/ slab_fix(s, "Marking all objects used"); - page->inuse =3D page->objects; - page->freelist =3D NULL; + slab->inuse =3D slab->objects; + slab->freelist =3D NULL; } return 0; } =20 static inline int free_consistency_checks(struct kmem_cache *s, - struct page *page, void *object, unsigned long addr) + struct slab *slab, void *object, unsigned long addr) { - if (!check_valid_pointer(s, page, object)) { - slab_err(s, page, "Invalid object pointer 0x%p", object); + if (!check_valid_pointer(s, slab, object)) { + slab_err(s, slab, "Invalid object pointer 0x%p", object); return 0; } =20 - if (on_freelist(s, page, object)) { - object_err(s, page, object, "Object already free"); + if (on_freelist(s, slab, object)) { + object_err(s, slab, object, "Object already free"); return 0; } =20 - if (!check_object(s, page, object, SLUB_RED_ACTIVE)) + if (!check_object(s, slab, object, SLUB_RED_ACTIVE)) return 0; =20 - if (unlikely(s !=3D page->slab_cache)) { - if (!PageSlab(page)) { - slab_err(s, page, "Attempt to free object(0x%p) outside of slab", + if (unlikely(s !=3D slab->slab_cache)) { + if (!folio_test_slab(slab_folio(slab))) { + slab_err(s, slab, "Attempt to free object(0x%p) outside of slab", object); - } else if (!page->slab_cache) { + } else if (!slab->slab_cache) { pr_err("SLUB : no slab for object 0x%p.\n", object); dump_stack(); } else - object_err(s, page, object, + object_err(s, slab, object, "page slab pointer corrupt."); return 0; } @@ -1370,21 +1370,21 @@ static inline int free_consistency_checks(struct = kmem_cache *s, =20 /* Supports checking bulk free of a constructed freelist */ static noinline int free_debug_processing( - struct kmem_cache *s, struct page *page, + struct kmem_cache *s, struct slab *slab, void *head, void *tail, int bulk_cnt, unsigned long addr) { - struct kmem_cache_node *n =3D get_node(s, page_to_nid(page)); + struct kmem_cache_node *n =3D get_node(s, slab_nid(slab)); void *object =3D head; int cnt =3D 0; unsigned long flags, flags2; int ret =3D 0; =20 spin_lock_irqsave(&n->list_lock, flags); - slab_lock(page, &flags2); + slab_lock(slab, &flags2); =20 if (s->flags & SLAB_CONSISTENCY_CHECKS) { - if (!check_slab(s, page)) + if (!check_slab(s, slab)) goto out; } =20 @@ -1392,13 +1392,13 @@ static noinline int free_debug_processing( cnt++; =20 if (s->flags & SLAB_CONSISTENCY_CHECKS) { - if (!free_consistency_checks(s, page, object, addr)) + if (!free_consistency_checks(s, slab, object, addr)) goto out; } =20 if (s->flags & SLAB_STORE_USER) set_track(s, object, TRACK_FREE, addr); - trace(s, page, object, 0); + trace(s, slab, object, 0); /* Freepointer not overwritten by init_object(), SLAB_POISON moved it *= / init_object(s, object, SLUB_RED_INACTIVE); =20 @@ -1411,10 +1411,10 @@ static noinline int free_debug_processing( =20 out: if (cnt !=3D bulk_cnt) - slab_err(s, page, "Bulk freelist count(%d) invalid(%d)\n", + slab_err(s, slab, "Bulk freelist count(%d) invalid(%d)\n", bulk_cnt, cnt); =20 - slab_unlock(page, &flags2); + slab_unlock(slab, &flags2); spin_unlock_irqrestore(&n->list_lock, flags); if (!ret) slab_fix(s, "Object at 0x%p not freed", object); @@ -1629,26 +1629,26 @@ slab_flags_t kmem_cache_flags(unsigned int object= _size, } #else /* !CONFIG_SLUB_DEBUG */ static inline void setup_object_debug(struct kmem_cache *s, - struct page *page, void *object) {} + struct slab *slab, void *object) {} static inline -void setup_page_debug(struct kmem_cache *s, struct page *page, void *add= r) {} +void setup_slab_debug(struct kmem_cache *s, struct slab *slab, void *add= r) 
{} =20 static inline int alloc_debug_processing(struct kmem_cache *s, - struct page *page, void *object, unsigned long addr) { return 0; } + struct slab *slab, void *object, unsigned long addr) { return 0; } =20 static inline int free_debug_processing( - struct kmem_cache *s, struct page *page, + struct kmem_cache *s, struct slab *slab, void *head, void *tail, int bulk_cnt, unsigned long addr) { return 0; } =20 -static inline int slab_pad_check(struct kmem_cache *s, struct page *page= ) +static inline int slab_pad_check(struct kmem_cache *s, struct slab *slab= ) { return 1; } -static inline int check_object(struct kmem_cache *s, struct page *page, +static inline int check_object(struct kmem_cache *s, struct slab *slab, void *object, u8 val) { return 1; } static inline void add_full(struct kmem_cache *s, struct kmem_cache_node= *n, - struct page *page) {} + struct slab *slab) {} static inline void remove_full(struct kmem_cache *s, struct kmem_cache_n= ode *n, - struct page *page) {} + struct slab *slab) {} slab_flags_t kmem_cache_flags(unsigned int object_size, slab_flags_t flags, const char *name) { @@ -1667,7 +1667,7 @@ static inline void inc_slabs_node(struct kmem_cache= *s, int node, static inline void dec_slabs_node(struct kmem_cache *s, int node, int objects) {} =20 -static bool freelist_corrupted(struct kmem_cache *s, struct page *page, +static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab, void **freelist, void *nextfree) { return false; @@ -1772,10 +1772,10 @@ static inline bool slab_free_freelist_hook(struct= kmem_cache *s, return *head !=3D NULL; } =20 -static void *setup_object(struct kmem_cache *s, struct page *page, +static void *setup_object(struct kmem_cache *s, struct slab *slab, void *object) { - setup_object_debug(s, page, object); + setup_object_debug(s, slab, object); object =3D kasan_init_slab_obj(s, object); if (unlikely(s->ctor)) { kasan_unpoison_object_data(s, object); @@ -1853,7 +1853,7 @@ static void __init init_freelist_randomization(void= ) } =20 /* Get the next entry on the pre-computed freelist randomized */ -static void *next_freelist_entry(struct kmem_cache *s, struct page *page= , +static void *next_freelist_entry(struct kmem_cache *s, struct slab *slab= , unsigned long *pos, void *start, unsigned long page_limit, unsigned long freelist_count) @@ -1875,32 +1875,32 @@ static void *next_freelist_entry(struct kmem_cach= e *s, struct page *page, } =20 /* Shuffle the single linked freelist based on a random pre-computed seq= uence */ -static bool shuffle_freelist(struct kmem_cache *s, struct page *page) +static bool shuffle_freelist(struct kmem_cache *s, struct slab *slab) { void *start; void *cur; void *next; unsigned long idx, pos, page_limit, freelist_count; =20 - if (page->objects < 2 || !s->random_seq) + if (slab->objects < 2 || !s->random_seq) return false; =20 freelist_count =3D oo_objects(s->oo); pos =3D get_random_int() % freelist_count; =20 - page_limit =3D page->objects * s->size; - start =3D fixup_red_left(s, page_address(page)); + page_limit =3D slab->objects * s->size; + start =3D fixup_red_left(s, slab_address(slab)); =20 /* First entry is used as the base of the freelist */ - cur =3D next_freelist_entry(s, page, &pos, start, page_limit, + cur =3D next_freelist_entry(s, slab, &pos, start, page_limit, freelist_count); - cur =3D setup_object(s, page, cur); - page->freelist =3D cur; + cur =3D setup_object(s, slab, cur); + slab->freelist =3D cur; =20 - for (idx =3D 1; idx < page->objects; idx++) { - next =3D next_freelist_entry(s, 
page, &pos, start, page_limit, + for (idx =3D 1; idx < slab->objects; idx++) { + next =3D next_freelist_entry(s, slab, &pos, start, page_limit, freelist_count); - next =3D setup_object(s, page, next); + next =3D setup_object(s, slab, next); set_freepointer(s, cur, next); cur =3D next; } @@ -1914,15 +1914,15 @@ static inline int init_cache_random_seq(struct km= em_cache *s) return 0; } static inline void init_freelist_randomization(void) { } -static inline bool shuffle_freelist(struct kmem_cache *s, struct page *p= age) +static inline bool shuffle_freelist(struct kmem_cache *s, struct slab *s= lab) { return false; } #endif /* CONFIG_SLAB_FREELIST_RANDOM */ =20 -static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int= node) +static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int= node) { - struct page *page; + struct slab *slab; struct kmem_cache_order_objects oo =3D s->oo; gfp_t alloc_gfp; void *start, *p, *next; @@ -1941,60 +1941,60 @@ static struct page *allocate_slab(struct kmem_cac= he *s, gfp_t flags, int node) if ((alloc_gfp & __GFP_DIRECT_RECLAIM) && oo_order(oo) > oo_order(s->mi= n)) alloc_gfp =3D (alloc_gfp | __GFP_NOMEMALLOC) & ~(__GFP_RECLAIM|__GFP_N= OFAIL); =20 - page =3D slab_page(alloc_slab_page(s, alloc_gfp, node, oo)); - if (unlikely(!page)) { + slab =3D alloc_slab_page(s, alloc_gfp, node, oo); + if (unlikely(!slab)) { oo =3D s->min; alloc_gfp =3D flags; /* * Allocation may have failed due to fragmentation. * Try a lower order alloc if possible */ - page =3D slab_page(alloc_slab_page(s, alloc_gfp, node, oo)); - if (unlikely(!page)) + slab =3D alloc_slab_page(s, alloc_gfp, node, oo); + if (unlikely(!slab)) goto out; stat(s, ORDER_FALLBACK); } =20 - page->objects =3D oo_objects(oo); + slab->objects =3D oo_objects(oo); =20 - account_slab(page_slab(page), oo_order(oo), s, flags); + account_slab(slab, oo_order(oo), s, flags); =20 - page->slab_cache =3D s; + slab->slab_cache =3D s; =20 - kasan_poison_slab(page); + kasan_poison_slab(slab_page(slab)); =20 - start =3D page_address(page); + start =3D slab_address(slab); =20 - setup_page_debug(s, page, start); + setup_slab_debug(s, slab, start); =20 - shuffle =3D shuffle_freelist(s, page); + shuffle =3D shuffle_freelist(s, slab); =20 if (!shuffle) { start =3D fixup_red_left(s, start); - start =3D setup_object(s, page, start); - page->freelist =3D start; - for (idx =3D 0, p =3D start; idx < page->objects - 1; idx++) { + start =3D setup_object(s, slab, start); + slab->freelist =3D start; + for (idx =3D 0, p =3D start; idx < slab->objects - 1; idx++) { next =3D p + s->size; - next =3D setup_object(s, page, next); + next =3D setup_object(s, slab, next); set_freepointer(s, p, next); p =3D next; } set_freepointer(s, p, NULL); } =20 - page->inuse =3D page->objects; - page->frozen =3D 1; + slab->inuse =3D slab->objects; + slab->frozen =3D 1; =20 out: - if (!page) + if (!slab) return NULL; =20 - inc_slabs_node(s, page_to_nid(page), page->objects); + inc_slabs_node(s, slab_nid(slab), slab->objects); =20 - return page; + return slab; } =20 -static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node= ) +static struct slab *new_slab(struct kmem_cache *s, gfp_t flags, int node= ) { if (unlikely(flags & GFP_SLAB_BUG_MASK)) flags =3D kmalloc_fix_flags(flags); @@ -2014,9 +2014,9 @@ static void __free_slab(struct kmem_cache *s, struc= t slab *slab) if (kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS)) { void *p; =20 - slab_pad_check(s, folio_page(folio, 0)); + slab_pad_check(s, slab); 
for_each_object(p, s, slab_address(slab), slab->objects) - check_object(s, folio_page(folio, 0), p, SLUB_RED_INACTIVE); + check_object(s, slab, p, SLUB_RED_INACTIVE); } =20 __slab_clear_pfmemalloc(slab); @@ -2031,50 +2031,50 @@ static void __free_slab(struct kmem_cache *s, str= uct slab *slab) =20 static void rcu_free_slab(struct rcu_head *h) { - struct page *page =3D container_of(h, struct page, rcu_head); + struct slab *slab =3D container_of(h, struct slab, rcu_head); =20 - __free_slab(page->slab_cache, page_slab(page)); + __free_slab(slab->slab_cache, slab); } =20 -static void free_slab(struct kmem_cache *s, struct page *page) +static void free_slab(struct kmem_cache *s, struct slab *slab) { if (unlikely(s->flags & SLAB_TYPESAFE_BY_RCU)) { - call_rcu(&page->rcu_head, rcu_free_slab); + call_rcu(&slab->rcu_head, rcu_free_slab); } else - __free_slab(s, page_slab(page)); + __free_slab(s, slab); } =20 -static void discard_slab(struct kmem_cache *s, struct page *page) +static void discard_slab(struct kmem_cache *s, struct slab *slab) { - dec_slabs_node(s, page_to_nid(page), page->objects); - free_slab(s, page); + dec_slabs_node(s, slab_nid(slab), slab->objects); + free_slab(s, slab); } =20 /* * Management of partially allocated slabs. */ static inline void -__add_partial(struct kmem_cache_node *n, struct page *page, int tail) +__add_partial(struct kmem_cache_node *n, struct slab *slab, int tail) { n->nr_partial++; if (tail =3D=3D DEACTIVATE_TO_TAIL) - list_add_tail(&page->slab_list, &n->partial); + list_add_tail(&slab->slab_list, &n->partial); else - list_add(&page->slab_list, &n->partial); + list_add(&slab->slab_list, &n->partial); } =20 static inline void add_partial(struct kmem_cache_node *n, - struct page *page, int tail) + struct slab *slab, int tail) { lockdep_assert_held(&n->list_lock); - __add_partial(n, page, tail); + __add_partial(n, slab, tail); } =20 static inline void remove_partial(struct kmem_cache_node *n, - struct page *page) + struct slab *slab) { lockdep_assert_held(&n->list_lock); - list_del(&page->slab_list); + list_del(&slab->slab_list); n->nr_partial--; } =20 @@ -2085,12 +2085,12 @@ static inline void remove_partial(struct kmem_cac= he_node *n, * Returns a list of objects or NULL if it fails. */ static inline void *acquire_slab(struct kmem_cache *s, - struct kmem_cache_node *n, struct page *page, + struct kmem_cache_node *n, struct slab *slab, int mode) { void *freelist; unsigned long counters; - struct page new; + struct slab new; =20 lockdep_assert_held(&n->list_lock); =20 @@ -2099,11 +2099,11 @@ static inline void *acquire_slab(struct kmem_cach= e *s, * The old freelist is the list of objects for the * per cpu allocation list. 
*/ - freelist =3D page->freelist; - counters =3D page->counters; + freelist =3D slab->freelist; + counters =3D slab->counters; new.counters =3D counters; if (mode) { - new.inuse =3D page->objects; + new.inuse =3D slab->objects; new.freelist =3D NULL; } else { new.freelist =3D freelist; @@ -2112,21 +2112,21 @@ static inline void *acquire_slab(struct kmem_cach= e *s, VM_BUG_ON(new.frozen); new.frozen =3D 1; =20 - if (!__cmpxchg_double_slab(s, page, + if (!__cmpxchg_double_slab(s, slab, freelist, counters, new.freelist, new.counters, "acquire_slab")) return NULL; =20 - remove_partial(n, page); + remove_partial(n, slab); WARN_ON(!freelist); return freelist; } =20 #ifdef CONFIG_SLUB_CPU_PARTIAL -static void put_cpu_partial(struct kmem_cache *s, struct page *page, int= drain); +static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int= drain); #else -static inline void put_cpu_partial(struct kmem_cache *s, struct page *pa= ge, +static inline void put_cpu_partial(struct kmem_cache *s, struct slab *sl= ab, int drain) { } #endif static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags); @@ -2135,12 +2135,12 @@ static inline bool pfmemalloc_match(struct slab *= slab, gfp_t gfpflags); * Try to allocate a partial slab from a specific node. */ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_no= de *n, - struct page **ret_page, gfp_t gfpflags) + struct slab **ret_slab, gfp_t gfpflags) { - struct page *page, *page2; + struct slab *slab, *slab2; void *object =3D NULL; unsigned long flags; - unsigned int partial_pages =3D 0; + unsigned int partial_slabs =3D 0; =20 /* * Racy check. If we mistakenly see no partial slabs then we @@ -2152,28 +2152,28 @@ static void *get_partial_node(struct kmem_cache *= s, struct kmem_cache_node *n, return NULL; =20 spin_lock_irqsave(&n->list_lock, flags); - list_for_each_entry_safe(page, page2, &n->partial, slab_list) { + list_for_each_entry_safe(slab, slab2, &n->partial, slab_list) { void *t; =20 - if (!pfmemalloc_match(page_slab(page), gfpflags)) + if (!pfmemalloc_match(slab, gfpflags)) continue; =20 - t =3D acquire_slab(s, n, page, object =3D=3D NULL); + t =3D acquire_slab(s, n, slab, object =3D=3D NULL); if (!t) break; =20 if (!object) { - *ret_page =3D page; + *ret_slab =3D slab; stat(s, ALLOC_FROM_PARTIAL); object =3D t; } else { - put_cpu_partial(s, page, 0); + put_cpu_partial(s, slab, 0); stat(s, CPU_PARTIAL_NODE); - partial_pages++; + partial_slabs++; } #ifdef CONFIG_SLUB_CPU_PARTIAL if (!kmem_cache_has_cpu_partial(s) - || partial_pages > s->cpu_partial_pages / 2) + || partial_slabs > s->cpu_partial_slabs / 2) break; #else break; @@ -2188,7 +2188,7 @@ static void *get_partial_node(struct kmem_cache *s,= struct kmem_cache_node *n, * Get a page from somewhere. Search in increasing NUMA distances. */ static void *get_any_partial(struct kmem_cache *s, gfp_t flags, - struct page **ret_page) + struct slab **ret_slab) { #ifdef CONFIG_NUMA struct zonelist *zonelist; @@ -2230,7 +2230,7 @@ static void *get_any_partial(struct kmem_cache *s, = gfp_t flags, =20 if (n && cpuset_zone_allowed(zone, flags) && n->nr_partial > s->min_partial) { - object =3D get_partial_node(s, n, ret_page, flags); + object =3D get_partial_node(s, n, ret_slab, flags); if (object) { /* * Don't check read_mems_allowed_retry() @@ -2252,7 +2252,7 @@ static void *get_any_partial(struct kmem_cache *s, = gfp_t flags, * Get a partial page, lock it and return it. 
*/ static void *get_partial(struct kmem_cache *s, gfp_t flags, int node, - struct page **ret_page) + struct slab **ret_slab) { void *object; int searchnode =3D node; @@ -2260,11 +2260,11 @@ static void *get_partial(struct kmem_cache *s, gf= p_t flags, int node, if (node =3D=3D NUMA_NO_NODE) searchnode =3D numa_mem_id(); =20 - object =3D get_partial_node(s, get_node(s, searchnode), ret_page, flags= ); + object =3D get_partial_node(s, get_node(s, searchnode), ret_slab, flags= ); if (object || node !=3D NUMA_NO_NODE) return object; =20 - return get_any_partial(s, flags, ret_page); + return get_any_partial(s, flags, ret_slab); } =20 #ifdef CONFIG_PREEMPTION @@ -2346,20 +2346,20 @@ static void init_kmem_cache_cpus(struct kmem_cach= e *s) * Assumes the slab has been already safely taken away from kmem_cache_c= pu * by the caller. */ -static void deactivate_slab(struct kmem_cache *s, struct page *page, +static void deactivate_slab(struct kmem_cache *s, struct slab *slab, void *freelist) { enum slab_modes { M_NONE, M_PARTIAL, M_FULL, M_FREE }; - struct kmem_cache_node *n =3D get_node(s, page_to_nid(page)); + struct kmem_cache_node *n =3D get_node(s, slab_nid(slab)); int lock =3D 0, free_delta =3D 0; enum slab_modes l =3D M_NONE, m =3D M_NONE; void *nextfree, *freelist_iter, *freelist_tail; int tail =3D DEACTIVATE_TO_HEAD; unsigned long flags =3D 0; - struct page new; - struct page old; + struct slab new; + struct slab old; =20 - if (page->freelist) { + if (slab->freelist) { stat(s, DEACTIVATE_REMOTE_FREES); tail =3D DEACTIVATE_TO_TAIL; } @@ -2378,7 +2378,7 @@ static void deactivate_slab(struct kmem_cache *s, s= truct page *page, * 'freelist_iter' is already corrupted. So isolate all objects * starting at 'freelist_iter' by skipping them. */ - if (freelist_corrupted(s, page, &freelist_iter, nextfree)) + if (freelist_corrupted(s, slab, &freelist_iter, nextfree)) break; =20 freelist_tail =3D freelist_iter; @@ -2405,8 +2405,8 @@ static void deactivate_slab(struct kmem_cache *s, s= truct page *page, */ redo: =20 - old.freelist =3D READ_ONCE(page->freelist); - old.counters =3D READ_ONCE(page->counters); + old.freelist =3D READ_ONCE(slab->freelist); + old.counters =3D READ_ONCE(slab->counters); VM_BUG_ON(!old.frozen); =20 /* Determine target state of the slab */ @@ -2448,18 +2448,18 @@ static void deactivate_slab(struct kmem_cache *s,= struct page *page, =20 if (l !=3D m) { if (l =3D=3D M_PARTIAL) - remove_partial(n, page); + remove_partial(n, slab); else if (l =3D=3D M_FULL) - remove_full(s, n, page); + remove_full(s, n, slab); =20 if (m =3D=3D M_PARTIAL) - add_partial(n, page, tail); + add_partial(n, slab, tail); else if (m =3D=3D M_FULL) - add_full(s, n, page); + add_full(s, n, slab); } =20 l =3D m; - if (!cmpxchg_double_slab(s, page, + if (!cmpxchg_double_slab(s, slab, old.freelist, old.counters, new.freelist, new.counters, "unfreezing slab")) @@ -2474,26 +2474,26 @@ static void deactivate_slab(struct kmem_cache *s,= struct page *page, stat(s, DEACTIVATE_FULL); else if (m =3D=3D M_FREE) { stat(s, DEACTIVATE_EMPTY); - discard_slab(s, page); + discard_slab(s, slab); stat(s, FREE_SLAB); } } =20 #ifdef CONFIG_SLUB_CPU_PARTIAL -static void __unfreeze_partials(struct kmem_cache *s, struct page *parti= al_page) +static void __unfreeze_partials(struct kmem_cache *s, struct slab *parti= al_slab) { struct kmem_cache_node *n =3D NULL, *n2 =3D NULL; - struct page *page, *discard_page =3D NULL; + struct slab *slab, *slab_to_discard =3D NULL; unsigned long flags =3D 0; =20 - while (partial_page) { - struct page 
new; - struct page old; + while (partial_slab) { + struct slab new; + struct slab old; =20 - page =3D partial_page; - partial_page =3D page->next; + slab =3D partial_slab; + partial_slab =3D slab->next; =20 - n2 =3D get_node(s, page_to_nid(page)); + n2 =3D get_node(s, slab_nid(slab)); if (n !=3D n2) { if (n) spin_unlock_irqrestore(&n->list_lock, flags); @@ -2504,8 +2504,8 @@ static void __unfreeze_partials(struct kmem_cache *= s, struct page *partial_page) =20 do { =20 - old.freelist =3D page->freelist; - old.counters =3D page->counters; + old.freelist =3D slab->freelist; + old.counters =3D slab->counters; VM_BUG_ON(!old.frozen); =20 new.counters =3D old.counters; @@ -2513,16 +2513,16 @@ static void __unfreeze_partials(struct kmem_cache= *s, struct page *partial_page) =20 new.frozen =3D 0; =20 - } while (!__cmpxchg_double_slab(s, page, + } while (!__cmpxchg_double_slab(s, slab, old.freelist, old.counters, new.freelist, new.counters, "unfreezing slab")); =20 if (unlikely(!new.inuse && n->nr_partial >=3D s->min_partial)) { - page->next =3D discard_page; - discard_page =3D page; + slab->next =3D slab_to_discard; + slab_to_discard =3D slab; } else { - add_partial(n, page, DEACTIVATE_TO_TAIL); + add_partial(n, slab, DEACTIVATE_TO_TAIL); stat(s, FREE_ADD_PARTIAL); } } @@ -2530,12 +2530,12 @@ static void __unfreeze_partials(struct kmem_cache= *s, struct page *partial_page) if (n) spin_unlock_irqrestore(&n->list_lock, flags); =20 - while (discard_page) { - page =3D discard_page; - discard_page =3D discard_page->next; + while (slab_to_discard) { + slab =3D slab_to_discard; + slab_to_discard =3D slab_to_discard->next; =20 stat(s, DEACTIVATE_EMPTY); - discard_slab(s, page); + discard_slab(s, slab); stat(s, FREE_SLAB); } } @@ -2545,28 +2545,28 @@ static void __unfreeze_partials(struct kmem_cache= *s, struct page *partial_page) */ static void unfreeze_partials(struct kmem_cache *s) { - struct page *partial_page; + struct slab *partial_slab; unsigned long flags; =20 local_lock_irqsave(&s->cpu_slab->lock, flags); - partial_page =3D this_cpu_read(s->cpu_slab->partial); + partial_slab =3D this_cpu_read(s->cpu_slab->partial); this_cpu_write(s->cpu_slab->partial, NULL); local_unlock_irqrestore(&s->cpu_slab->lock, flags); =20 - if (partial_page) - __unfreeze_partials(s, partial_page); + if (partial_slab) + __unfreeze_partials(s, partial_slab); } =20 static void unfreeze_partials_cpu(struct kmem_cache *s, struct kmem_cache_cpu *c) { - struct page *partial_page; + struct slab *partial_slab; =20 - partial_page =3D slub_percpu_partial(c); + partial_slab =3D slub_percpu_partial(c); c->partial =3D NULL; =20 - if (partial_page) - __unfreeze_partials(s, partial_page); + if (partial_slab) + __unfreeze_partials(s, partial_slab); } =20 /* @@ -2576,42 +2576,42 @@ static void unfreeze_partials_cpu(struct kmem_cac= he *s, * If we did not find a slot then simply move all the partials to the * per node partial list. 
*/ -static void put_cpu_partial(struct kmem_cache *s, struct page *page, int= drain) +static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int= drain) { - struct page *oldpage; - struct page *page_to_unfreeze =3D NULL; + struct slab *oldslab; + struct slab *slab_to_unfreeze =3D NULL; unsigned long flags; - int pages =3D 0; + int slabs =3D 0; =20 local_lock_irqsave(&s->cpu_slab->lock, flags); =20 - oldpage =3D this_cpu_read(s->cpu_slab->partial); + oldslab =3D this_cpu_read(s->cpu_slab->partial); =20 - if (oldpage) { - if (drain && oldpage->pages >=3D s->cpu_partial_pages) { + if (oldslab) { + if (drain && oldslab->slabs >=3D s->cpu_partial_slabs) { /* * Partial array is full. Move the existing set to the * per node partial list. Postpone the actual unfreezing * outside of the critical section. */ - page_to_unfreeze =3D oldpage; - oldpage =3D NULL; + slab_to_unfreeze =3D oldslab; + oldslab =3D NULL; } else { - pages =3D oldpage->pages; + slabs =3D oldslab->slabs; } } =20 - pages++; + slabs++; =20 - page->pages =3D pages; - page->next =3D oldpage; + slab->slabs =3D slabs; + slab->next =3D oldslab; =20 - this_cpu_write(s->cpu_slab->partial, page); + this_cpu_write(s->cpu_slab->partial, slab); =20 local_unlock_irqrestore(&s->cpu_slab->lock, flags); =20 - if (page_to_unfreeze) { - __unfreeze_partials(s, page_to_unfreeze); + if (slab_to_unfreeze) { + __unfreeze_partials(s, slab_to_unfreeze); stat(s, CPU_PARTIAL_DRAIN); } } @@ -2627,22 +2627,22 @@ static inline void unfreeze_partials_cpu(struct k= mem_cache *s, static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cp= u *c) { unsigned long flags; - struct page *page; + struct slab *slab; void *freelist; =20 local_lock_irqsave(&s->cpu_slab->lock, flags); =20 - page =3D c->page; + slab =3D c->slab; freelist =3D c->freelist; =20 - c->page =3D NULL; + c->slab =3D NULL; c->freelist =3D NULL; c->tid =3D next_tid(c->tid); =20 local_unlock_irqrestore(&s->cpu_slab->lock, flags); =20 - if (page) { - deactivate_slab(s, page, freelist); + if (slab) { + deactivate_slab(s, slab, freelist); stat(s, CPUSLAB_FLUSH); } } @@ -2651,14 +2651,14 @@ static inline void __flush_cpu_slab(struct kmem_c= ache *s, int cpu) { struct kmem_cache_cpu *c =3D per_cpu_ptr(s->cpu_slab, cpu); void *freelist =3D c->freelist; - struct page *page =3D c->page; + struct slab *slab =3D c->slab; =20 - c->page =3D NULL; + c->slab =3D NULL; c->freelist =3D NULL; c->tid =3D next_tid(c->tid); =20 - if (page) { - deactivate_slab(s, page, freelist); + if (slab) { + deactivate_slab(s, slab, freelist); stat(s, CPUSLAB_FLUSH); } =20 @@ -2687,7 +2687,7 @@ static void flush_cpu_slab(struct work_struct *w) s =3D sfw->s; c =3D this_cpu_ptr(s->cpu_slab); =20 - if (c->page) + if (c->slab) flush_slab(s, c); =20 unfreeze_partials(s); @@ -2697,7 +2697,7 @@ static bool has_cpu_slab(int cpu, struct kmem_cache= *s) { struct kmem_cache_cpu *c =3D per_cpu_ptr(s->cpu_slab, cpu); =20 - return c->page || slub_percpu_partial(c); + return c->slab || slub_percpu_partial(c); } =20 static DEFINE_MUTEX(flush_lock); @@ -2759,19 +2759,19 @@ static int slub_cpu_dead(unsigned int cpu) * Check if the objects in a per cpu structure fit numa * locality expectations. 
*/ -static inline int node_match(struct page *page, int node) +static inline int node_match(struct slab *slab, int node) { #ifdef CONFIG_NUMA - if (node !=3D NUMA_NO_NODE && page_to_nid(page) !=3D node) + if (node !=3D NUMA_NO_NODE && slab_nid(slab) !=3D node) return 0; #endif return 1; } =20 #ifdef CONFIG_SLUB_DEBUG -static int count_free(struct page *page) +static int count_free(struct slab *slab) { - return page->objects - page->inuse; + return slab->objects - slab->inuse; } =20 static inline unsigned long node_nr_objs(struct kmem_cache_node *n) @@ -2782,15 +2782,15 @@ static inline unsigned long node_nr_objs(struct k= mem_cache_node *n) =20 #if defined(CONFIG_SLUB_DEBUG) || defined(CONFIG_SYSFS) static unsigned long count_partial(struct kmem_cache_node *n, - int (*get_count)(struct page *)) + int (*get_count)(struct slab *)) { unsigned long flags; unsigned long x =3D 0; - struct page *page; + struct slab *slab; =20 spin_lock_irqsave(&n->list_lock, flags); - list_for_each_entry(page, &n->partial, slab_list) - x +=3D get_count(page); + list_for_each_entry(slab, &n->partial, slab_list) + x +=3D get_count(slab); spin_unlock_irqrestore(&n->list_lock, flags); return x; } @@ -2849,25 +2849,25 @@ static inline bool pfmemalloc_match(struct slab *= slab, gfp_t gfpflags) * * If this function returns NULL then the page has been unfrozen. */ -static inline void *get_freelist(struct kmem_cache *s, struct page *page= ) +static inline void *get_freelist(struct kmem_cache *s, struct slab *slab= ) { - struct page new; + struct slab new; unsigned long counters; void *freelist; =20 lockdep_assert_held(this_cpu_ptr(&s->cpu_slab->lock)); =20 do { - freelist =3D page->freelist; - counters =3D page->counters; + freelist =3D slab->freelist; + counters =3D slab->counters; =20 new.counters =3D counters; VM_BUG_ON(!new.frozen); =20 - new.inuse =3D page->objects; + new.inuse =3D slab->objects; new.frozen =3D freelist !=3D NULL; =20 - } while (!__cmpxchg_double_slab(s, page, + } while (!__cmpxchg_double_slab(s, slab, freelist, counters, NULL, new.counters, "get_freelist")); @@ -2898,15 +2898,15 @@ static void *___slab_alloc(struct kmem_cache *s, = gfp_t gfpflags, int node, unsigned long addr, struct kmem_cache_cpu *c) { void *freelist; - struct page *page; + struct slab *slab; unsigned long flags; =20 stat(s, ALLOC_SLOWPATH); =20 reread_page: =20 - page =3D READ_ONCE(c->page); - if (!page) { + slab =3D READ_ONCE(c->slab); + if (!slab) { /* * if the node is not online or has no normal memory, just * ignore the node constraint @@ -2918,7 +2918,7 @@ static void *___slab_alloc(struct kmem_cache *s, gf= p_t gfpflags, int node, } redo: =20 - if (unlikely(!node_match(page, node))) { + if (unlikely(!node_match(slab, node))) { /* * same as above but node_match() being false already * implies node !=3D NUMA_NO_NODE @@ -2937,12 +2937,12 @@ static void *___slab_alloc(struct kmem_cache *s, = gfp_t gfpflags, int node, * PFMEMALLOC but right now, we are losing the pfmemalloc * information when the page leaves the per-cpu allocator */ - if (unlikely(!pfmemalloc_match(page_slab(page), gfpflags))) + if (unlikely(!pfmemalloc_match(slab, gfpflags))) goto deactivate_slab; =20 /* must check again c->page in case we got preempted and it changed */ local_lock_irqsave(&s->cpu_slab->lock, flags); - if (unlikely(page !=3D c->page)) { + if (unlikely(slab !=3D c->slab)) { local_unlock_irqrestore(&s->cpu_slab->lock, flags); goto reread_page; } @@ -2950,10 +2950,10 @@ static void *___slab_alloc(struct kmem_cache *s, = gfp_t gfpflags, int node, 
if (freelist) goto load_freelist; =20 - freelist =3D get_freelist(s, page); + freelist =3D get_freelist(s, slab); =20 if (!freelist) { - c->page =3D NULL; + c->slab =3D NULL; local_unlock_irqrestore(&s->cpu_slab->lock, flags); stat(s, DEACTIVATE_BYPASS); goto new_slab; @@ -2970,7 +2970,7 @@ static void *___slab_alloc(struct kmem_cache *s, gf= p_t gfpflags, int node, * page is pointing to the page from which the objects are obtained. * That page must be frozen for per cpu allocations to work. */ - VM_BUG_ON(!c->page->frozen); + VM_BUG_ON(!c->slab->frozen); c->freelist =3D get_freepointer(s, freelist); c->tid =3D next_tid(c->tid); local_unlock_irqrestore(&s->cpu_slab->lock, flags); @@ -2979,21 +2979,21 @@ static void *___slab_alloc(struct kmem_cache *s, = gfp_t gfpflags, int node, deactivate_slab: =20 local_lock_irqsave(&s->cpu_slab->lock, flags); - if (page !=3D c->page) { + if (slab !=3D c->slab) { local_unlock_irqrestore(&s->cpu_slab->lock, flags); goto reread_page; } freelist =3D c->freelist; - c->page =3D NULL; + c->slab =3D NULL; c->freelist =3D NULL; local_unlock_irqrestore(&s->cpu_slab->lock, flags); - deactivate_slab(s, page, freelist); + deactivate_slab(s, slab, freelist); =20 new_slab: =20 if (slub_percpu_partial(c)) { local_lock_irqsave(&s->cpu_slab->lock, flags); - if (unlikely(c->page)) { + if (unlikely(c->slab)) { local_unlock_irqrestore(&s->cpu_slab->lock, flags); goto reread_page; } @@ -3003,8 +3003,8 @@ static void *___slab_alloc(struct kmem_cache *s, gf= p_t gfpflags, int node, goto new_objects; } =20 - page =3D c->page =3D slub_percpu_partial(c); - slub_set_percpu_partial(c, page); + slab =3D c->slab =3D slub_percpu_partial(c); + slub_set_percpu_partial(c, slab); local_unlock_irqrestore(&s->cpu_slab->lock, flags); stat(s, CPU_PARTIAL_ALLOC); goto redo; @@ -3012,15 +3012,15 @@ static void *___slab_alloc(struct kmem_cache *s, = gfp_t gfpflags, int node, =20 new_objects: =20 - freelist =3D get_partial(s, gfpflags, node, &page); + freelist =3D get_partial(s, gfpflags, node, &slab); if (freelist) goto check_new_page; =20 slub_put_cpu_ptr(s->cpu_slab); - page =3D new_slab(s, gfpflags, node); + slab =3D new_slab(s, gfpflags, node); c =3D slub_get_cpu_ptr(s->cpu_slab); =20 - if (unlikely(!page)) { + if (unlikely(!slab)) { slab_out_of_memory(s, gfpflags, node); return NULL; } @@ -3029,15 +3029,15 @@ static void *___slab_alloc(struct kmem_cache *s, = gfp_t gfpflags, int node, * No other reference to the page yet so we can * muck around with it freely without cmpxchg */ - freelist =3D page->freelist; - page->freelist =3D NULL; + freelist =3D slab->freelist; + slab->freelist =3D NULL; =20 stat(s, ALLOC_SLAB); =20 check_new_page: =20 if (kmem_cache_debug(s)) { - if (!alloc_debug_processing(s, page, freelist, addr)) { + if (!alloc_debug_processing(s, slab, freelist, addr)) { /* Slab failed checks. Next slab needed */ goto new_slab; } else { @@ -3049,7 +3049,7 @@ static void *___slab_alloc(struct kmem_cache *s, gf= p_t gfpflags, int node, } } =20 - if (unlikely(!pfmemalloc_match(page_slab(page), gfpflags))) + if (unlikely(!pfmemalloc_match(slab, gfpflags))) /* * For !pfmemalloc_match() case we don't load freelist so that * we don't make further mismatched allocations easier. 
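The get_freelist() loop used above follows the usual SLUB convention: snapshot the slab's freelist and counters, build the desired new state, and publish both with one cmpxchg_double, retrying from the top if another CPU changed the slab in between. A rough userspace sketch of that shape, simplified to a single atomic head pointer and using made-up demo_* names (illustrative only, not the kernel implementation):

/* Illustrative only: the snapshot/compare-exchange retry shape of
 * get_freelist()/__slab_free(), reduced to one atomic word. The kernel
 * updates a (freelist, counters) pair together with cmpxchg_double.
 */
#include <stdatomic.h>
#include <stddef.h>
#include <stdio.h>

struct object { struct object *next; };

struct demo_slab {
	_Atomic(struct object *) freelist;	/* head of free objects */
};

/* Detach the whole freelist in one shot, retrying if it changed. */
static struct object *demo_take_freelist(struct demo_slab *s)
{
	struct object *old = atomic_load(&s->freelist);

	do {
		if (!old)			/* nothing to take */
			return NULL;
		/* publish the new (empty) state; on failure 'old' is reloaded */
	} while (!atomic_compare_exchange_weak(&s->freelist, &old, NULL));

	return old;	/* caller now owns the detached chain */
}

int main(void)
{
	struct object objs[3] = { { &objs[1] }, { &objs[2] }, { NULL } };
	struct demo_slab s;

	atomic_init(&s.freelist, &objs[0]);

	for (struct object *p = demo_take_freelist(&s); p; p = p->next)
		printf("got object at %p\n", (void *)p);
	printf("freelist now %p\n", (void *)atomic_load(&s.freelist));
	return 0;
}

The kernel variant additionally folds the inuse/objects/frozen bookkeeping into the counters word, so the freelist and the object counts can never be observed out of sync.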
@@ -3059,29 +3059,29 @@ static void *___slab_alloc(struct kmem_cache *s, = gfp_t gfpflags, int node, retry_load_page: =20 local_lock_irqsave(&s->cpu_slab->lock, flags); - if (unlikely(c->page)) { + if (unlikely(c->slab)) { void *flush_freelist =3D c->freelist; - struct page *flush_page =3D c->page; + struct slab *flush_slab =3D c->slab; =20 - c->page =3D NULL; + c->slab =3D NULL; c->freelist =3D NULL; c->tid =3D next_tid(c->tid); =20 local_unlock_irqrestore(&s->cpu_slab->lock, flags); =20 - deactivate_slab(s, flush_page, flush_freelist); + deactivate_slab(s, flush_slab, flush_freelist); =20 stat(s, CPUSLAB_FLUSH); =20 goto retry_load_page; } - c->page =3D page; + c->slab =3D slab; =20 goto load_freelist; =20 return_single: =20 - deactivate_slab(s, page, get_freepointer(s, freelist)); + deactivate_slab(s, slab, get_freepointer(s, freelist)); return freelist; } =20 @@ -3138,7 +3138,7 @@ static __always_inline void *slab_alloc_node(struct= kmem_cache *s, { void *object; struct kmem_cache_cpu *c; - struct page *page; + struct slab *slab; unsigned long tid; struct obj_cgroup *objcg =3D NULL; bool init =3D false; @@ -3185,7 +3185,7 @@ static __always_inline void *slab_alloc_node(struct= kmem_cache *s, */ =20 object =3D c->freelist; - page =3D c->page; + slab =3D c->slab; /* * We cannot use the lockless fastpath on PREEMPT_RT because if a * slowpath has taken the local_lock_irqsave(), it is not protected @@ -3194,7 +3194,7 @@ static __always_inline void *slab_alloc_node(struct= kmem_cache *s, * there is a suitable cpu freelist. */ if (IS_ENABLED(CONFIG_PREEMPT_RT) || - unlikely(!object || !page || !node_match(page, node))) { + unlikely(!object || !slab || !node_match(slab, node))) { object =3D __slab_alloc(s, gfpflags, node, addr, c); } else { void *next_object =3D get_freepointer_safe(s, object); @@ -3299,14 +3299,14 @@ EXPORT_SYMBOL(kmem_cache_alloc_node_trace); * lock and free the item. If there is no additional partial page * handling required then we can return immediately. */ -static void __slab_free(struct kmem_cache *s, struct page *page, +static void __slab_free(struct kmem_cache *s, struct slab *slab, void *head, void *tail, int cnt, unsigned long addr) =20 { void *prior; int was_frozen; - struct page new; + struct slab new; unsigned long counters; struct kmem_cache_node *n =3D NULL; unsigned long flags; @@ -3317,7 +3317,7 @@ static void __slab_free(struct kmem_cache *s, struc= t page *page, return; =20 if (kmem_cache_debug(s) && - !free_debug_processing(s, page, head, tail, cnt, addr)) + !free_debug_processing(s, slab, head, tail, cnt, addr)) return; =20 do { @@ -3325,8 +3325,8 @@ static void __slab_free(struct kmem_cache *s, struc= t page *page, spin_unlock_irqrestore(&n->list_lock, flags); n =3D NULL; } - prior =3D page->freelist; - counters =3D page->counters; + prior =3D slab->freelist; + counters =3D slab->counters; set_freepointer(s, tail, prior); new.counters =3D counters; was_frozen =3D new.frozen; @@ -3345,7 +3345,7 @@ static void __slab_free(struct kmem_cache *s, struc= t page *page, =20 } else { /* Needs to be taken off a list */ =20 - n =3D get_node(s, page_to_nid(page)); + n =3D get_node(s, slab_nid(slab)); /* * Speculatively acquire the list_lock. 
* If the cmpxchg does not succeed then we may @@ -3359,7 +3359,7 @@ static void __slab_free(struct kmem_cache *s, struc= t page *page, } } =20 - } while (!cmpxchg_double_slab(s, page, + } while (!cmpxchg_double_slab(s, slab, prior, counters, head, new.counters, "__slab_free")); @@ -3377,7 +3377,7 @@ static void __slab_free(struct kmem_cache *s, struc= t page *page, * If we just froze the page then put it onto the * per cpu partial list. */ - put_cpu_partial(s, page, 1); + put_cpu_partial(s, slab, 1); stat(s, CPU_PARTIAL_FREE); } =20 @@ -3392,8 +3392,8 @@ static void __slab_free(struct kmem_cache *s, struc= t page *page, * then add it. */ if (!kmem_cache_has_cpu_partial(s) && unlikely(!prior)) { - remove_full(s, n, page); - add_partial(n, page, DEACTIVATE_TO_TAIL); + remove_full(s, n, slab); + add_partial(n, slab, DEACTIVATE_TO_TAIL); stat(s, FREE_ADD_PARTIAL); } spin_unlock_irqrestore(&n->list_lock, flags); @@ -3404,16 +3404,16 @@ static void __slab_free(struct kmem_cache *s, str= uct page *page, /* * Slab on the partial list. */ - remove_partial(n, page); + remove_partial(n, slab); stat(s, FREE_REMOVE_PARTIAL); } else { /* Slab must be on the full list */ - remove_full(s, n, page); + remove_full(s, n, slab); } =20 spin_unlock_irqrestore(&n->list_lock, flags); stat(s, FREE_SLAB); - discard_slab(s, page); + discard_slab(s, slab); } =20 /* @@ -3432,7 +3432,7 @@ static void __slab_free(struct kmem_cache *s, struc= t page *page, * count (cnt). Bulk free indicated by tail pointer being set. */ static __always_inline void do_slab_free(struct kmem_cache *s, - struct page *page, void *head, void *tail, + struct slab *slab, void *head, void *tail, int cnt, unsigned long addr) { void *tail_obj =3D tail ? : head; @@ -3455,7 +3455,7 @@ static __always_inline void do_slab_free(struct kme= m_cache *s, /* Same with comment on barrier() in slab_alloc_node() */ barrier(); =20 - if (likely(page =3D=3D c->page)) { + if (likely(slab =3D=3D c->slab)) { #ifndef CONFIG_PREEMPT_RT void **freelist =3D READ_ONCE(c->freelist); =20 @@ -3481,7 +3481,7 @@ static __always_inline void do_slab_free(struct kme= m_cache *s, =20 local_lock(&s->cpu_slab->lock); c =3D this_cpu_ptr(s->cpu_slab); - if (unlikely(page !=3D c->page)) { + if (unlikely(slab !=3D c->slab)) { local_unlock(&s->cpu_slab->lock); goto redo; } @@ -3496,11 +3496,11 @@ static __always_inline void do_slab_free(struct k= mem_cache *s, #endif stat(s, FREE_FASTPATH); } else - __slab_free(s, page, head, tail_obj, cnt, addr); + __slab_free(s, slab, head, tail_obj, cnt, addr); =20 } =20 -static __always_inline void slab_free(struct kmem_cache *s, struct page = *page, +static __always_inline void slab_free(struct kmem_cache *s, struct slab = *slab, void *head, void *tail, int cnt, unsigned long addr) { @@ -3509,13 +3509,13 @@ static __always_inline void slab_free(struct kmem= _cache *s, struct page *page, * to remove objects, whose reuse must be delayed. 
*/ if (slab_free_freelist_hook(s, &head, &tail, &cnt)) - do_slab_free(s, page, head, tail, cnt, addr); + do_slab_free(s, slab, head, tail, cnt, addr); } =20 #ifdef CONFIG_KASAN_GENERIC void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr= ) { - do_slab_free(cache, slab_page(virt_to_slab(x)), x, NULL, 1, addr); + do_slab_free(cache, virt_to_slab(x), x, NULL, 1, addr); } #endif =20 @@ -3524,7 +3524,7 @@ void kmem_cache_free(struct kmem_cache *s, void *x) s =3D cache_from_obj(s, x); if (!s) return; - slab_free(s, slab_page(virt_to_slab(x)), x, NULL, 1, _RET_IP_); + slab_free(s, virt_to_slab(x), x, NULL, 1, _RET_IP_); trace_kmem_cache_free(_RET_IP_, x, s->name); } EXPORT_SYMBOL(kmem_cache_free); @@ -3654,7 +3654,7 @@ void kmem_cache_free_bulk(struct kmem_cache *s, siz= e_t size, void **p) if (!df.slab) continue; =20 - slab_free(df.s, slab_page(df.slab), df.freelist, df.tail, df.cnt, _RET= _IP_); + slab_free(df.s, df.slab, df.freelist, df.tail, df.cnt, _RET_IP_); } while (likely(size)); } EXPORT_SYMBOL(kmem_cache_free_bulk); @@ -3924,38 +3924,38 @@ static struct kmem_cache *kmem_cache_node; */ static void early_kmem_cache_node_alloc(int node) { - struct page *page; + struct slab *slab; struct kmem_cache_node *n; =20 BUG_ON(kmem_cache_node->size < sizeof(struct kmem_cache_node)); =20 - page =3D new_slab(kmem_cache_node, GFP_NOWAIT, node); + slab =3D new_slab(kmem_cache_node, GFP_NOWAIT, node); =20 - BUG_ON(!page); - if (page_to_nid(page) !=3D node) { + BUG_ON(!slab); + if (slab_nid(slab) !=3D node) { pr_err("SLUB: Unable to allocate memory from node %d\n", node); pr_err("SLUB: Allocating a useless per node structure in order to be a= ble to continue\n"); } =20 - n =3D page->freelist; + n =3D slab->freelist; BUG_ON(!n); #ifdef CONFIG_SLUB_DEBUG init_object(kmem_cache_node, n, SLUB_RED_ACTIVE); init_tracking(kmem_cache_node, n); #endif n =3D kasan_slab_alloc(kmem_cache_node, n, GFP_KERNEL, false); - page->freelist =3D get_freepointer(kmem_cache_node, n); - page->inuse =3D 1; - page->frozen =3D 0; + slab->freelist =3D get_freepointer(kmem_cache_node, n); + slab->inuse =3D 1; + slab->frozen =3D 0; kmem_cache_node->node[node] =3D n; init_kmem_cache_node(n); - inc_slabs_node(kmem_cache_node, node, page->objects); + inc_slabs_node(kmem_cache_node, node, slab->objects); =20 /* * No locks need to be taken here as it has just been * initialized and there is no concurrent access. 
*/ - __add_partial(n, page, DEACTIVATE_TO_HEAD); + __add_partial(n, slab, DEACTIVATE_TO_HEAD); } =20 static void free_kmem_cache_nodes(struct kmem_cache *s) @@ -4241,20 +4241,20 @@ static int kmem_cache_open(struct kmem_cache *s, = slab_flags_t flags) return -EINVAL; } =20 -static void list_slab_objects(struct kmem_cache *s, struct page *page, +static void list_slab_objects(struct kmem_cache *s, struct slab *slab, const char *text) { #ifdef CONFIG_SLUB_DEBUG - void *addr =3D page_address(page); + void *addr =3D slab_address(slab); unsigned long flags; unsigned long *map; void *p; =20 - slab_err(s, page, text, s->name); - slab_lock(page, &flags); + slab_err(s, slab, text, s->name); + slab_lock(slab, &flags); =20 - map =3D get_map(s, page); - for_each_object(p, s, addr, page->objects) { + map =3D get_map(s, slab); + for_each_object(p, s, addr, slab->objects) { =20 if (!test_bit(__obj_to_index(s, addr, p), map)) { pr_err("Object 0x%p @offset=3D%tu\n", p, p - addr); @@ -4262,7 +4262,7 @@ static void list_slab_objects(struct kmem_cache *s,= struct page *page, } } put_map(map); - slab_unlock(page, &flags); + slab_unlock(slab, &flags); #endif } =20 @@ -4274,23 +4274,23 @@ static void list_slab_objects(struct kmem_cache *= s, struct page *page, static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n= ) { LIST_HEAD(discard); - struct page *page, *h; + struct slab *slab, *h; =20 BUG_ON(irqs_disabled()); spin_lock_irq(&n->list_lock); - list_for_each_entry_safe(page, h, &n->partial, slab_list) { - if (!page->inuse) { - remove_partial(n, page); - list_add(&page->slab_list, &discard); + list_for_each_entry_safe(slab, h, &n->partial, slab_list) { + if (!slab->inuse) { + remove_partial(n, slab); + list_add(&slab->slab_list, &discard); } else { - list_slab_objects(s, page, + list_slab_objects(s, slab, "Objects remaining in %s on __kmem_cache_shutdown()"); } } spin_unlock_irq(&n->list_lock); =20 - list_for_each_entry_safe(page, h, &discard, slab_list) - discard_slab(s, page); + list_for_each_entry_safe(slab, h, &discard, slab_list) + discard_slab(s, slab); } =20 bool __kmem_cache_empty(struct kmem_cache *s) @@ -4559,7 +4559,7 @@ void kfree(const void *x) return; } slab =3D folio_slab(folio); - slab_free(slab->slab_cache, slab_page(slab), object, NULL, 1, _RET_IP_)= ; + slab_free(slab->slab_cache, slab, object, NULL, 1, _RET_IP_); } EXPORT_SYMBOL(kfree); =20 @@ -4579,8 +4579,8 @@ static int __kmem_cache_do_shrink(struct kmem_cache= *s) int node; int i; struct kmem_cache_node *n; - struct page *page; - struct page *t; + struct slab *slab; + struct slab *t; struct list_head discard; struct list_head promote[SHRINK_PROMOTE_MAX]; unsigned long flags; @@ -4599,8 +4599,8 @@ static int __kmem_cache_do_shrink(struct kmem_cache= *s) * Note that concurrent frees may occur while we hold the * list_lock. page->inuse here is the upper limit. 
*/ - list_for_each_entry_safe(page, t, &n->partial, slab_list) { - int free =3D page->objects - page->inuse; + list_for_each_entry_safe(slab, t, &n->partial, slab_list) { + int free =3D slab->objects - slab->inuse; =20 /* Do not reread page->inuse */ barrier(); @@ -4608,11 +4608,11 @@ static int __kmem_cache_do_shrink(struct kmem_cac= he *s) /* We do not keep full slabs on the list */ BUG_ON(free <=3D 0); =20 - if (free =3D=3D page->objects) { - list_move(&page->slab_list, &discard); + if (free =3D=3D slab->objects) { + list_move(&slab->slab_list, &discard); n->nr_partial--; } else if (free <=3D SHRINK_PROMOTE_MAX) - list_move(&page->slab_list, promote + free - 1); + list_move(&slab->slab_list, promote + free - 1); } =20 /* @@ -4625,8 +4625,8 @@ static int __kmem_cache_do_shrink(struct kmem_cache= *s) spin_unlock_irqrestore(&n->list_lock, flags); =20 /* Release empty slabs */ - list_for_each_entry_safe(page, t, &discard, slab_list) - discard_slab(s, page); + list_for_each_entry_safe(slab, t, &discard, slab_list) + discard_slab(s, slab); =20 if (slabs_node(s, node)) ret =3D 1; @@ -4787,7 +4787,7 @@ static struct kmem_cache * __init bootstrap(struct = kmem_cache *static_cache) */ __flush_cpu_slab(s, smp_processor_id()); for_each_kmem_cache_node(s, node, n) { - struct page *p; + struct slab *p; =20 list_for_each_entry(p, &n->partial, slab_list) p->slab_cache =3D s; @@ -4965,54 +4965,54 @@ EXPORT_SYMBOL(__kmalloc_node_track_caller); #endif =20 #ifdef CONFIG_SYSFS -static int count_inuse(struct page *page) +static int count_inuse(struct slab *slab) { - return page->inuse; + return slab->inuse; } =20 -static int count_total(struct page *page) +static int count_total(struct slab *slab) { - return page->objects; + return slab->objects; } #endif =20 #ifdef CONFIG_SLUB_DEBUG -static void validate_slab(struct kmem_cache *s, struct page *page, +static void validate_slab(struct kmem_cache *s, struct slab *slab, unsigned long *obj_map) { void *p; - void *addr =3D page_address(page); + void *addr =3D slab_address(slab); unsigned long flags; =20 - slab_lock(page, &flags); + slab_lock(slab, &flags); =20 - if (!check_slab(s, page) || !on_freelist(s, page, NULL)) + if (!check_slab(s, slab) || !on_freelist(s, slab, NULL)) goto unlock; =20 /* Now we know that a valid freelist exists */ - __fill_map(obj_map, s, page); - for_each_object(p, s, addr, page->objects) { + __fill_map(obj_map, s, slab); + for_each_object(p, s, addr, slab->objects) { u8 val =3D test_bit(__obj_to_index(s, addr, p), obj_map) ? 
SLUB_RED_INACTIVE : SLUB_RED_ACTIVE; =20 - if (!check_object(s, page, p, val)) + if (!check_object(s, slab, p, val)) break; } unlock: - slab_unlock(page, &flags); + slab_unlock(slab, &flags); } =20 static int validate_slab_node(struct kmem_cache *s, struct kmem_cache_node *n, unsigned long *obj_map) { unsigned long count =3D 0; - struct page *page; + struct slab *slab; unsigned long flags; =20 spin_lock_irqsave(&n->list_lock, flags); =20 - list_for_each_entry(page, &n->partial, slab_list) { - validate_slab(s, page, obj_map); + list_for_each_entry(slab, &n->partial, slab_list) { + validate_slab(s, slab, obj_map); count++; } if (count !=3D n->nr_partial) { @@ -5024,8 +5024,8 @@ static int validate_slab_node(struct kmem_cache *s, if (!(s->flags & SLAB_STORE_USER)) goto out; =20 - list_for_each_entry(page, &n->full, slab_list) { - validate_slab(s, page, obj_map); + list_for_each_entry(slab, &n->full, slab_list) { + validate_slab(s, slab, obj_map); count++; } if (count !=3D atomic_long_read(&n->nr_slabs)) { @@ -5190,15 +5190,15 @@ static int add_location(struct loc_track *t, stru= ct kmem_cache *s, } =20 static void process_slab(struct loc_track *t, struct kmem_cache *s, - struct page *page, enum track_item alloc, + struct slab *slab, enum track_item alloc, unsigned long *obj_map) { - void *addr =3D page_address(page); + void *addr =3D slab_address(slab); void *p; =20 - __fill_map(obj_map, s, page); + __fill_map(obj_map, s, slab); =20 - for_each_object(p, s, addr, page->objects) + for_each_object(p, s, addr, slab->objects) if (!test_bit(__obj_to_index(s, addr, p), obj_map)) add_location(t, s, get_track(s, p, alloc)); } @@ -5240,32 +5240,32 @@ static ssize_t show_slab_objects(struct kmem_cach= e *s, struct kmem_cache_cpu *c =3D per_cpu_ptr(s->cpu_slab, cpu); int node; - struct page *page; + struct slab *slab; =20 - page =3D READ_ONCE(c->page); - if (!page) + slab =3D READ_ONCE(c->slab); + if (!slab) continue; =20 - node =3D page_to_nid(page); + node =3D slab_nid(slab); if (flags & SO_TOTAL) - x =3D page->objects; + x =3D slab->objects; else if (flags & SO_OBJECTS) - x =3D page->inuse; + x =3D slab->inuse; else x =3D 1; =20 total +=3D x; nodes[node] +=3D x; =20 - page =3D slub_percpu_partial_read_once(c); - if (page) { - node =3D page_to_nid(page); + slab =3D slub_percpu_partial_read_once(c); + if (slab) { + node =3D slab_nid(slab); if (flags & SO_TOTAL) WARN_ON_ONCE(1); else if (flags & SO_OBJECTS) WARN_ON_ONCE(1); else - x =3D page->pages; + x =3D slab->slabs; total +=3D x; nodes[node] +=3D x; } @@ -5467,33 +5467,33 @@ SLAB_ATTR_RO(objects_partial); static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf) { int objects =3D 0; - int pages =3D 0; + int slabs =3D 0; int cpu; int len =3D 0; =20 for_each_online_cpu(cpu) { - struct page *page; + struct slab *slab; =20 - page =3D slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu)); + slab =3D slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu)); =20 - if (page) - pages +=3D page->pages; + if (slab) + slabs +=3D slab->slabs; } =20 /* Approximate half-full pages , see slub_set_cpu_partial() */ - objects =3D (pages * oo_objects(s->oo)) / 2; - len +=3D sysfs_emit_at(buf, len, "%d(%d)", objects, pages); + objects =3D (slabs * oo_objects(s->oo)) / 2; + len +=3D sysfs_emit_at(buf, len, "%d(%d)", objects, slabs); =20 #ifdef CONFIG_SMP for_each_online_cpu(cpu) { - struct page *page; + struct slab *slab; =20 - page =3D slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu)); - if (page) { - pages =3D READ_ONCE(page->pages); - objects =3D (pages * 
oo_objects(s->oo)) / 2;
+		slab =3D slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu));
+		if (slab) {
+			slabs =3D READ_ONCE(slab->slabs);
+			objects =3D (slabs * oo_objects(s->oo)) / 2;
 			len +=3D sysfs_emit_at(buf, len, " C%d=3D%d(%d)",
-					     cpu, objects, pages);
+					     cpu, objects, slabs);
 		}
 	}
 #endif
@@ -6159,16 +6159,16 @@ static int slab_debug_trace_open(struct inode *in=
ode, struct file *filep)
=20
 	for_each_kmem_cache_node(s, node, n) {
 		unsigned long flags;
-		struct page *page;
+		struct slab *slab;
=20
 		if (!atomic_long_read(&n->nr_slabs))
 			continue;
=20
 		spin_lock_irqsave(&n->list_lock, flags);
-		list_for_each_entry(page, &n->partial, slab_list)
-			process_slab(t, s, page, alloc, obj_map);
-		list_for_each_entry(page, &n->full, slab_list)
-			process_slab(t, s, page, alloc, obj_map);
+		list_for_each_entry(slab, &n->partial, slab_list)
+			process_slab(t, s, slab, alloc, obj_map);
+		list_for_each_entry(slab, &n->full, slab_list)
+			process_slab(t, s, slab, alloc, obj_map);
 		spin_unlock_irqrestore(&n->list_lock, flags);
 	}
=20
--=20
2.33.1
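Throughout the patch the conversion is mechanical: struct page becomes struct slab, page_to_nid()/page_address() become slab_nid()/slab_address(), and the pages/cpu_partial_pages counts become slabs/cpu_partial_slabs. The point of the dedicated type, introduced earlier in the series, is that slab-internal fields (freelist, counters, inuse, objects, frozen) are only reachable through struct slab rather than through a bare struct page. A minimal standalone sketch of that "dedicated type plus accessors" idea, with invented demo_* names rather than the kernel's real layout:

/* Illustrative only: a wrapper type with accessors, loosely mirroring
 * slab_nid()/count_free() from the diff above. Types and fields here
 * are made up for the sketch.
 */
#include <stdio.h>

struct demo_page {			/* stand-in for the generic page */
	int nid;			/* NUMA node the memory came from */
};

struct demo_slab {			/* slab-allocator view of that page */
	struct demo_page page;
	void *freelist;			/* first free object, or NULL */
	unsigned int inuse, objects;	/* allocated / total objects */
};

/* Callers never touch the page fields directly for slab questions. */
static inline int demo_slab_nid(const struct demo_slab *slab)
{
	return slab->page.nid;
}

static inline unsigned int demo_slab_free_objects(const struct demo_slab *slab)
{
	return slab->objects - slab->inuse;	/* cf. count_free() above */
}

int main(void)
{
	struct demo_slab slab = {
		.page = { .nid = 0 },
		.inuse = 5,
		.objects = 32,
	};

	printf("node %d, %u objects free\n",
	       demo_slab_nid(&slab), demo_slab_free_objects(&slab));
	return 0;
}

In the kernel the new type is an overlay of struct page rather than a containing struct, but the effect on callers is the same: helpers such as slab_nid() and slab_address() are the only sanctioned way to reach the underlying page information.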