From: Vlastimil Babka <vbabka@suse.cz>
To: Matthew Wilcox,
    linux-mm@kvack.org, Christoph Lameter, David Rientjes, Joonsoo Kim,
    Pekka Enberg
Cc: Vlastimil Babka, Julia Lawall, Luis Chamberlain, Andrey Ryabinin,
    Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Marco Elver,
    Johannes Weiner, Michal Hocko, Vladimir Davydov,
    kasan-dev@googlegroups.com, cgroups@vger.kernel.org
Subject: [RFC PATCH 21/32] mm: Convert struct page to struct slab in functions used by other subsystems
Date: Tue, 16 Nov 2021 01:16:17 +0100
Message-Id: <20211116001628.24216-22-vbabka@suse.cz>
X-Mailer: git-send-email 2.33.1
In-Reply-To: <20211116001628.24216-1-vbabka@suse.cz>
References: <20211116001628.24216-1-vbabka@suse.cz>
MIME-Version: 1.0

KASAN, KFENCE and memcg interact with SLAB or SLUB internals through the
functions nearest_obj(), obj_to_index() and objs_per_slab(), which take a
struct page parameter. This patch converts them to struct slab, including
all callers, through a coccinelle semantic patch.

// Options: --include-headers --no-includes --smpl-spacing include/linux/slab_def.h include/linux/slub_def.h mm/slab.h mm/kasan/*.c mm/kfence/kfence_test.c mm/memcontrol.c mm/slab.c mm/slub.c
// Note: needs coccinelle 1.1.1 to avoid breaking whitespace

@@
@@

-objs_per_slab_page(
+objs_per_slab(
 ...
 )
 { ... }

@@
@@

-objs_per_slab_page(
+objs_per_slab(
 ...
 )

@@
identifier fn =~ "obj_to_index|objs_per_slab";
@@

 fn(...,
-   const struct page *page
+   const struct slab *slab
 ,...)
 {
 <...
(
- page_address(page)
+ slab_address(slab)
|
- page
+ slab
)
 ...>
 }

@@
identifier fn =~ "nearest_obj";
@@

 fn(...,
-   struct page *page
+   const struct slab *slab
 ,...)
 {
 <...
(
- page_address(page)
+ slab_address(slab)
|
- page
+ slab
)
 ...>
 }

@@
identifier fn =~ "nearest_obj|obj_to_index|objs_per_slab";
expression E;
@@

 fn(...,
(
- slab_page(E)
+ E
|
- virt_to_page(E)
+ virt_to_slab(E)
|
- virt_to_head_page(E)
+ virt_to_slab(E)
|
- page
+ page_slab(page)
)
 ,...)
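For quick reference while reviewing, the converted helpers end up with the
following parameters (summarized from the slub_def.h hunks below; the
slab_def.h variants take the same arguments and differ only in their bodies):

	/* static inline helpers in include/linux/slub_def.h and slab_def.h */
	void *nearest_obj(struct kmem_cache *cache, const struct slab *slab, void *x);
	unsigned int obj_to_index(const struct kmem_cache *cache, const struct slab *slab, void *obj);
	int objs_per_slab(const struct kmem_cache *cache, const struct slab *slab);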
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Julia Lawall
Cc: Luis Chamberlain
Cc: Andrey Ryabinin
Cc: Alexander Potapenko
Cc: Andrey Konovalov
Cc: Dmitry Vyukov
Cc: Marco Elver
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Vladimir Davydov
Cc: <kasan-dev@googlegroups.com>
Cc: <cgroups@vger.kernel.org>
---
 include/linux/slab_def.h | 16 ++++++++--------
 include/linux/slub_def.h | 18 +++++++++---------
 mm/kasan/common.c        |  4 ++--
 mm/kasan/generic.c       |  2 +-
 mm/kasan/report.c        |  2 +-
 mm/kasan/report_tags.c   |  2 +-
 mm/kfence/kfence_test.c  |  4 ++--
 mm/memcontrol.c          |  4 ++--
 mm/slab.c                | 10 +++++-----
 mm/slab.h                |  4 ++--
 mm/slub.c                |  2 +-
 11 files changed, 34 insertions(+), 34 deletions(-)

diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index 3aa5e1e73ab6..e24c9aff6fed 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -87,11 +87,11 @@ struct kmem_cache {
 	struct kmem_cache_node *node[MAX_NUMNODES];
 };
 
-static inline void *nearest_obj(struct kmem_cache *cache, struct page *page,
+static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
 				 void *x)
 {
-	void *object = x - (x - page->s_mem) % cache->size;
-	void *last_object = page->s_mem + (cache->num - 1) * cache->size;
+	void *object = x - (x - slab->s_mem) % cache->size;
+	void *last_object = slab->s_mem + (cache->num - 1) * cache->size;
 
 	if (unlikely(object > last_object))
 		return last_object;
@@ -106,16 +106,16 @@ static inline void *nearest_obj(struct kmem_cache *cache, struct page *page,
  *   reciprocal_divide(offset, cache->reciprocal_buffer_size)
  */
 static inline unsigned int obj_to_index(const struct kmem_cache *cache,
-					const struct page *page, void *obj)
+					const struct slab *slab, void *obj)
 {
-	u32 offset = (obj - page->s_mem);
+	u32 offset = (obj - slab->s_mem);
 	return reciprocal_divide(offset, cache->reciprocal_buffer_size);
 }
 
-static inline int objs_per_slab_page(const struct kmem_cache *cache,
-				     const struct page *page)
+static inline int objs_per_slab(const struct kmem_cache *cache,
+				const struct slab *slab)
 {
-	if (is_kfence_address(page_address(page)))
+	if (is_kfence_address(slab_address(slab)))
 		return 1;
 	return cache->num;
 }
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 8a9c2876ca89..33c5c0e3bd8d 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -158,11 +158,11 @@ static inline void sysfs_slab_release(struct kmem_cache *s)
 
 void *fixup_red_left(struct kmem_cache *s, void *p);
 
-static inline void *nearest_obj(struct kmem_cache *cache, struct page *page,
+static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
 				void *x) {
-	void *object = x - (x - page_address(page)) % cache->size;
-	void *last_object = page_address(page) +
-		(page->objects - 1) * cache->size;
+	void *object = x - (x - slab_address(slab)) % cache->size;
+	void *last_object = slab_address(slab) +
+		(slab->objects - 1) * cache->size;
 	void *result = (unlikely(object > last_object)) ?
 		last_object : object;
 
 	result = fixup_red_left(cache, result);
@@ -178,16 +178,16 @@ static inline unsigned int __obj_to_index(const struct kmem_cache *cache,
 }
 
 static inline unsigned int obj_to_index(const struct kmem_cache *cache,
-					const struct page *page, void *obj)
+					const struct slab *slab, void *obj)
 {
 	if (is_kfence_address(obj))
 		return 0;
-	return __obj_to_index(cache, page_address(page), obj);
+	return __obj_to_index(cache, slab_address(slab), obj);
 }
 
-static inline int objs_per_slab_page(const struct kmem_cache *cache,
-				     const struct page *page)
+static inline int objs_per_slab(const struct kmem_cache *cache,
+				const struct slab *slab)
 {
-	return page->objects;
+	return slab->objects;
 }
 #endif /* _LINUX_SLUB_DEF_H */
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 8428da2aaf17..6a1cd2d38bff 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -298,7 +298,7 @@ static inline u8 assign_tag(struct kmem_cache *cache,
 	/* For caches that either have a constructor or SLAB_TYPESAFE_BY_RCU: */
 #ifdef CONFIG_SLAB
 	/* For SLAB assign tags based on the object index in the freelist. */
-	return (u8)obj_to_index(cache, virt_to_head_page(object), (void *)object);
+	return (u8)obj_to_index(cache, virt_to_slab(object), (void *)object);
 #else
 	/*
 	 * For SLUB assign a random tag during slab creation, otherwise reuse
@@ -341,7 +341,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
 	if (is_kfence_address(object))
 		return false;
 
-	if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) !=
+	if (unlikely(nearest_obj(cache, virt_to_slab(object), object) !=
 			object)) {
 		kasan_report_invalid_free(tagged_object, ip);
 		return true;
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 84a038b07c6f..5d0b79416c4e 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -339,7 +339,7 @@ static void __kasan_record_aux_stack(void *addr, bool can_alloc)
 		return;
 
 	cache = page->slab_cache;
-	object = nearest_obj(cache, page, addr);
+	object = nearest_obj(cache, page_slab(page), addr);
 	alloc_meta = kasan_get_alloc_meta(cache, object);
 	if (!alloc_meta)
 		return;
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 0bc10f452f7e..e00999dc6499 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -249,7 +249,7 @@ static void print_address_description(void *addr, u8 tag)
 
 	if (page && PageSlab(page)) {
 		struct kmem_cache *cache = page->slab_cache;
-		void *object = nearest_obj(cache, page, addr);
+		void *object = nearest_obj(cache, page_slab(page), addr);
 
 		describe_object(cache, object, addr, tag);
 	}
diff --git a/mm/kasan/report_tags.c b/mm/kasan/report_tags.c
index 8a319fc16dab..06c21dd77493 100644
--- a/mm/kasan/report_tags.c
+++ b/mm/kasan/report_tags.c
@@ -23,7 +23,7 @@ const char *kasan_get_bug_type(struct kasan_access_info *info)
 	page = kasan_addr_to_page(addr);
 	if (page && PageSlab(page)) {
 		cache = page->slab_cache;
-		object = nearest_obj(cache, page, (void *)addr);
+		object = nearest_obj(cache, page_slab(page), (void *)addr);
 		alloc_meta = kasan_get_alloc_meta(cache, object);
 
 		if (alloc_meta) {
diff --git a/mm/kfence/kfence_test.c b/mm/kfence/kfence_test.c
index 695030c1fff8..f7276711d7b9 100644
--- a/mm/kfence/kfence_test.c
+++ b/mm/kfence/kfence_test.c
@@ -291,8 +291,8 @@ static void *test_alloc(struct kunit *test, size_t size, gfp_t gfp, enum allocat
 		 * even for KFENCE objects; these are required so that
 		 * memcg accounting works correctly.
 		 */
-		KUNIT_EXPECT_EQ(test, obj_to_index(s, page, alloc), 0U);
-		KUNIT_EXPECT_EQ(test, objs_per_slab_page(s, page), 1);
+		KUNIT_EXPECT_EQ(test, obj_to_index(s, page_slab(page), alloc), 0U);
+		KUNIT_EXPECT_EQ(test, objs_per_slab(s, page_slab(page)), 1);
 
 		if (policy == ALLOCATE_ANY)
 			return alloc;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 781605e92015..c8b53ec074b4 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2819,7 +2819,7 @@ static struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
 int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s,
 				 gfp_t gfp, bool new_page)
 {
-	unsigned int objects = objs_per_slab_page(s, page);
+	unsigned int objects = objs_per_slab(s, page_slab(page));
 	unsigned long memcg_data;
 	void *vec;
 
@@ -2881,7 +2881,7 @@ struct mem_cgroup *mem_cgroup_from_obj(void *p)
 		struct obj_cgroup *objcg;
 		unsigned int off;
 
-		off = obj_to_index(page->slab_cache, page, p);
+		off = obj_to_index(page->slab_cache, page_slab(page), p);
 		objcg = page_objcgs(page)[off];
 		if (objcg)
 			return obj_cgroup_memcg(objcg);
diff --git a/mm/slab.c b/mm/slab.c
index 78ef4d94e3de..adf688d2da64 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1560,7 +1560,7 @@ static void check_poison_obj(struct kmem_cache *cachep, void *objp)
 		struct slab *slab = virt_to_slab(objp);
 		unsigned int objnr;
 
-		objnr = obj_to_index(cachep, slab_page(slab), objp);
+		objnr = obj_to_index(cachep, slab, objp);
 		if (objnr) {
 			objp = index_to_obj(cachep, slab, objnr - 1);
 			realobj = (char *)objp + obj_offset(cachep);
@@ -2530,7 +2530,7 @@ static void *slab_get_obj(struct kmem_cache *cachep, struct slab *slab)
 static void slab_put_obj(struct kmem_cache *cachep,
 			struct slab *slab, void *objp)
 {
-	unsigned int objnr = obj_to_index(cachep, slab_page(slab), objp);
+	unsigned int objnr = obj_to_index(cachep, slab, objp);
 #if DEBUG
 	unsigned int i;
 
@@ -2717,7 +2717,7 @@ static void *cache_free_debugcheck(struct kmem_cache *cachep, void *objp,
 	if (cachep->flags & SLAB_STORE_USER)
 		*dbg_userword(cachep, objp) = (void *)caller;
 
-	objnr = obj_to_index(cachep, slab_page(slab), objp);
+	objnr = obj_to_index(cachep, slab, objp);
 
 	BUG_ON(objnr >= cachep->num);
 	BUG_ON(objp != index_to_obj(cachep, slab, objnr));
@@ -3663,7 +3663,7 @@ void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
 	objp = object - obj_offset(cachep);
 	kpp->kp_data_offset = obj_offset(cachep);
 	slab = virt_to_slab(objp);
-	objnr = obj_to_index(cachep, slab_page(slab), objp);
+	objnr = obj_to_index(cachep, slab, objp);
 	objp = index_to_obj(cachep, slab, objnr);
 	kpp->kp_objp = objp;
 	if (DEBUG && cachep->flags & SLAB_STORE_USER)
@@ -4182,7 +4182,7 @@ void __check_heap_object(const void *ptr, unsigned long n,
 
 	/* Find and validate object. */
 	cachep = slab->slab_cache;
-	objnr = obj_to_index(cachep, slab_page(slab), (void *)ptr);
+	objnr = obj_to_index(cachep, slab, (void *)ptr);
 	BUG_ON(objnr >= cachep->num);
 
 	/* Find offset within object. */
diff --git a/mm/slab.h b/mm/slab.h
index d6c993894c02..b07e842b5cfc 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -483,7 +483,7 @@ static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
 			continue;
 		}
 
-		off = obj_to_index(s, page, p[i]);
+		off = obj_to_index(s, page_slab(page), p[i]);
 		obj_cgroup_get(objcg);
 		page_objcgs(page)[off] = objcg;
 		mod_objcg_state(objcg, page_pgdat(page),
@@ -522,7 +522,7 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s_orig,
 		else
 			s = s_orig;
 
-		off = obj_to_index(s, page, p[i]);
+		off = obj_to_index(s, page_slab(page), p[i]);
 		objcg = objcgs[off];
 		if (!objcg)
 			continue;
diff --git a/mm/slub.c b/mm/slub.c
index 7759f3dde64b..981e40a88bab 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4342,7 +4342,7 @@ void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
 #else
 	objp = objp0;
 #endif
-	objnr = obj_to_index(s, slab_page(slab), objp);
+	objnr = obj_to_index(s, slab, objp);
 	kpp->kp_data_offset = (unsigned long)((char *)objp0 - (char *)objp);
 	objp = base + s->size * objnr;
 	kpp->kp_objp = objp;
-- 
2.33.1