From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 39/62] mm/slub: Convert check_object() to struct slab
Date: Mon, 4 Oct 2021 14:46:27 +0100
Message-Id: <20211004134650.4031813-40-willy@infradead.org>
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>

Also convert check_bytes_and_report() and check_pad_bytes(). This is
almost exclusively pushing slab_page() calls down.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/slub.c | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index eb4286886c3e..fd11ca47bce8 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -904,13 +904,13 @@ static void restore_bytes(struct kmem_cache *s, char *message, u8 data,
 	memset(from, data, to - from);
 }
 
-static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
+static int check_bytes_and_report(struct kmem_cache *s, struct slab *slab,
 		u8 *object, char *what,
 		u8 *start, unsigned int value, unsigned int bytes)
 {
 	u8 *fault;
 	u8 *end;
-	u8 *addr = page_address(page);
+	u8 *addr = slab_address(slab);
 
 	metadata_access_enable();
 	fault = memchr_inv(kasan_reset_tag(start), value, bytes);
@@ -929,7 +929,7 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
 	pr_err("0x%p-0x%p @offset=%tu. First byte 0x%x instead of 0x%x\n",
 					fault, end - 1, fault - addr, fault[0], value);
 
-	print_trailer(s, page, object);
+	print_trailer(s, slab_page(slab), object);
 	add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
 
 skip_bug_print:
@@ -975,7 +975,7 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
  * may be used with merged slabcaches.
  */
 
-static int check_pad_bytes(struct kmem_cache *s, struct page *page, u8 *p)
+static int check_pad_bytes(struct kmem_cache *s, struct slab *slab, u8 *p)
 {
 	unsigned long off = get_info_end(s);	/* The end of info */
 
@@ -988,7 +988,7 @@ static int check_pad_bytes(struct kmem_cache *s, struct page *page, u8 *p)
 	if (size_from_object(s) == off)
 		return 1;
 
-	return check_bytes_and_report(s, page, p, "Object padding",
+	return check_bytes_and_report(s, slab, p, "Object padding",
 			p + off, POISON_INUSE, size_from_object(s) - off);
 }
 
@@ -1029,23 +1029,23 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page)
 	return 0;
 }
 
-static int check_object(struct kmem_cache *s, struct page *page,
+static int check_object(struct kmem_cache *s, struct slab *slab,
 					void *object, u8 val)
 {
 	u8 *p = object;
 	u8 *endobject = object + s->object_size;
 
 	if (s->flags & SLAB_RED_ZONE) {
-		if (!check_bytes_and_report(s, page, object, "Left Redzone",
+		if (!check_bytes_and_report(s, slab, object, "Left Redzone",
 			object - s->red_left_pad, val, s->red_left_pad))
 			return 0;
 
-		if (!check_bytes_and_report(s, page, object, "Right Redzone",
+		if (!check_bytes_and_report(s, slab, object, "Right Redzone",
 			endobject, val, s->inuse - s->object_size))
 			return 0;
 	} else {
 		if ((s->flags & SLAB_POISON) && s->object_size < s->inuse) {
-			check_bytes_and_report(s, page, p, "Alignment padding",
+			check_bytes_and_report(s, slab, p, "Alignment padding",
 				endobject, POISON_INUSE,
 				s->inuse - s->object_size);
 		}
@@ -1053,15 +1053,15 @@ static int check_object(struct kmem_cache *s, struct page *page,
 
 	if (s->flags & SLAB_POISON) {
 		if (val != SLUB_RED_ACTIVE && (s->flags & __OBJECT_POISON) &&
-			(!check_bytes_and_report(s, page, p, "Poison", p,
+			(!check_bytes_and_report(s, slab, p, "Poison", p,
 					POISON_FREE, s->object_size - 1) ||
-			 !check_bytes_and_report(s, page, p, "End Poison",
+			 !check_bytes_and_report(s, slab, p, "End Poison",
 				p + s->object_size - 1, POISON_END, 1)))
 			return 0;
 		/*
 		 * check_pad_bytes cleans up on its own.
 		 */
-		check_pad_bytes(s, page, p);
+		check_pad_bytes(s, slab, p);
 	}
 
 	if (!freeptr_outside_object(s) && val == SLUB_RED_ACTIVE)
@@ -1072,8 +1072,8 @@ static int check_object(struct kmem_cache *s, struct page *page,
 		return 1;
 
 	/* Check free pointer validity */
-	if (!check_valid_pointer(s, page, get_freepointer(s, p))) {
-		object_err(s, page, p, "Freepointer corrupt");
+	if (!check_valid_pointer(s, slab_page(slab), get_freepointer(s, p))) {
+		object_err(s, slab_page(slab), p, "Freepointer corrupt");
 		/*
 		 * No choice but to zap it and thus lose the remainder
 		 * of the free objects in this slab. May cause
@@ -1271,7 +1271,7 @@ static inline int alloc_consistency_checks(struct kmem_cache *s,
 		return 0;
 	}
 
-	if (!check_object(s, slab_page(slab), object, SLUB_RED_INACTIVE))
+	if (!check_object(s, slab, object, SLUB_RED_INACTIVE))
 		return 0;
 
 	return 1;
@@ -1320,7 +1320,7 @@ static inline int free_consistency_checks(struct kmem_cache *s,
 		return 0;
 	}
 
-	if (!check_object(s, slab_page(slab), object, SLUB_RED_ACTIVE))
+	if (!check_object(s, slab, object, SLUB_RED_ACTIVE))
 		return 0;
 
 	if (unlikely(s != slab->slab_cache)) {
@@ -1613,7 +1613,7 @@ static inline int free_debug_processing(
 
 static inline int slab_pad_check(struct kmem_cache *s, struct page *page)
 			{ return 1; }
-static inline int check_object(struct kmem_cache *s, struct page *page,
+static inline int check_object(struct kmem_cache *s, struct slab *slab,
 			void *object, u8 val) { return 1; }
 static inline void add_full(struct kmem_cache *s, struct kmem_cache_node *n,
 					struct slab *slab) {}
@@ -1971,7 +1971,7 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab)
 
 		slab_pad_check(s, slab_page(slab));
 		for_each_object(p, s, slab_address(slab), slab->objects)
-			check_object(s, slab_page(slab), p, SLUB_RED_INACTIVE);
+			check_object(s, slab, p, SLUB_RED_INACTIVE);
 	}
 
 	__slab_clear_pfmemalloc(slab);
@@ -4968,7 +4968,7 @@ static void validate_slab(struct kmem_cache *s, struct slab *slab,
 		u8 val = test_bit(__obj_to_index(s, addr, p), obj_map) ?
 			 SLUB_RED_INACTIVE : SLUB_RED_ACTIVE;
 
-		if (!check_object(s, slab_page(slab), p, val))
+		if (!check_object(s, slab, p, val))
 			break;
 	}
 unlock:
-- 
2.32.0