From mboxrd@z Thu Jan  1 00:00:00 1970
From: Feng Tang <feng.tang@intel.com>
To: Andrew Morton, Vlastimil Babka, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Roman Gushchin,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Dave Hansen, Robin Murphy, John Garry, Kefeng Wang,
	Feng Tang <feng.tang@intel.com>
Subject: [PATCH v3 3/3] mm/slub: extend redzone check to cover extra allocated kmalloc space than requested
Date: Wed, 27 Jul 2022 15:10:42 +0800
Message-Id: <20220727071042.8796-4-feng.tang@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20220727071042.8796-1-feng.tang@intel.com>
References: <20220727071042.8796-1-feng.tang@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
kmalloc will round up the requested size to a fixed size (mostly a
power of 2), so there can be extra space beyond what was requested,
whose size is the actual buffer size minus the original request size.
To better detect out-of-bounds accesses to, or abuse of, this space,
add a redzone sanity check for it.

In the current kernel, some kmalloc users already know of the
existence of this space and utilize it after calling ksize() to learn
the real size of the allocated buffer. So skip the sanity check for
objects on which ksize() has been called, treating them as legitimate
users.

Suggested-by: Vlastimil Babka
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
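Not part of the patch itself, but to make the new check concrete:
below is a minimal, hypothetical test-module sketch of the kind of
out-of-bounds write it is meant to catch. The module name and the
sizes are invented for the example, and it assumes a CONFIG_SLUB_DEBUG
kernel with redzoning enabled for the kmalloc caches (e.g. booted with
something like slub_debug=ZU, so that the orig_size tracking added
earlier in this series is active):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/slab.h>

static int __init kmalloc_redzone_demo_init(void)
{
	/* 42 bytes requested; served from the kmalloc-64 cache */
	u8 *buf = kmalloc(42, GFP_KERNEL);

	if (!buf)
		return -ENOMEM;

	/*
	 * Bytes [42, 64) were never requested. With this patch,
	 * init_object() fills them with the SLUB_RED_ACTIVE pattern
	 * at allocation time, so a stray write becomes detectable.
	 */
	buf[48] = 0xaa;

	/* check_object() should now report "kmalloc Redzone" */
	kfree(buf);
	return 0;
}

static void __exit kmalloc_redzone_demo_exit(void)
{
}

module_init(kmalloc_redzone_demo_init);
module_exit(kmalloc_redzone_demo_exit);
MODULE_LICENSE("GPL");

Without this patch the extra [42, 64) area is neither poisoned nor
redzoned, so the stray write above would go unnoticed.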
 mm/slub.c | 52 +++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 49 insertions(+), 3 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 946919066a4b..added2653bb0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -836,6 +836,11 @@ static inline void set_orig_size(struct kmem_cache *s,
 	*(unsigned int *)p = orig_size;
 }
 
+static inline void skip_orig_size_check(struct kmem_cache *s, const void *object)
+{
+	set_orig_size(s, (void *)object, s->object_size);
+}
+
 static unsigned int get_orig_size(struct kmem_cache *s, void *object)
 {
 	void *p = kasan_reset_tag(object);
@@ -967,13 +972,35 @@ static __printf(3, 4) void slab_err(struct kmem_cache *s, struct slab *slab,
 static void init_object(struct kmem_cache *s, void *object, u8 val)
 {
 	u8 *p = kasan_reset_tag(object);
+	unsigned int orig_size = s->object_size;
 
-	if (s->flags & SLAB_RED_ZONE)
+	if (s->flags & SLAB_RED_ZONE) {
 		memset(p - s->red_left_pad, val, s->red_left_pad);
 
+		if (slub_debug_orig_size(s) && val == SLUB_RED_ACTIVE) {
+			unsigned int zone_start;
+
+			orig_size = get_orig_size(s, object);
+			zone_start = orig_size;
+
+			if (!freeptr_outside_object(s))
+				zone_start = max_t(unsigned int, orig_size,
+						s->offset + sizeof(void *));
+
+			/*
+			 * Redzone the extra space allocated by kmalloc
+			 * beyond the requested size.
+			 */
+			if (zone_start < s->object_size)
+				memset(p + zone_start, val,
+					s->object_size - zone_start);
+		}
+
+	}
+
 	if (s->flags & __OBJECT_POISON) {
-		memset(p, POISON_FREE, s->object_size - 1);
-		p[s->object_size - 1] = POISON_END;
+		memset(p, POISON_FREE, orig_size - 1);
+		p[orig_size - 1] = POISON_END;
 	}
 
 	if (s->flags & SLAB_RED_ZONE)
@@ -1120,6 +1147,7 @@ static int check_object(struct kmem_cache *s, struct slab *slab,
 {
 	u8 *p = object;
 	u8 *endobject = object + s->object_size;
+	unsigned int orig_size;
 
 	if (s->flags & SLAB_RED_ZONE) {
 		if (!check_bytes_and_report(s, slab, object, "Left Redzone",
@@ -1129,6 +1157,20 @@ static int check_object(struct kmem_cache *s, struct slab *slab,
 		if (!check_bytes_and_report(s, slab, object, "Right Redzone",
 			endobject, val, s->inuse - s->object_size))
 			return 0;
+
+		if (slub_debug_orig_size(s) && val == SLUB_RED_ACTIVE) {
+			orig_size = get_orig_size(s, object);
+
+			if (!freeptr_outside_object(s))
+				orig_size = max_t(unsigned int, orig_size,
+						s->offset + sizeof(void *));
+			if (s->object_size > orig_size &&
+				!check_bytes_and_report(s, slab, object,
+					"kmalloc Redzone", p + orig_size,
+					val, s->object_size - orig_size)) {
+				return 0;
+			}
+		}
 	} else {
 		if ((s->flags & SLAB_POISON) && s->object_size < s->inuse) {
 			check_bytes_and_report(s, slab, p, "Alignment padding",
@@ -4588,6 +4630,10 @@ size_t __ksize(const void *object)
 	if (unlikely(!folio_test_slab(folio)))
 		return folio_size(folio);
 
+#ifdef CONFIG_SLUB_DEBUG
+	skip_orig_size_check(folio_slab(folio)->slab_cache, object);
+#endif
+
 	return slab_ksize(folio_slab(folio)->slab_cache);
 }
 EXPORT_SYMBOL(__ksize);
-- 
2.27.0
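
One more illustration, again not part of the patch: the "legitimate
user" pattern that the __ksize() hook above whitelists looks roughly
like the sketch below. The helper name and the sizes are invented for
the example; only kmalloc()/ksize()/kfree() and the
skip_orig_size_check() behavior come from the patch itself.

#include <linux/slab.h>
#include <linux/string.h>

/*
 * After ksize(), the caller may legitimately use the whole buffer:
 * __ksize() calls skip_orig_size_check(), which widens the recorded
 * orig_size to s->object_size, so the new "kmalloc Redzone" check
 * stays quiet for this object.
 */
static void *alloc_full_buffer(size_t want, size_t *avail)
{
	char *buf = kmalloc(want, GFP_KERNEL);

	if (!buf)
		return NULL;

	*avail = ksize(buf);	/* e.g. 64 for a 42-byte request */
	memset(buf + want, 0, *avail - want);	/* now fair game */
	return buf;
}

Callers that never ask ksize() keep the stricter per-request redzone,
which is exactly the split between abusers and legitimate users that
the commit message describes.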