From mboxrd@z Thu Jan 1 00:00:00 1970
From: Alexander Popov
To: Kees Cook, Jann Horn, Will Deacon, Andrey Ryabinin, Alexander Potapenko,
    Dmitry Vyukov, Christoph Lameter, Pekka Enberg, David Rientjes,
    Joonsoo Kim, Andrew Morton, Masahiro Yamada, Masami Hiramatsu,
    Steven Rostedt, Peter Zijlstra, Krzysztof Kozlowski, Patrick Bellasi,
    David Howells, Eric Biederman, Johannes Weiner, Laura Abbott,
    Arnd Bergmann, Greg Kroah-Hartman, Daniel Micay, Andrey Konovalov,
    Matthew Wilcox, Pavel Machek, Valentin Schneider,
    kasan-dev@googlegroups.com, linux-mm@kvack.org,
    kernel-hardening@lists.openwall.com, linux-kernel@vger.kernel.org,
    Alexander Popov
Cc: notify@kernel.org
Subject: [PATCH RFC v2 4/6] mm: Implement slab quarantine randomization
Date: Tue, 29 Sep 2020 21:35:11 +0300
Message-Id: <20200929183513.380760-5-alex.popov@linux.com>
In-Reply-To: <20200929183513.380760-1-alex.popov@linux.com>
References: <20200929183513.380760-1-alex.popov@linux.com>

Randomization is essential for the security properties of the slab
quarantine. Without it, the number of kmalloc()+kfree() calls needed to
overwrite a vulnerable object is almost constant, which would enable
stable use-after-free exploitation; we should not allow that.

This commit contains compact and admittedly hackish changes that
introduce quarantine randomization. First, all quarantine batches are
filled with objects. Then, during quarantine reduction, a batch is
chosen at random and roughly half of its objects, also chosen at
random, are freed. The randomized quarantine thus releases a freed
object at an unpredictable moment, which breaks the heap spraying
technique employed by use-after-free exploits.
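The reduction scheme described above (fill all batches, then release about
half of a randomly chosen batch per pass) can be modeled outside the kernel.
The following userspace C sketch is illustrative only: the batch counts, the
`fill_batches()`/`reduce_random()`/`drain()` helpers, and the use of libc
`rand()` in place of the kernel's `get_random_int()` are all invented for
the example.

```c
#include <assert.h>
#include <stdlib.h>

#define BATCHES 8		/* arbitrary; the patch uses QUARANTINE_BATCHES */
#define OBJS_PER_BATCH 16	/* arbitrary object count per batch */

/* One counter per batch: how many quarantined objects it still holds. */
static int batch[BATCHES];

static void fill_batches(void)
{
	for (int i = 0; i < BATCHES; i++)
		batch[i] = OBJS_PER_BATCH;
}

/*
 * One "reduce" pass: pick a random batch and release each of its
 * objects with probability 1/2, mirroring the per-object coin flip
 * in qlist_move_random(). Returns the number of objects released.
 */
static int reduce_random(void)
{
	int b = rand() % BATCHES;
	int released = 0;

	for (int i = 0; i < batch[b]; i++)
		if (rand() % 2 == 0)
			released++;
	batch[b] -= released;
	return released;
}

/*
 * Count how many reduce passes a full quarantine needs before every
 * object has been released. The answer depends on the random seed,
 * i.e. the release moment of any given object is unpredictable.
 */
static int drain(unsigned int seed)
{
	int passes = 0, left;

	srand(seed);
	fill_batches();
	do {
		reduce_random();
		passes++;
		left = 0;
		for (int i = 0; i < BATCHES; i++)
			left += batch[i];
	} while (left > 0);
	return passes;
}
```

Since each pass touches a single batch, `drain()` always needs at least
`BATCHES` passes, but the exact count varies with the seed, which is the
unpredictability this patch is after.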
Signed-off-by: Alexander Popov
---
 mm/kasan/quarantine.c | 79 +++++++++++++++++++++++++++++++++++++------
 1 file changed, 69 insertions(+), 10 deletions(-)

diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
index 61666263c53e..4ce100605086 100644
--- a/mm/kasan/quarantine.c
+++ b/mm/kasan/quarantine.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include
+#include <linux/random.h>

 #include "../slab.h"
 #include "kasan.h"
@@ -89,8 +90,13 @@ static void qlist_move_all(struct qlist_head *from, struct qlist_head *to)
 }

 #define QUARANTINE_PERCPU_SIZE (1 << 20)
+
+#ifdef CONFIG_KASAN
 #define QUARANTINE_BATCHES \
 	(1024 > 4 * CONFIG_NR_CPUS ? 1024 : 4 * CONFIG_NR_CPUS)
+#else
+#define QUARANTINE_BATCHES 128
+#endif

 /*
  * The object quarantine consists of per-cpu queues and a global queue,
@@ -110,10 +116,7 @@ DEFINE_STATIC_SRCU(remove_cache_srcu);
 /* Maximum size of the global queue. */
 static unsigned long quarantine_max_size;

-/*
- * Target size of a batch in global_quarantine.
- * Usually equal to QUARANTINE_PERCPU_SIZE unless we have too much RAM.
- */
+/* Target size of a batch in global_quarantine. */
 static unsigned long quarantine_batch_size;

 /*
@@ -191,7 +194,12 @@ void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache)

 	q = this_cpu_ptr(&cpu_quarantine);
 	qlist_put(q, &info->quarantine_link, cache->size);
+#ifdef CONFIG_KASAN
 	if (unlikely(q->bytes > QUARANTINE_PERCPU_SIZE)) {
+#else
+	if (unlikely(q->bytes > min_t(size_t, QUARANTINE_PERCPU_SIZE,
+				      READ_ONCE(quarantine_batch_size)))) {
+#endif
 		qlist_move_all(q, &temp);

 		raw_spin_lock(&quarantine_lock);
@@ -204,7 +212,7 @@ void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache)
 			new_tail = quarantine_tail + 1;
 			if (new_tail == QUARANTINE_BATCHES)
 				new_tail = 0;
-			if (new_tail != quarantine_head)
+			if (new_tail != quarantine_head || !IS_ENABLED(CONFIG_KASAN))
 				quarantine_tail = new_tail;
 		}
 		raw_spin_unlock(&quarantine_lock);
@@ -213,12 +221,43 @@ void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache)
 	local_irq_restore(flags);
 }

+static void qlist_move_random(struct qlist_head *from, struct qlist_head *to)
+{
+	struct qlist_node *curr;
+
+	if (unlikely(qlist_empty(from)))
+		return;
+
+	curr = from->head;
+	qlist_init(from);
+	while (curr) {
+		struct qlist_node *next = curr->next;
+		struct kmem_cache *obj_cache = qlink_to_cache(curr);
+		int rnd = get_random_int();
+
+		/*
+		 * Hackish quarantine randomization, part 2:
+		 * move only 1/2 of objects to the destination list.
+		 * TODO: use random bits sparingly for better performance.
+		 */
+		if (rnd % 2 == 0)
+			qlist_put(to, curr, obj_cache->size);
+		else
+			qlist_put(from, curr, obj_cache->size);
+
+		curr = next;
+	}
+}
+
 void quarantine_reduce(void)
 {
-	size_t total_size, new_quarantine_size, percpu_quarantines;
+	size_t total_size;
 	unsigned long flags;
 	int srcu_idx;
 	struct qlist_head to_free = QLIST_INIT;
+#ifdef CONFIG_KASAN
+	size_t new_quarantine_size, percpu_quarantines;
+#endif

 	if (likely(READ_ONCE(quarantine_size) <=
		   READ_ONCE(quarantine_max_size)))
@@ -236,12 +275,12 @@ void quarantine_reduce(void)
 	srcu_idx = srcu_read_lock(&remove_cache_srcu);
 	raw_spin_lock_irqsave(&quarantine_lock, flags);

-	/*
-	 * Update quarantine size in case of hotplug. Allocate a fraction of
-	 * the installed memory to quarantine minus per-cpu queue limits.
-	 */
+	/* Update quarantine size in case of hotplug */
 	total_size = (totalram_pages() << PAGE_SHIFT) / QUARANTINE_FRACTION;
+
+#ifdef CONFIG_KASAN
+	/* Subtract per-cpu queue limits from total quarantine size */
 	percpu_quarantines = QUARANTINE_PERCPU_SIZE * num_online_cpus();
 	new_quarantine_size = (total_size < percpu_quarantines) ?
		0 : total_size - percpu_quarantines;
@@ -257,6 +296,26 @@
 		if (quarantine_head == QUARANTINE_BATCHES)
 			quarantine_head = 0;
 	}
+#else /* CONFIG_KASAN */
+	/*
+	 * Don't subtract per-cpu queue limits from total quarantine
+	 * size to consume all quarantine slots.
+	 */
+	WRITE_ONCE(quarantine_max_size, total_size);
+	WRITE_ONCE(quarantine_batch_size, total_size / QUARANTINE_BATCHES);
+
+	/*
+	 * Hackish quarantine randomization, part 1:
+	 * pick a random batch for reducing.
+	 */
+	if (likely(quarantine_size > quarantine_max_size)) {
+		do {
+			quarantine_head = get_random_int() % QUARANTINE_BATCHES;
+		} while (quarantine_head == quarantine_tail);
+		qlist_move_random(&global_quarantine[quarantine_head], &to_free);
+		WRITE_ONCE(quarantine_size, quarantine_size - to_free.bytes);
+	}
+#endif

 	raw_spin_unlock_irqrestore(&quarantine_lock, flags);

-- 
2.26.2