Date: Tue, 16 May 2023 12:34:11 -0700
From: Kees Cook
To: "GONG, Ruiqi", Jann Horn
Cc: Vlastimil Babka, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-hardening@vger.kernel.org, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Alexander Lobakin, kasan-dev@googlegroups.com, Wang Weiyang, Xiu Jianfeng
Subject: Re: [PATCH RFC v2] Randomized slab caches for kmalloc()
Message-ID: <202305161204.CB4A87C13@keescook>
References: <20230508075507.1720950-1-gongruiqi1@huawei.com>
In-Reply-To: <20230508075507.1720950-1-gongruiqi1@huawei.com>

For new CCs, the start of this thread is here[0].
On Mon, May 08, 2023 at 03:55:07PM +0800, GONG, Ruiqi wrote:
> When exploiting memory vulnerabilities, "heap spraying" is a common
> technique targeting those related to dynamic memory allocation (i.e. the
> "heap"), and it plays an important role in a successful exploitation.
> Basically, it overwrites the memory area of a vulnerable object by
> triggering allocations in other subsystems or modules, thereby getting
> a reference to the targeted memory location. It's usable against various
> types of vulnerability, including use-after-free (UAF), heap
> out-of-bounds write, etc.

I heartily agree we need some better approaches to deal with UAF, and by
extension, heap spraying.

> There are (at least) two reasons why the heap can be sprayed: 1) generic
> slab caches are shared among different subsystems and modules, and
> 2) dedicated slab caches could be merged with the generic ones.
> Currently these two factors cannot be prevented at a low cost: the first
> one is a widely used memory allocation mechanism, and shutting down slab
> merging completely via `slub_nomerge` would be overkill.
>
> To efficiently prevent heap spraying, we propose the following approach:
> create multiple copies of the generic slab caches that will never be
> merged, and use a random one of them at allocation time. The random
> selection is based on the address of the code that calls `kmalloc()`,
> which means it is static at runtime (rather than dynamically determined
> each time an allocation is made, which could be bypassed by repeated
> brute-force spraying). In this way, the vulnerable object and memory
> allocated in other subsystems and modules will (most probably) be in
> different slab caches, which prevents the object from being sprayed.

This is a nice balance between the best option we have now
("slub_nomerge") and the most invasive change (type-based allocation
segregation, which requires at least extensive compiler support),
forcing some caches to be "out of reach".
> The performance overhead has been tested on a 40-core x86 server by
> comparing the results of `perf bench all` between kernels with and
> without this patch, based on the latest linux-next kernel, which shows
> only a minor difference. A subset of the benchmarks is listed below:
>
>                        control    experiment (avg of 3 samples)
> sched/messaging (sec)    0.019      0.019
> sched/pipe (sec)         5.253      5.340
> syscall/basic (sec)      0.741      0.742
> mem/memcpy (GB/sec)     15.258789  14.860495
> mem/memset (GB/sec)     48.828125  50.431069
>
> The memory overhead was measured by executing `free` after boot on a
> QEMU VM with 1GB total memory, and as expected, it's positively
> correlated with the number of cache copies:
>
>             control   4 copies   8 copies   16 copies
> total        969.8M    968.2M     968.2M     968.2M
> used          20.0M     21.9M      24.1M      26.7M
> free         936.9M    933.6M     931.4M     928.6M
> available    932.2M    928.8M     926.6M     923.9M

Great to see the impact: it's relatively tiny. Nice!

Back when we looked at cache quarantines, Jann pointed out that it was
still possible to perform heap spraying -- it just needed more
allocations. In this case, I think that's addressed (probabilistically)
by making it less likely that a cache where a UAF is reachable is merged
with something with strong exploitation primitives (e.g. msgsnd).

In light of all the UAF attack/defense breakdowns in Jann's blog
post[1], I'm curious where this defense lands. It seems like the
primitives described there (i.e. "upgrading" the heap spray into a page
table "type confusion") would be addressed probabilistically just like
any other style of attack. Jann, what do you think, and how does it
compare to the KCTF work[2] you've been doing?

In addition to this work, I'd like to see something like the kmalloc
caches, but for kmem_cache_alloc(), where a dedicated cache of
variably-sized allocations can be managed. With that, we can split off
_dedicated_ caches where we know there are strong exploitation
primitives (i.e. msgsnd, etc).
Then we can carve off known weak heap allocation caches as well as make
merging probabilistically harder. I imagine it would be possible to then
split this series into two halves: one that creates the "make
arbitrary-sized caches" API, and the second that applies that to kmalloc
globally (as done here).

> Signed-off-by: GONG, Ruiqi
> ---
>
> v2:
> - Use hash_64() and a per-boot random seed to select kmalloc() caches.

This is good: I was hoping there would be something to make it per-boot
randomized beyond just compile-time.

So, yes, I think this is worth it, but I'd like to see what design holes
Jann can poke in it first. :)

-Kees

[0] https://lore.kernel.org/lkml/20230508075507.1720950-1-gongruiqi1@huawei.com/
[1] https://googleprojectzero.blogspot.com/2021/10/how-simple-linux-kernel-memory.html
[2] https://github.com/thejh/linux/commit/a87ad16046f6f7fd61080ebfb93753366466b761

-- 
Kees Cook