From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 28 Sep 2023 17:51:38 -0700
From: Kees Cook <keescook@chromium.org>
To: "Eric W. Biederman"
Cc: Alexander Viro, Christian Brauner, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, Sebastian Ott, Thomas Weißschuh, Pedro Falcato,
	linux-kernel@vger.kernel.org, linux-hardening@vger.kernel.org
Subject: Re: [PATCH v3 3/4] binfmt_elf: Provide prot bits as context for padzero() errors
Message-ID: <202309281750.FA45C0DBB@keescook>
References: <20230927033634.make.602-kees@kernel.org>
 <20230927034223.986157-3-keescook@chromium.org>
 <87y1gr8j51.fsf@email.froward.int.ebiederm.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <87y1gr8j51.fsf@email.froward.int.ebiederm.org>

On Wed, Sep 27, 2023 at 03:18:34PM -0500, Eric W. Biederman wrote:
> Kees Cook writes:
> 
> > Errors with padzero() should be caught unless we're expecting a
> > pathological (non-writable) segment. Report -EFAULT only when PROT_WRITE
> > is present.
> >
> > Additionally add some more documentation to padzero(), elf_map(), and
> > elf_load().
> 
> I wonder if this might be easier to just perform the PROT_WRITE
> test in elf_load, and to completely skip padzero if PROT_WRITE
> is not present.

Yeah, actually, after moving load_elf_library() to elf_load(), there's
only 1 caller of padzero... :P I'll work on that.

-Kees
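A rough sketch of what that could look like (illustrative only, not the
actual follow-up patch; the helper name zero_bss_tail is made up here),
with a non-writable segment skipping the zeroing entirely instead of
having its clear_user() failure ignored:

static int zero_bss_tail(unsigned long start, int prot)
{
	unsigned long nbyte = ELF_PAGEOFFSET(start);

	/* Nothing to zero: the mapping ends exactly on a page boundary. */
	if (!nbyte)
		return 0;

	/* Skip non-writable segments instead of tolerating the fault. */
	if (!(prot & PROT_WRITE))
		return 0;

	/* Zero from "start" up to the end of the containing page. */
	if (clear_user((void __user *)start, ELF_MIN_ALIGN - nbyte))
		return -EFAULT;

	return 0;
}

With that shape, clear_user() can only fail on a segment we actually
expect to be writable, so any failure is worth reporting as -EFAULT.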
> 
> Eric
> 
> > Cc: Eric Biederman
> > Cc: Alexander Viro
> > Cc: Christian Brauner
> > Cc: linux-fsdevel@vger.kernel.org
> > Cc: linux-mm@kvack.org
> > Suggested-by: Eric Biederman
> > Signed-off-by: Kees Cook
> > ---
> >  fs/binfmt_elf.c | 33 +++++++++++++++++++++++----------
> >  1 file changed, 23 insertions(+), 10 deletions(-)
> > 
> > diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
> > index 0214d5a949fc..b939cfe3215c 100644
> > --- a/fs/binfmt_elf.c
> > +++ b/fs/binfmt_elf.c
> > @@ -110,19 +110,21 @@ static struct linux_binfmt elf_format = {
> > 
> >  #define BAD_ADDR(x) (unlikely((unsigned long)(x) >= TASK_SIZE))
> > 
> > -/* We need to explicitly zero any fractional pages
> > -   after the data section (i.e. bss). This would
> > -   contain the junk from the file that should not
> > -   be in memory
> > +/*
> > + * We need to explicitly zero any trailing portion of the page that follows
> > + * p_filesz when it ends before the page ends (e.g. bss), otherwise this
> > + * memory will contain the junk from the file that should not be present.
> >   */
> > -static int padzero(unsigned long elf_bss)
> > +static int padzero(unsigned long address, int prot)
> >  {
> >  	unsigned long nbyte;
> > 
> > -	nbyte = ELF_PAGEOFFSET(elf_bss);
> > +	nbyte = ELF_PAGEOFFSET(address);
> >  	if (nbyte) {
> >  		nbyte = ELF_MIN_ALIGN - nbyte;
> > -		if (clear_user((void __user *) elf_bss, nbyte))
> > +		/* Only report errors when the segment is writable. */
> > +		if (clear_user((void __user *)address, nbyte) &&
> > +		    prot & PROT_WRITE)
> >  			return -EFAULT;
> >  	}
> >  	return 0;
> > @@ -348,6 +350,11 @@ create_elf_tables(struct linux_binprm *bprm, const struct elfhdr *exec,
> >  	return 0;
> >  }
> > 
> > +/*
> > + * Map "eppnt->p_filesz" bytes from "filep" offset "eppnt->p_offset"
> > + * into memory at "addr". (Note that p_filesz is rounded up to the
> > + * next page, so any extra bytes from the file must be wiped.)
> > + */
> >  static unsigned long elf_map(struct file *filep, unsigned long addr,
> >  		const struct elf_phdr *eppnt, int prot, int type,
> >  		unsigned long total_size)
> > @@ -387,6 +394,11 @@ static unsigned long elf_map(struct file *filep, unsigned long addr,
> >  	return(map_addr);
> >  }
> > 
> > +/*
> > + * Map "eppnt->p_filesz" bytes from "filep" offset "eppnt->p_offset"
> > + * into memory at "addr". Memory from "p_filesz" through "p_memsz"
> > + * rounded up to the next page is zeroed.
> > + */
> >  static unsigned long elf_load(struct file *filep, unsigned long addr,
> >  		const struct elf_phdr *eppnt, int prot, int type,
> >  		unsigned long total_size)
> > @@ -405,7 +417,8 @@ static unsigned long elf_load(struct file *filep, unsigned long addr,
> >  			eppnt->p_memsz;
> > 
> >  			/* Zero the end of the last mapped page */
> > -			padzero(zero_start);
> > +			if (padzero(zero_start, prot))
> > +				return -EFAULT;
> >  		}
> >  	} else {
> >  		map_addr = zero_start = ELF_PAGESTART(addr);
> > @@ -712,7 +725,7 @@ static unsigned long load_elf_interp(struct elfhdr *interp_elf_ex,
> >  	 * the file up to the page boundary, and zero it from elf_bss
> >  	 * up to the end of the page.
> >  	 */
> > -	if (padzero(elf_bss)) {
> > +	if (padzero(elf_bss, bss_prot)) {
> >  		error = -EFAULT;
> >  		goto out;
> >  	}
> > @@ -1407,7 +1420,7 @@ static int load_elf_library(struct file *file)
> >  		goto out_free_ph;
> > 
> >  	elf_bss = eppnt->p_vaddr + eppnt->p_filesz;
> > -	if (padzero(elf_bss)) {
> > +	if (padzero(elf_bss, PROT_WRITE)) {
> >  		error = -EFAULT;
> >  		goto out_free_ph;
> >  	}

-- 
Kees Cook