From: "Linux regression tracking (Thorsten Leemhuis)" <regressions@leemhuis.info>
To: Kees Cook <keescook@chromium.org>,
"Eric W. Biederman" <ebiederm@xmission.com>
Cc: Jan Bujak <j@exia.io>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
viro@zeniv.linux.org.uk, brauner@kernel.org,
linux-fsdevel@vger.kernel.org,
Linux kernel regressions list <regressions@lists.linux.dev>
Subject: Re: Recent-ish changes in binfmt_elf made my program segfault
Date: Thu, 1 Feb 2024 11:47:02 +0100 [thread overview]
Message-ID: <95eae92a-ecad-4e0e-b381-5835f370a9e7@leemhuis.info> (raw)
In-Reply-To: <202401221339.85DBD3931@keescook>
Hi, Thorsten here, the Linux kernel's regression tracker. Top-posting
for once, to make this easily accessible to everyone.
Eric, what's the status wrt. this regression? Things from here look
stalled, but I might be missing something.
Ciao, Thorsten (wearing his 'the Linux kernel's regression tracker' hat)
--
Everything you wanna know about Linux kernel regression tracking:
https://linux-regtracking.leemhuis.info/about/#tldr
If I did something stupid, please tell me, as explained on that page.
#regzbot poke
On 22.01.24 23:12, Kees Cook wrote:
> On Mon, Jan 22, 2024 at 03:01:06PM -0600, Eric W. Biederman wrote:
>> Kees Cook <keescook@chromium.org> writes:
>>
>>> On Mon, Jan 22, 2024 at 10:43:59AM -0600, Eric W. Biederman wrote:
>>>> Jan Bujak <j@exia.io> writes:
>>>>
>>>>> Hi.
>>>>>
>>>>> I recently updated my kernel and one of my programs started segfaulting.
>>>>>
>>>>> The issue seems to be related to how the kernel interprets PT_LOAD headers;
>>>>> consider the following program headers (from 'readelf' of my reproduction):
>>>>>
>>>>> Program Headers:
>>>>> Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
>>>>> LOAD 0x001000 0x10000 0x10000 0x000010 0x000010 R 0x1000
>>>>> LOAD 0x002000 0x11000 0x11000 0x000010 0x000010 RW 0x1000
>>>>> LOAD 0x002010 0x11010 0x11010 0x000000 0x000004 RW 0x1000
>>>>> LOAD 0x003000 0x12000 0x12000 0x0000d2 0x0000d2 R E 0x1000
>>>>> LOAD 0x004000 0x20000 0x20000 0x000004 0x000004 RW 0x1000
>>>>>
>>>>> Old kernels load this ELF file in the following way ('/proc/self/maps'):
>>>>>
>>>>> 00010000-00011000 r--p 00001000 00:02 131 ./bug-reproduction
>>>>> 00011000-00012000 rw-p 00002000 00:02 131 ./bug-reproduction
>>>>> 00012000-00013000 r-xp 00003000 00:02 131 ./bug-reproduction
>>>>> 00020000-00021000 rw-p 00004000 00:02 131 ./bug-reproduction
>>>>>
>>>>> And new kernels do it like this:
>>>>>
>>>>> 00010000-00011000 r--p 00001000 00:02 131 ./bug-reproduction
>>>>> 00011000-00012000 rw-p 00000000 00:00 0
>>>>> 00012000-00013000 r-xp 00003000 00:02 131 ./bug-reproduction
>>>>> 00020000-00021000 rw-p 00004000 00:02 131 ./bug-reproduction
>>>>>
>>>>> That map between 0x11000 and 0x12000 holds the program's '.data' and '.bss'
>>>>> sections, to which it tries to write; since the kernel no longer maps
>>>>> them, it crashes.
>>>>>
>>>>> I bisected the issue to the following commit:
>>>>>
>>>>> commit 585a018627b4d7ed37387211f667916840b5c5ea
>>>>> Author: Eric W. Biederman <ebiederm@xmission.com>
>>>>> Date: Thu Sep 28 20:24:29 2023 -0700
>>>>>
>>>>> binfmt_elf: Support segments with 0 filesz and misaligned starts
>>>>>
>>>>> I can confirm that with this commit the issue reproduces, and with it
>>>>> reverted it doesn't.
>>>>>
>>>>> I have prepared a minimal reproduction of the problem available here,
>>>>> along with all of the scripts I used for bisecting:
>>>>>
>>>>> https://github.com/koute/linux-elf-loading-bug
>>>>>
>>>>> You can either compile it from source (requires Rust and LLD), or there's
>>>>> a prebuilt binary in 'bin/bug-reproduction' which you can run. (It's tiny,
>>>>> so you can easily check with 'objdump -d' that it isn't malicious).
>>>>>
>>>>> On old kernels this will run fine, and on new kernels it will
>>>>> segfault.
>>>>
>>>> Frankly, your ELF binary is buggy, and probably the best fix would be to
>>>> fix the linker script that is used to generate your binary.
>>>>
>>>> The problem is the SYSV ABI defines everything in terms of pages and so
>>>> placing two ELF segments on the same page results in undefined behavior.
>>>>
>>>> The code was fixed to honor your .bss segment and now your .data segment
>>>> is being stomped, because you defined them to overlap.
>>>>
>>>> Ideally your linker script would place both your .data and .bss in
>>>> the same segment. That would both fix the issue and give you a more
>>>> compact ELF binary, while not changing the generated code at all.
>>>>
>>>>
>>>> That said, regressions suck and it would be good if we could update the
>>>> code to do something reasonable in this case.
>>>>
>>>> Perhaps we can update the .bss handling to just memset an existing
>>>> page if one has already been mapped, which would cleanly handle a case
>>>> like yours. I need to think about that for a moment to see what the
>>>> code would look like to do that.
>>>
>>> It's the "if one has already been mapped" part which might
>>> become expensive...
>>
>> I am wondering if perhaps we can add MAP_FIXED_NOREPLACE and take
>> some appropriate action if there is already a mapping there.
>
> Yeah, in the general case we had to back out MAP_FIXED_NOREPLACE usage
> for individual LOADs because there were so many cases of overlapping
> LOADs. :( Currently it's only used during the initial mapping (when
> "total_size" is set), to avoid colliding with the stack.
>
> But, as you suggest, if we only use it for filesz==0, it could work.
>
>> Such as printing a warning and skipping the action entirely for
>> a pure bss segment. That would essentially replicate the previous
>> behavior.
>
> Instead of failing, perhaps we just fall back to not using
> MAP_FIXED_NOREPLACE and do the memset? (And maybe pr_warn_once?)
>
>> At a minimum adding MAP_FIXED_NOREPLACE should allow us to
>> deterministically detect and warn about problems, making it easier
>> for people to understand why their binary won't run.
>
> Yeah, it seems like it's vm_brk_flags() that is clobbering the mapping,
> so we'd have to skip that when MAP_FIXED_NOREPLACE fails in the
> filesz==0 case?
>
>