From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Catalin Marinas <catalin.marinas@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Russell King <linux@armlinux.org.uk>,
	Will Deacon <will@kernel.org>,
	<linux-arm-kernel@lists.infradead.org>, <linux-mm@kvack.org>
Subject: Re: [PATCH 1/2] arm64: mm: drop VM_FAULT_BADMAP/VM_FAULT_BADACCESS
Date: Wed, 10 Apr 2024 18:58:27 +0800	[thread overview]
Message-ID: <ae1be698-6e94-46de-83fd-2d94bac98afe@huawei.com> (raw)
In-Reply-To: <ec022d1d-7f60-4893-8418-2ed635a7d528@huawei.com>



On 2024/4/10 9:30, Kefeng Wang wrote:
> 
> 
> On 2024/4/9 22:28, Catalin Marinas wrote:
>> Hi Kefeng,
>>
>> On Sun, Apr 07, 2024 at 04:12:10PM +0800, Kefeng Wang wrote:
>>> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
>>> index 405f9aa831bd..61a2acae0dca 100644
>>> --- a/arch/arm64/mm/fault.c
>>> +++ b/arch/arm64/mm/fault.c
>>> @@ -500,9 +500,6 @@ static bool is_write_abort(unsigned long esr)
>>>       return (esr & ESR_ELx_WNR) && !(esr & ESR_ELx_CM);
>>>   }
>>> -#define VM_FAULT_BADMAP        ((__force vm_fault_t)0x010000)
>>> -#define VM_FAULT_BADACCESS    ((__force vm_fault_t)0x020000)
>>> -
>>>   static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>>>                      struct pt_regs *regs)
>>>   {
>>> @@ -513,6 +510,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>>>       unsigned int mm_flags = FAULT_FLAG_DEFAULT;
>>>       unsigned long addr = untagged_addr(far);
>>>       struct vm_area_struct *vma;
>>> +    int si_code;
>>
>> I think we should initialise this to 0. Currently all paths seem to set
>> si_code to something meaningful, but I'm not sure the last 'else' clause
>> in this patch is guaranteed to cover exactly those earlier code paths
>> that update si_code. I'm not talking about the 'goto bad_area' paths,
>> since they set 'fault' to 0, but about the fall-through after the second
>> (under the mm lock) handle_mm_fault().
> 
> Rechecking: without this patch, the second handle_mm_fault() never
> returns VM_FAULT_BADACCESS, but it can return VM_FAULT_SIGSEGV (and
> possibly other errors), which is not handled by the other error paths:
> 
>   handle_mm_fault()
>     ret = sanitize_fault_flags(vma, &flags);
>     if (!arch_vma_access_permitted())
>         ret = VM_FAULT_SIGSEGV;
> 
> With that fault value, the original logic
> 
>   fault == VM_FAULT_BADACCESS ? SEGV_ACCERR : SEGV_MAPERR,
> 
> resolves to SEGV_MAPERR, so I think we should set the default si_code
> to SEGV_MAPERR.
> 
> 
>>
>>>       if (kprobe_page_fault(regs, esr))
>>>           return 0;
>>> @@ -572,9 +570,10 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>>>       if (!(vma->vm_flags & vm_flags)) {
>>>           vma_end_read(vma);
>>> -        fault = VM_FAULT_BADACCESS;
>>> +        fault = 0;
>>> +        si_code = SEGV_ACCERR;
>>>           count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
>>> -        goto done;
>>> +        goto bad_area;
>>>       }
>>>       fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
>>>       if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
>>> @@ -599,15 +598,18 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>>>   retry:
>>>       vma = lock_mm_and_find_vma(mm, addr, regs);
>>>       if (unlikely(!vma)) {
>>> -        fault = VM_FAULT_BADMAP;
>>> -        goto done;
>>> +        fault = 0;
>>> +        si_code = SEGV_MAPERR;
>>> +        goto bad_area;
>>>       }
>>> -    if (!(vma->vm_flags & vm_flags))
>>> -        fault = VM_FAULT_BADACCESS;
>>> -    else
>>> -        fault = handle_mm_fault(vma, addr, mm_flags, regs);
>>> +    if (!(vma->vm_flags & vm_flags)) {
>>> +        fault = 0;
>>> +        si_code = SEGV_ACCERR;
>>> +        goto bad_area;
>>> +    }
>>
>> What's releasing the mm lock here? Prior to this change, we either fell
>> through to the mmap_read_unlock() below, or handle_mm_fault() released
>> the lock itself (VM_FAULT_RETRY, VM_FAULT_COMPLETED).
> 
> Indeed, will fix.
> 
>>
>>> +    fault = handle_mm_fault(vma, addr, mm_flags, regs);
>>>       /* Quick path to respond to signals */
>>>       if (fault_signal_pending(fault, regs)) {
>>>           if (!user_mode(regs))
>>> @@ -626,13 +628,11 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>>>       mmap_read_unlock(mm);
>>>   done:
>>> -    /*
>>> -     * Handle the "normal" (no error) case first.
>>> -     */
>>> -    if (likely(!(fault & (VM_FAULT_ERROR | VM_FAULT_BADMAP |
>>> -                  VM_FAULT_BADACCESS))))
>>> +    /* Handle the "normal" (no error) case first. */
>>> +    if (likely(!(fault & VM_FAULT_ERROR)))
>>>           return 0;

Another choice: we could set si_code = SEGV_MAPERR here, since the normal
page fault path doesn't use si_code; only the error path needs it
initialized.
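
Something like the following (a rough, untested sketch on top of this
patch, only meant to show the placement):

 done:
	/* Handle the "normal" (no error) case first. */
	if (likely(!(fault & VM_FAULT_ERROR)))
		return 0;

	/*
	 * Error path from here on: default to SEGV_MAPERR, matching the
	 * old "fault != VM_FAULT_BADACCESS" behaviour.  The goto bad_area
	 * callers above jump past this and keep the si_code they set.
	 */
	si_code = SEGV_MAPERR;

 bad_area: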


>>> +bad_area:
>>>       /*
>>>        * If we are in kernel mode at this point, we have no context to
>>>        * handle this fault with.
>>> @@ -667,13 +667,8 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>>>           arm64_force_sig_mceerr(BUS_MCEERR_AR, far, lsb, inf->name);
>>>       } else {
>>> -        /*
>>> -         * Something tried to access memory that isn't in our memory
>>> -         * map.
>>> -         */
>>> -        arm64_force_sig_fault(SIGSEGV,
>>> -                      fault == VM_FAULT_BADACCESS ? SEGV_ACCERR : SEGV_MAPERR,
>>> -                      far, inf->name);
>>> +        /* Something tried to access memory that out of memory map */
>>> +        arm64_force_sig_fault(SIGSEGV, si_code, far, inf->name);
>>>       }
>>
>> We can get to the 'else' clause after the second handle_mm_fault(). Do we
>> guarantee that 'fault == 0' in this last block? If not, maybe add a warning
>> and some safe initialisation for 'si_code' to avoid leaking stack data.
> 
> As analyzed above, it should be sufficient to default si_code to
> SEGV_MAPERR, right?
> 
> Thanks.
> 
> 
>>
> 
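
For completeness, a rough, untested sketch of how the two points above
(releasing the mmap lock before the new goto, and giving si_code a safe
default) could look together in v2:

	int si_code = SEGV_MAPERR;	/* default for the fall-through error case */
	...
 retry:
	vma = lock_mm_and_find_vma(mm, addr, regs);
	if (unlikely(!vma)) {
		fault = 0;
		si_code = SEGV_MAPERR;
		goto bad_area;
	}

	if (!(vma->vm_flags & vm_flags)) {
		/* lock_mm_and_find_vma() succeeded, so drop the lock here */
		mmap_read_unlock(mm);
		fault = 0;
		si_code = SEGV_ACCERR;
		goto bad_area;
	}

	fault = handle_mm_fault(vma, addr, mm_flags, regs);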


Thread overview: 13+ messages
2024-04-07  8:12 [PATCH -next 0/2] mm: remove arch's private VM_FAULT_BADMAP/BADACCESS Kefeng Wang
2024-04-07  8:12 ` [PATCH 1/2] arm64: mm: drop VM_FAULT_BADMAP/VM_FAULT_BADACCESS Kefeng Wang
2024-04-09 14:28   ` Catalin Marinas
2024-04-10  1:30     ` Kefeng Wang
2024-04-10 10:58       ` Kefeng Wang [this message]
2024-04-11  9:59         ` Catalin Marinas
2024-04-11 11:11           ` Kefeng Wang
2024-04-10 11:24   ` Aishwarya TCV
2024-04-10 11:53     ` Kefeng Wang
2024-04-10 12:39       ` Cristian Marussi
2024-04-10 12:48         ` Kefeng Wang
2024-04-10 20:18           ` Andrew Morton
2024-04-07  8:12 ` [PATCH 2/2] arm: " Kefeng Wang
