> Test-case for that would be helpful, as normal malloc()'ed anon memory
> cannot be subject for the bug. Unless I miss something obvious.

I've modified the test-case attached to the bug and now it doesn't use
malloc()'ed memory but file-backed mmap shared memory.

On Tue, May 17, 2016 at 5:06 PM, Kirill A. Shutemov wrote:
> On Tue, May 17, 2016 at 04:56:02PM +0530, Ashish Srivastava wrote:
> > Yes, the original repro was using a custom allocator but I was seeing the
> > issue with malloc'd memory as well on my (ARMv7) platform.
>
> Test-case for that would be helpful, as normal malloc()'ed anon memory
> cannot be subject for the bug. Unless I miss something obvious.
>
> > I agree that the repro code won't reliably work, so I have modified the
> > repro code attached to the bug to use file-backed memory.
> >
> > That really is the root cause of the problem. I can make the following
> > change in the kernel that makes the slow-writes problem go away.
> > It makes vma_set_page_prot return the value of vma_wants_writenotify to
> > the caller after setting vma->vm_page_prot.
> >
> > In vma_set_page_prot:
> >
> > -void vma_set_page_prot(struct vm_area_struct *vma)
> > +bool vma_set_page_prot(struct vm_area_struct *vma)
> >  {
> >         unsigned long vm_flags = vma->vm_flags;
> >
> >         vma->vm_page_prot = vm_pgprot_modify(vma->vm_page_prot, vm_flags);
> >         if (vma_wants_writenotify(vma)) {
> >                 vm_flags &= ~VM_SHARED;
> >                 vma->vm_page_prot = vm_pgprot_modify(vma->vm_page_prot,
> >                                                      vm_flags);
> > +               return 1;
> >         }
> > +       return 0;
> >  }
> >
> > In mprotect_fixup:
> >
> >          * held in write mode.
> >          */
> >         vma->vm_flags = newflags;
> > -       dirty_accountable = vma_wants_writenotify(vma);
> > -       vma_set_page_prot(vma);
> > +       dirty_accountable = vma_set_page_prot(vma);
> >
> >         change_protection(vma, start, end, vma->vm_page_prot,
> >                           dirty_accountable, 0);
>
> That looks good to me. Please prepare proper patch.
>
> --
>  Kirill A. Shutemov
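For readers without access to the bug tracker, a minimal sketch of the kind of
repro described above follows: writes to a file-backed MAP_SHARED mapping after
an mprotect() call, which is the mprotect_fixup() path the patch changes. This
is not the test case attached to the bug; the file name, mapping size, and
timing code are illustrative only.

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

#define MAP_LEN (64UL << 20)    /* 64 MiB, arbitrary */

int main(void)
{
	/* Back the mapping with a real file so the VMA is shared and
	 * file-backed, i.e. subject to vma_wants_writenotify(). */
	char path[] = "/tmp/writenotify-XXXXXX";
	int fd = mkstemp(path);
	if (fd < 0 || ftruncate(fd, MAP_LEN) < 0) {
		perror("setup");
		return 1;
	}
	unlink(path);

	char *p = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE,
		       MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Populate and dirty the pages once. */
	memset(p, 1, MAP_LEN);

	/* mprotect() over the existing range goes through mprotect_fixup(). */
	if (mprotect(p, MAP_LEN, PROT_READ | PROT_WRITE) < 0) {
		perror("mprotect");
		return 1;
	}

	/* Time a second pass of writes after the mprotect() call. */
	struct timespec t0, t1;
	clock_gettime(CLOCK_MONOTONIC, &t0);
	memset(p, 2, MAP_LEN);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double secs = (t1.tv_sec - t0.tv_sec) +
		      (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("writes after mprotect: %.3f s\n", secs);

	munmap(p, MAP_LEN);
	close(fd);
	return 0;
}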