From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from psmtp.com (na3sys010amx204.postini.com [74.125.245.204])
	by kanga.kvack.org (Postfix) with SMTP id B0E376B0034
	for ; Fri, 24 May 2013 05:02:33 -0400 (EDT)
Received: by mail-wi0-f178.google.com with SMTP id hj6so1343028wib.5
	for ; Fri, 24 May 2013 02:02:32 -0700 (PDT)
MIME-Version: 1.0
In-Reply-To: <20130523152445.17549682ae45b5aab3f3cde0@linux-foundation.org>
References: <20130523052421.13864.83978.stgit@localhost6.localdomain6>
	<20130523052547.13864.83306.stgit@localhost6.localdomain6>
	<20130523152445.17549682ae45b5aab3f3cde0@linux-foundation.org>
Date: Fri, 24 May 2013 13:02:30 +0400
Message-ID: 
Subject: Re: [PATCH v8 9/9] vmcore: support mmap() on /proc/vmcore
From: Maxim Uvarov 
Content-Type: multipart/alternative; boundary=e89a8f234ce515854f04dd730f9a
Sender: owner-linux-mm@kvack.org
List-ID: 
To: Andrew Morton 
Cc: HATAYAMA Daisuke , riel@redhat.com, hughd@google.com, jingbai.ma@hp.com,
	"kexec@lists.infradead.org" , linux-kernel@vger.kernel.org,
	lisa.mitchell@hp.com, linux-mm@kvack.org, Atsushi Kumagai ,
	"Eric W. Biederman" , kosaki.motohiro@jp.fujitsu.com,
	zhangyanfei@cn.fujitsu.com, walken@google.com, Cliff Wickman ,
	Vivek Goyal 

--e89a8f234ce515854f04dd730f9a
Content-Type: text/plain; charset=ISO-8859-1

2013/5/24 Andrew Morton <akpm@linux-foundation.org>:

> On Thu, 23 May 2013 14:25:48 +0900 HATAYAMA Daisuke
> <d.hatayama@jp.fujitsu.com> wrote:
>
> > This patch introduces mmap_vmcore().
> >
> > Don't permit writable or executable mappings even with mprotect(),
> > because this mmap() is aimed at reading crash dump memory.
> > A non-writable mapping is also a requirement of remap_pfn_range() when
> > mapping linear pages on non-consecutive physical pages; see
> > is_cow_mapping().
> >
> > Set the VM_MIXEDMAP flag to remap memory by remap_pfn_range() and by
> > remap_vmalloc_range_partial() at the same time for a single
> > vma. do_munmap() can correctly clean a vma partially remapped by the
> > two functions in the abnormal case.
> > See zap_pte_range(), vm_normal_page() and
> > their comments for details.
> >
> > On x86-32 PAE kernels, mmap() supports at most 16TB of memory. This
> > limitation comes from the fact that the third argument of
> > remap_pfn_range(), pfn, is of 32-bit length on x86-32: unsigned long.
>
> More reviewing and testing, please.

Do you have a git tree to pull for both the kernel and userland changes?
I would like to do some more testing on my machines.

Maxim.

> From: Andrew Morton <akpm@linux-foundation.org>
> Subject: vmcore-support-mmap-on-proc-vmcore-fix
>
> Use min(), switch to the conventional error-unwinding approach.
>
> Cc: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
> Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
> Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
> Cc: Lisa Mitchell <lisa.mitchell@hp.com>
> Cc: Vivek Goyal <vgoyal@redhat.com>
> Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> ---
>
>  fs/proc/vmcore.c |   27 ++++++++++-----------------
>  1 file changed, 10 insertions(+), 17 deletions(-)
>
> diff -puN fs/proc/vmcore.c~vmcore-support-mmap-on-proc-vmcore-fix fs/proc/vmcore.c
> --- a/fs/proc/vmcore.c~vmcore-support-mmap-on-proc-vmcore-fix
> +++ a/fs/proc/vmcore.c
> @@ -218,9 +218,7 @@ static int mmap_vmcore(struct file *file
>  	if (start < elfcorebuf_sz) {
>  		u64 pfn;
>
> -		tsz = elfcorebuf_sz - start;
> -		if (size < tsz)
> -			tsz = size;
> +		tsz = min(elfcorebuf_sz - (size_t)start, size);
>  		pfn = __pa(elfcorebuf + start) >> PAGE_SHIFT;
>  		if (remap_pfn_range(vma, vma->vm_start, pfn, tsz,
>  				    vma->vm_page_prot))
> @@ -236,15 +234,11 @@ static int mmap_vmcore(struct file *file
>  	if (start < elfcorebuf_sz + elfnotes_sz) {
>  		void *kaddr;
>
> -		tsz = elfcorebuf_sz + elfnotes_sz - start;
> -		if (size < tsz)
> -			tsz = size;
> +		tsz = min(elfcorebuf_sz + elfnotes_sz - (size_t)start, size);
>  		kaddr = elfnotes_buf + start - elfcorebuf_sz;
>  		if (remap_vmalloc_range_partial(vma, vma->vm_start + len,
> -						kaddr, tsz)) {
> -			do_munmap(vma->vm_mm, vma->vm_start, len);
> -			return -EAGAIN;
> -		}
> +						kaddr, tsz))
> +			goto fail;
>  		size -= tsz;
>  		start += tsz;
>  		len += tsz;
> @@ -257,16 +251,12 @@ static int mmap_vmcore(struct file *file
>  		if (start < m->offset + m->size) {
>  			u64 paddr = 0;
>
> -			tsz = m->offset + m->size - start;
> -			if (size < tsz)
> -				tsz = size;
> +			tsz = min_t(size_t, m->offset + m->size - start, size);
>  			paddr = m->paddr + start - m->offset;
>  			if (remap_pfn_range(vma, vma->vm_start + len,
>  					    paddr >> PAGE_SHIFT, tsz,
> -					    vma->vm_page_prot)) {
> -				do_munmap(vma->vm_mm, vma->vm_start, len);
> -				return -EAGAIN;
> -			}
> +					    vma->vm_page_prot))
> +				goto fail;
>  			size -= tsz;
>  			start += tsz;
>  			len += tsz;
> @@ -277,6 +267,9 @@ static int mmap_vmcore(struct file *file
>  	}
>
>  	return 0;
> +fail:
> +	do_munmap(vma->vm_mm, vma->vm_start, len);
> +	return -EAGAIN;
>  }
>
>  static const struct file_operations proc_vmcore_operations = {
> _
>
> _______________________________________________
> kexec mailing list
> kexec@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/kexec
>

-- 
Best regards,
Maxim Uvarov

--e89a8f234ce515854f04dd730f9a
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable



--e89a8f234ce515854f04dd730f9a--

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org