From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Rik van Riel <riel@redhat.com>, Mel Gorman <mgorman@suse.de>,
Michal Hocko <mhocko@suse.cz>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
Hugh Dickins <hughd@google.com>,
Davidlohr Bueso <davidlohr.bueso@hp.com>,
David Gibson <david@gibson.dropbear.id.au>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 7/9] mm, hugetlb: add VM_NORESERVE check in vma_has_reserves()
Date: Tue, 16 Jul 2013 16:12:17 +0900
Message-ID: <20130716071216.GC30116@lge.com>
In-Reply-To: <874nbvhx90.fsf@linux.vnet.ibm.com>
On Tue, Jul 16, 2013 at 11:17:23AM +0530, Aneesh Kumar K.V wrote:
> Joonsoo Kim <iamjoonsoo.kim@lge.com> writes:
>
> > On Mon, Jul 15, 2013 at 08:41:12PM +0530, Aneesh Kumar K.V wrote:
> >> Joonsoo Kim <iamjoonsoo.kim@lge.com> writes:
> >>
> >> > If we map a region with MAP_NORESERVE and MAP_SHARED,
> >> > the reservation check is skipped, so we are no longer guaranteed
> >> > to be able to allocate a huge page at fault time.
> >> > The following example code demonstrates this situation.
> >> >
> >> > Assume a 2MB huge page size and nr_hugepages = 100 (a 200MB pool).
> >> >
> >> > fd = hugetlbfs_unlinked_fd();
> >> > if (fd < 0)
> >> >         return 1;
> >> >
> >> > /* Reserve 100 huge pages, i.e. the whole pool, for this mapping. */
> >> > size = 200 * MB;
> >> > flag = MAP_SHARED;
> >> > p = mmap(NULL, size, PROT_READ|PROT_WRITE, flag, fd, 0);
> >> > if (p == MAP_FAILED) {
> >> >         fprintf(stderr, "mmap() failed: %s\n", strerror(errno));
> >> >         return -1;
> >> > }
> >> >
> >> > /* Map one more huge page with MAP_NORESERVE: no reservation is
> >> >  * made, so the fault below dips into the reserved pool. */
> >> > size = 2 * MB;
> >> > flag = MAP_ANONYMOUS | MAP_SHARED | MAP_HUGETLB | MAP_NORESERVE;
> >> > p = mmap(NULL, size, PROT_READ|PROT_WRITE, flag, -1, 0);
> >> > if (p == MAP_FAILED) {
> >> >         fprintf(stderr, "mmap() failed: %s\n", strerror(errno));
> >> >         return -1;
> >> > }
> >> > p[0] = '0';
> >> > sleep(10);
> >> >
> >> > While sleep(10) is running, run 'cat /proc/meminfo' from another
> >> > shell. You will see the problem: HugePages_Free has dropped below
> >> > HugePages_Rsvd, i.e. a page that was reserved for the first mapping
> >> > has been handed out to the VM_NORESERVE mapping.
> >> >
> >> > The solution is simple: check VM_NORESERVE in vma_has_reserves().
> >> > This prevents a VM_NORESERVE mapping from taking a pre-allocated
> >> > huge page when the free count is at or below the reserve count.
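
(A side note for anyone running the reproducer: hugetlbfs_unlinked_fd()
and MB are not spelled out above. A minimal version might look like the
sketch below; the mount point /mnt/huge is an assumption, adjust it to
your setup.)

	#include <stdlib.h>
	#include <unistd.h>

	#define MB (1024UL * 1024UL)

	/* Create an unlinked file on a hugetlbfs mount and return its fd;
	 * the fd keeps the (now invisible) file alive until close(). */
	static int hugetlbfs_unlinked_fd(void)
	{
		char path[] = "/mnt/huge/repro-XXXXXX";
		int fd = mkstemp(path);

		if (fd >= 0)
			unlink(path);
		return fd;
	}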
> >>
> >> There is a problem with this patch, which I guess you are fixing in
> >> patch 9. Consider two processes:
> >>
> >> a) MAP_SHARED on fd
> >> b) MAP_SHARED | MAP_NORESERVE on the same fd
> >>
> >> We should allow (b) to access the page even though VM_NORESERVE is
> >> set and we are out of reserve space.
> >
> > I don't get your point.
> > Could you please elaborate?
>
>
> One process mmaps with MAP_SHARED and another with MAP_SHARED | MAP_NORESERVE.
> The first process reserves its pages from the hugetlb pool. If the
> second process then tries to dequeue a huge page while no unreserved
> pages are left, it will fail, because
>
> vma_has_reserves() now returns zero (VM_NORESERVE is set) and we can
> have (h->free_huge_pages - h->resv_huge_pages) == 0.
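
(The check in question, in dequeue_huge_page_vma(), is roughly the
following; this is a paraphrase of the current code, not an exact
quote:)

	/* Without a reservation for this VMA, only hand out a free huge
	 * page if doing so cannot break an existing reservation. */
	if (!vma_has_reserves(vma) &&
			h->free_huge_pages - h->resv_huge_pages == 0)
		goto err;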
I think this behavior is correct: a user who maps with VM_NORESERVE
should not assume that the allocation will always succeed. With patch 9
the allocation is guaranteed to succeed, but I consider that a side
effect.
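
(For reference, with this patch applied vma_has_reserves() would look
roughly like the sketch below; this paraphrases the change described
above rather than quoting the exact hunk:)

	static int vma_has_reserves(struct vm_area_struct *vma)
	{
		/* Patch 7: a VM_NORESERVE mapping never owns reserves. */
		if (vma->vm_flags & VM_NORESERVE)
			return 0;
		/* Shared mappings reserve their range at mmap() time. */
		if (vma->vm_flags & VM_MAYSHARE)
			return 1;
		/* A private mapping has reserves only if this VMA owns
		 * the reservation. */
		if (is_vma_resv_set(vma, HPAGE_RESV_OWNER))
			return 1;
		return 0;
	}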
> The below hunk in your patch 9 handles that
>
> +	if (vma->vm_flags & VM_NORESERVE) {
> +		/*
> +		 * This address is already reserved by another process
> +		 * (chg == 0), so we should decrement the reserved count.
> +		 * Without decrementing, the reserve count remains after
> +		 * releasing the inode, because this allocated page will go
> +		 * into the page cache and is regarded as coming from the
> +		 * reserved pool in the release step. Currently, we don't
> +		 * have any other solution to deal with this situation
> +		 * properly, so add a work-around here.
> +		 */
> +		if (vma->vm_flags & VM_MAYSHARE && chg == 0)
> +			return 1;
> +		else
> +			return 0;
> +	}
>
> so maybe both of these should be folded?
I think these patches should not be folded, because they handle two
separate issues. The reserve count mismatch addressed in patch 9 is not
introduced by patch 7.
Thanks.
>
> -aneesh
>
Thread overview: 50+ messages
2013-07-15 9:52 [PATCH 0/9] mm, hugetlb: clean-up and possible bug fix Joonsoo Kim
2013-07-15 9:52 ` [PATCH 1/9] mm, hugetlb: move up the code which check availability of free huge page Joonsoo Kim
2013-07-15 14:01 ` Aneesh Kumar K.V
2013-07-16 1:16 ` Joonsoo Kim
2013-07-16 3:36 ` Aneesh Kumar K.V
2013-07-16 5:10 ` Joonsoo Kim
2013-07-22 14:45 ` Michal Hocko
2013-07-15 9:52 ` [PATCH 2/9] mm, hugetlb: trivial commenting fix Joonsoo Kim
2013-07-15 13:12 ` Hillf Danton
2013-07-15 14:02 ` Aneesh Kumar K.V
2013-07-22 14:46 ` Michal Hocko
2013-07-15 9:52 ` [PATCH 3/9] mm, hugetlb: clean-up alloc_huge_page() Joonsoo Kim
2013-07-22 14:51 ` Michal Hocko
2013-07-23 7:29 ` Joonsoo Kim
2013-07-15 9:52 ` [PATCH 4/9] mm, hugetlb: fix and clean-up node iteration code to alloc or free Joonsoo Kim
2013-07-15 14:27 ` Aneesh Kumar K.V
2013-07-16 1:41 ` Joonsoo Kim
2013-07-17 2:00 ` Jianguo Wu
2013-07-18 6:46 ` Joonsoo Kim
2013-07-15 9:52 ` [PATCH 5/9] mm, hugetlb: remove redundant list_empty check in gather_surplus_pages() Joonsoo Kim
2013-07-15 14:31 ` Aneesh Kumar K.V
2013-07-16 1:42 ` Joonsoo Kim
2013-07-22 14:55 ` Michal Hocko
2013-07-15 9:52 ` [PATCH 6/9] mm, hugetlb: do not use a page in page cache for cow optimization Joonsoo Kim
2013-07-15 13:55 ` Aneesh Kumar K.V
2013-07-16 1:56 ` Joonsoo Kim
2013-07-17 8:55 ` Wanpeng Li
2013-07-15 9:52 ` [PATCH 7/9] mm, hugetlb: add VM_NORESERVE check in vma_has_reserves() Joonsoo Kim
2013-07-15 14:48 ` Aneesh Kumar K.V
2013-07-15 15:11 ` Aneesh Kumar K.V
2013-07-16 2:12 ` Joonsoo Kim
2013-07-16 5:47 ` Aneesh Kumar K.V
2013-07-16 7:12 ` Joonsoo Kim [this message]
2013-07-18 2:03 ` Wanpeng Li
2013-07-15 9:52 ` [PATCH 8/9] mm, hugetlb: remove decrement_hugepage_resv_vma() Joonsoo Kim
2013-07-15 14:50 ` Aneesh Kumar K.V
2013-07-17 9:31 ` Wanpeng Li
2013-07-15 9:52 ` [PATCH 9/9] mm, hugetlb: decrement reserve count if VM_NORESERVE alloc page cache Joonsoo Kim
2013-07-15 15:11 ` Aneesh Kumar K.V
2013-07-18 2:02 ` Wanpeng Li
2013-07-15 14:10 ` [PATCH 0/9] mm, hugetlb: clean-up and possible bug fix Aneesh Kumar K.V
2013-07-16 1:10 ` Joonsoo Kim
2013-07-16 1:27 ` Sam Ben
2013-07-16 1:45 ` Joonsoo Kim
2013-07-16 1:55 ` Sam Ben
2013-07-16 2:14 ` Joonsoo Kim