From: Adam Litke <agl@us.ibm.com>
To: "Chen, Kenneth W" <kenneth.w.chen@intel.com>
Cc: David Gibson <david@gibson.dropbear.id.au>,
wli@holomorphy.com, Andrew Morton <akpm@osdl.org>,
linux-mm@kvack.org
Subject: RE: [patch] hugetlb strict commit accounting - v3
Date: Mon, 20 Mar 2006 14:21:44 -0600 [thread overview]
Message-ID: <1142886104.14508.12.camel@localhost.localdomain> (raw)
In-Reply-To: <B14CB421AD82C944A59ED6C1CA4E068F18F696@scsmsx404.amr.corp.intel.com>
On Mon, 2006-03-20 at 10:48 -0800, Chen, Kenneth W wrote:
> Adam Litke wrote on Monday, March 20, 2006 7:35 AM
> > On Thu, 2006-03-09 at 19:14 -0800, Chen, Kenneth W wrote:
> > > @@ -98,6 +98,12 @@ struct page *alloc_huge_page(struct vm_a
> > >  	int i;
> > >
> > >  	spin_lock(&hugetlb_lock);
> > > +	if (vma->vm_flags & VM_MAYSHARE)
> > > +		resv_huge_pages--;
> > > +	else if (free_huge_pages <= resv_huge_pages) {
> > > +		spin_unlock(&hugetlb_lock);
> > > +		return NULL;
> > > +	}
> > >  	page = dequeue_huge_page(vma, addr);
> > >  	if (!page) {
> > >  		spin_unlock(&hugetlb_lock);
> >
> > Unfortunately this will break down when two or more threads race to
> > allocate the same page. You end up with a double-decrement of
> > resv_huge_pages even though only one thread will win the race.
>
> Are you sure? David introduced hugetlb_instantiation_mutex to serialize
> the entire hugetlb fault path, so such a race is no longer possible. I
> previously quipped about it, and soon realized that for private mappings
> such a thing is inevitable. And even for shared mappings, that means a
> back-out path isn't needed. I will add one anyway as a defensive measure.
You're right. I forgot about that patch... With it applied, everything
works correctly.
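
For anyone else following along, the serialization in question looks
roughly like this - a simplified sketch from memory, not the actual patch
text (for one thing, the real hugetlb_no_page() takes more arguments):

/*
 * One mutex covers the whole hugetlb fault path, so two threads
 * faulting the same address cannot both reach alloc_huge_page()
 * and double-decrement resv_huge_pages.
 */
static DEFINE_MUTEX(hugetlb_instantiation_mutex);

int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
		  unsigned long address, int write_access)
{
	pte_t *ptep;
	int ret = VM_FAULT_MINOR;

	ptep = huge_pte_alloc(mm, address);
	if (!ptep)
		return VM_FAULT_OOM;

	mutex_lock(&hugetlb_instantiation_mutex);
	if (pte_none(*ptep))
		/* simplified: the real call also passes ptep along */
		ret = hugetlb_no_page(mm, vma, address, write_access);
	mutex_unlock(&hugetlb_instantiation_mutex);

	return ret;
}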
> Thanks for bringing this up, though - there is one place where it still
> has a problem: allocation can fail under file system quota.
>
> Which brings up another interesting question: should a private mapping
> hold file system quota? If it does, as it does now, that means the file
> system quota needs to be reserved up front along with the hugetlb page
> reservation.
I must profess my ignorance about the filesystem quota part; I've never
seen it used in practice as a resource-limiting lever. That said, I
think we need to ensure that either both shared and private mappings
hold quota, or neither holds it.
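
If we did go the "both hold quota" route, I imagine it would look
something like the sketch below: charge the filesystem quota up front
when the reservation is taken, for shared and private alike, and back it
out on failure. Purely hypothetical - the hugetlb_get_quota() /
hugetlb_put_quota() helper names are from memory (fs/hugetlbfs), and the
wrapper itself is made up, not taken from Ken's patch:

/*
 * Hypothetical helper: charge npages worth of hugetlbfs quota at
 * reservation time, so shared and private mappings are treated the
 * same way and failures show up at mmap() rather than at fault time.
 */
static int hugetlb_reserve_quota(struct address_space *mapping,
				 unsigned long npages)
{
	unsigned long i;

	for (i = 0; i < npages; i++) {
		if (hugetlb_get_quota(mapping)) {
			/* back out whatever we already charged */
			while (i--)
				hugetlb_put_quota(mapping);
			return -ENOMEM;
		}
	}
	return 0;
}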
--
Adam Litke - (agl at us.ibm.com)
IBM Linux Technology Center