From: David Rientjes <rientjes@google.com>
To: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
Cc: linux-mm@kvack.org, linux-numa@vger.kernel.org,
akpm@linux-foundation.org, Mel Gorman <mel@csn.ul.ie>,
Nishanth Aravamudan <nacc@us.ibm.com>,
Adam Litke <agl@us.ibm.com>, Andy Whitcroft <apw@canonical.com>,
eric.whitney@hp.com
Subject: Re: [PATCH 3/5] hugetlb: derive huge pages nodes allowed from task mempolicy
Date: Thu, 27 Aug 2009 12:40:44 -0700 (PDT) [thread overview]
Message-ID: <alpine.DEB.2.00.0908271236190.14815@chino.kir.corp.google.com> (raw)
In-Reply-To: <1251233347.16229.0.camel@useless.americas.hpqcorp.net>
On Tue, 25 Aug 2009, Lee Schermerhorn wrote:
> > > Index: linux-2.6.31-rc6-mmotm-090820-1918/mm/hugetlb.c
> > > ===================================================================
> > > --- linux-2.6.31-rc6-mmotm-090820-1918.orig/mm/hugetlb.c 2009-08-24 12:12:50.000000000 -0400
> > > +++ linux-2.6.31-rc6-mmotm-090820-1918/mm/hugetlb.c 2009-08-24 12:12:53.000000000 -0400
> > > @@ -1257,10 +1257,13 @@ static int adjust_pool_surplus(struct hs
> > >  static unsigned long set_max_huge_pages(struct hstate *h, unsigned long count)
> > >  {
> > >  	unsigned long min_count, ret;
> > > +	nodemask_t *nodes_allowed;
> > >
> > >  	if (h->order >= MAX_ORDER)
> > >  		return h->max_huge_pages;
> > >
> >
> > Why can't you simply do this?
> >
> > 	struct mempolicy *pol = NULL;
> > 	nodemask_t *nodes_allowed = &node_online_map;
> >
> > 	local_irq_disable();
> > 	pol = current->mempolicy;
> > 	mpol_get(pol);
> > 	local_irq_enable();
> > 	if (pol) {
> > 		switch (pol->mode) {
> > 		case MPOL_BIND:
> > 		case MPOL_INTERLEAVE:
> > 			nodes_allowed = &pol->v.nodes;
> > 			break;
> > 		case MPOL_PREFERRED:
> > 			... use NODEMASK_SCRATCH() ...
> > 		default:
> > 			BUG();
> > 		}
> > 	}
> > 	mpol_put(pol);
> >
> > and then use nodes_allowed throughout set_max_huge_pages()?
>
>
> Well, I do use nodes_allowed [pointer] throughout set_max_huge_pages().
Yeah, the above code would all be in set_max_huge_pages() and
huge_mpol_nodes_allowed() would be removed.
> NODEMASK_SCRATCH() didn't exist when I wrote this, and I can't be sure
> it will return a kmalloc()'d nodemask. I need one because a NULL
> nodemask pointer means "all online nodes" [really all nodes with memory,
> I suppose], so huge_mpol_nodes_allowed() must return a pointer to a
> kmalloc()'d nodemask. I want to keep the access to the internals of
> mempolicy in mempolicy.[ch], thus the call out to
> huge_mpol_nodes_allowed(), instead of open coding it.
Ok, so you could add a mempolicy.c helper function that returns a
nodemask_t * and either points to &mpol->v.nodes for most cases, after
taking a reference on mpol with mpol_get(), or points to a nodemask
dynamically allocated with NODEMASK_ALLOC() for MPOL_PREFERRED.
This works nicely because either way you still hold a reference to mpol,
so you'll need to call into an mpol_nodemask_free() function which can
use the same switch statement:
	void mpol_nodemask_free(struct mempolicy *mpol,
				nodemask_t *nodes_allowed)
	{
		switch (mpol->mode) {
		case MPOL_PREFERRED:
			kfree(nodes_allowed);
			break;
		default:
			break;
		}
		mpol_put(mpol);
	}
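The get/free pairing above can be sketched end-to-end in userspace C. Everything here is a toy model under stated assumptions: the refcount field stands in for mpol_get()/mpol_put(), malloc() for NODEMASK_ALLOC(), and the helper names are illustrative, not the proposed kernel functions.

```c
#include <assert.h>
#include <stdlib.h>

enum mode { MPOL_BIND, MPOL_INTERLEAVE, MPOL_PREFERRED };

typedef struct { unsigned long bits; } nodemask_t;

struct mempolicy {
	enum mode mode;
	int refcnt;		/* models the mpol reference count */
	nodemask_t nodes;
};

/*
 * "Get" side: take a reference, then either point into the policy's
 * own nodemask (BIND/INTERLEAVE) or hand back a freshly allocated
 * mask (PREFERRED), mirroring the asymmetry the free side must undo.
 */
static nodemask_t *mpol_nodes_allowed(struct mempolicy *mpol)
{
	mpol->refcnt++;				/* models mpol_get() */
	switch (mpol->mode) {
	case MPOL_BIND:
	case MPOL_INTERLEAVE:
		return &mpol->nodes;		/* points into the policy */
	case MPOL_PREFERRED: {
		nodemask_t *mask = malloc(sizeof(*mask));
		if (mask)
			mask->bits = mpol->nodes.bits;
		return mask;			/* caller-owned copy */
	}
	}
	return NULL;
}

/* "Free" side: free only what the get side allocated, drop the ref. */
static void mpol_nodemask_free(struct mempolicy *mpol,
			       nodemask_t *nodes_allowed)
{
	if (mpol->mode == MPOL_PREFERRED)
		free(nodes_allowed);
	mpol->refcnt--;				/* models mpol_put() */
}
```

The point of keeping both helpers keyed on mpol->mode is that the caller never needs to know which case allocated memory; it just pairs every get with a free.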