From: Lee Schermerhorn
Date: Tue, 16 Jun 2009 09:53:01 -0400
Message-Id: <20090616135301.25248.91276.sendpatchset@lts-notebook>
In-Reply-To: <20090616135228.25248.22018.sendpatchset@lts-notebook>
References: <20090616135228.25248.22018.sendpatchset@lts-notebook>
Subject: [PATCH 3/5] Use per hstate nodes_allowed to constrain huge page allocation
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, Mel Gorman, Nishanth Aravamudan, Adam Litke,
        Andy Whitcroft, eric.whitney@hp.com

[PATCH 3/5] Use per hstate nodes_allowed to constrain huge page allocation

Against: 17may09 mmotm

Select only nodes from the per-hstate nodes_allowed mask when promoting
surplus pages to persistent or when allocating fresh huge pages for the
pool.

Note that alloc_buddy_huge_page() still uses task policy to allocate
surplus huge pages.  This could be changed.

Signed-off-by: Lee Schermerhorn

 mm/hugetlb.c |   23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

Index: linux-2.6.30-rc8-mmotm-090603-1633/mm/hugetlb.c
===================================================================
--- linux-2.6.30-rc8-mmotm-090603-1633.orig/mm/hugetlb.c	2009-06-04 12:59:32.000000000 -0400
+++ linux-2.6.30-rc8-mmotm-090603-1633/mm/hugetlb.c	2009-06-04 12:59:33.000000000 -0400
@@ -637,9 +637,9 @@ static struct page *alloc_fresh_huge_pag
 static int hstate_next_node(struct hstate *h)
 {
 	int next_nid;
-	next_nid = next_node(h->hugetlb_next_nid, node_online_map);
+	next_nid = next_node(h->hugetlb_next_nid, *h->nodes_allowed);
 	if (next_nid == MAX_NUMNODES)
-		next_nid = first_node(node_online_map);
+		next_nid = first_node(*h->nodes_allowed);
 	h->hugetlb_next_nid = next_nid;
 	return next_nid;
 }
@@ -652,6 +652,11 @@ static int alloc_fresh_huge_page(struct
 	int ret = 0;
 
 	start_nid = h->hugetlb_next_nid;
+	/*
+	 * We may have allocated with a different nodes_allowed previously.
+	 */
+	if (!node_isset(start_nid, *h->nodes_allowed))
+		start_nid = hstate_next_node(h);
 
 	do {
 		page = alloc_fresh_huge_page_node(h, h->hugetlb_next_nid);
@@ -1169,20 +1174,28 @@ static inline void try_to_free_low(struc
 
 /*
  * Increment or decrement surplus_huge_pages.  Keep node-specific counters
- * balanced by operating on them in a round-robin fashion.
+ * balanced by operating on them in a round-robin fashion.  Use the
+ * nodes_allowed mask when decreasing surplus pages, as we're "promoting"
+ * them to persistent.  Use node_online_map when incrementing surplus
+ * pages, as we're demoting previously persistent huge pages.
+ * Called holding the hugetlb_lock.
  * Returns 1 if an adjustment was made.
  */
 static int adjust_pool_surplus(struct hstate *h, int delta)
 {
+	nodemask_t *nodemask = &node_online_map;
 	static int prev_nid;
 	int nid = prev_nid;
 	int ret = 0;
 
 	VM_BUG_ON(delta != -1 && delta != 1);
+	if (delta < 0)
+		nodemask = h->nodes_allowed;
+
 	do {
-		nid = next_node(nid, node_online_map);
+		nid = next_node(nid, *nodemask);
 		if (nid == MAX_NUMNODES)
-			nid = first_node(node_online_map);
+			nid = first_node(*nodemask);
 
 		/* To shrink on this node, there must be a surplus page */
 		if (delta < 0 && !h->surplus_huge_pages_node[nid])
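
For readers unfamiliar with the next_node()/first_node() round-robin idiom
that all three hunks lean on, below is a minimal userspace sketch.  It is
not kernel code: the unsigned long mask and the MAX_NUMNODES value stand in
for the kernel's nodemask_t machinery, and the node numbers are illustrative
only.

/*
 * Userspace sketch of the wraparound round-robin node selection used
 * by hstate_next_node() and adjust_pool_surplus().  The unsigned long
 * mask is a stand-in for the kernel's nodemask_t; names and values
 * here are assumptions for illustration, not the kernel API.
 */
#include <stdio.h>

#define MAX_NUMNODES	8

/* Next node id above 'nid' that is set in 'mask', or MAX_NUMNODES if none. */
static int next_node(int nid, unsigned long mask)
{
	int n;

	for (n = nid + 1; n < MAX_NUMNODES; n++)
		if (mask & (1UL << n))
			return n;
	return MAX_NUMNODES;
}

/* Lowest node id set in 'mask'. */
static int first_node(unsigned long mask)
{
	return next_node(-1, mask);
}

int main(void)
{
	unsigned long nodes_allowed = 0x0d;	/* nodes 0, 2, 3 allowed */
	int nid = 0;
	int i;

	for (i = 0; i < 6; i++) {
		nid = next_node(nid, nodes_allowed);
		if (nid == MAX_NUMNODES)	/* ran off the end: wrap */
			nid = first_node(nodes_allowed);
		printf("allocate on node %d\n", nid);
	}
	return 0;	/* prints: 2 3 0 2 3 0 */
}

The wrap test (nid == MAX_NUMNODES, then first_node()) is exactly what both
modified call sites perform; the patch's only change is iterating over
*h->nodes_allowed (or *nodemask) instead of node_online_map.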