From: Mel Gorman <mel@csn.ul.ie>
To: Mel Gorman <mel@csn.ul.ie>,
Linux Memory Management List <linux-mm@kvack.org>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>,
Rik van Riel <riel@redhat.com>,
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
Christoph Lameter <cl@linux-foundation.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Nick Piggin <npiggin@suse.de>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Lin Ming <ming.m.lin@intel.com>,
Zhang Yanmin <yanmin_zhang@linux.intel.com>
Subject: [PATCH 15/20] Do not disable interrupts in free_page_mlock()
Date: Sun, 22 Feb 2009 23:17:24 +0000 [thread overview]
Message-ID: <1235344649-18265-16-git-send-email-mel@csn.ul.ie> (raw)
In-Reply-To: <1235344649-18265-1-git-send-email-mel@csn.ul.ie>
free_page_mlock() tests and clears PG_mlocked. If the bit was set, it disables
interrupts to update counters, and this happens on every page free even though
interrupts are disabled again very shortly afterwards. This is wasteful.
This patch splits what free_page_mlock() does. The bit check is still
made early, but the update of the counters is delayed until interrupts have
been disabled for the free itself. One potential weirdness with this split is
that the counters do not get updated if the bad_page() check is triggered,
but a system showing bad pages is getting screwed already.
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
---
mm/internal.h | 10 ++--------
mm/page_alloc.c | 8 +++++++-
2 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h
index 478223b..b52bf86 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -155,14 +155,8 @@ static inline void mlock_migrate_page(struct page *newpage, struct page *page)
*/
static inline void free_page_mlock(struct page *page)
{
- if (unlikely(TestClearPageMlocked(page))) {
- unsigned long flags;
-
- local_irq_save(flags);
- __dec_zone_page_state(page, NR_MLOCK);
- __count_vm_event(UNEVICTABLE_MLOCKFREED);
- local_irq_restore(flags);
- }
+ __dec_zone_page_state(page, NR_MLOCK);
+ __count_vm_event(UNEVICTABLE_MLOCKFREED);
}
#else /* CONFIG_UNEVICTABLE_LRU */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a9e9466..9adafba 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -501,7 +501,6 @@ static inline void __free_one_page(struct page *page,
static inline int free_pages_check(struct page *page)
{
- free_page_mlock(page);
if (unlikely(page_mapcount(page) |
(page->mapping != NULL) |
(page_count(page) != 0) |
@@ -559,6 +558,7 @@ static void __free_pages_ok(struct page *page, unsigned int order,
unsigned long flags;
int i;
int bad = 0;
+ int clearMlocked = TestClearPageMlocked(page);
for (i = 0 ; i < (1 << order) ; ++i)
bad += free_pages_check(page + i);
@@ -574,6 +574,8 @@ static void __free_pages_ok(struct page *page, unsigned int order,
kernel_map_pages(page, 1 << order, 0);
local_irq_save(flags);
+ if (clearMlocked)
+ free_page_mlock(page);
__count_vm_events(PGFREE, 1 << order);
free_one_page(page_zone(page), page, order, migratetype);
local_irq_restore(flags);
@@ -1023,6 +1025,7 @@ static void free_hot_cold_page(struct page *page, int cold)
struct zone *zone = page_zone(page);
struct per_cpu_pages *pcp;
unsigned long flags;
+ int clearMlocked = TestClearPageMlocked(page);
if (PageAnon(page))
page->mapping = NULL;
@@ -1039,6 +1042,9 @@ static void free_hot_cold_page(struct page *page, int cold)
pcp = &zone_pcp(zone, get_cpu())->pcp;
local_irq_save(flags);
__count_vm_event(PGFREE);
+ if (clearMlocked)
+ free_page_mlock(page);
+
if (cold)
list_add_tail(&page->lru, &pcp->list);
else
--
1.5.6.5
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org