* [PATCH] mm: vmscan: deactivate isolated pages with lru lock released
@ 2012-01-11 12:45 Hillf Danton
2012-01-11 21:29 ` Rik van Riel
` (3 more replies)
0 siblings, 4 replies; 8+ messages in thread
From: Hillf Danton @ 2012-01-11 12:45 UTC (permalink / raw)
To: linux-mm
Cc: KAMEZAWA Hiroyuki, David Rientjes, Andrew Morton, LKML, Hillf Danton
If the lru lock is released proactively, spinners on other CPUs, if any, can
take it and do their work while the isolated pages are deactivated on the
current CPU. There is no risk of a race, since the pages are already queued
on a locally private list.
Signed-off-by: Hillf Danton <dhillf@gmail.com>
---
--- a/mm/vmscan.c Thu Dec 29 20:20:16 2011
+++ b/mm/vmscan.c Wed Jan 11 20:40:40 2012
@@ -1464,6 +1464,7 @@ update_isolated_counts(struct mem_cgroup
struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(mz);
nr_active = clear_active_flags(isolated_list, count);
+ spin_lock_irq(&zone->lru_lock);
__count_vm_events(PGDEACTIVATE, nr_active);
__mod_zone_page_state(zone, NR_ACTIVE_FILE,
@@ -1482,6 +1483,7 @@ update_isolated_counts(struct mem_cgroup
reclaim_stat->recent_scanned[0] += *nr_anon;
reclaim_stat->recent_scanned[1] += *nr_file;
+ spin_unlock_irq(&zone->lru_lock);
}
/*
@@ -1577,15 +1579,13 @@ shrink_inactive_list(unsigned long nr_to
__count_zone_vm_events(PGSCAN_DIRECT, zone,
nr_scanned);
}
+ spin_unlock_irq(&zone->lru_lock);
- if (nr_taken == 0) {
- spin_unlock_irq(&zone->lru_lock);
+ if (nr_taken == 0)
return 0;
- }
update_isolated_counts(mz, sc, &nr_anon, &nr_file, &page_list);
- spin_unlock_irq(&zone->lru_lock);
nr_reclaimed = shrink_page_list(&page_list, mz, sc, priority,
&nr_dirty, &nr_writeback);
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Fight unfair telecom internet charges in Canada: sign http://stopthemeter.ca/
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
* Re: [PATCH] mm: vmscan: deactivate isolated pages with lru lock released
2012-01-11 12:45 [PATCH] mm: vmscan: deactivate isolated pages with lru lock released Hillf Danton
@ 2012-01-11 21:29 ` Rik van Riel
2012-01-11 21:51 ` David Rientjes
` (2 subsequent siblings)
3 siblings, 0 replies; 8+ messages in thread
From: Rik van Riel @ 2012-01-11 21:29 UTC (permalink / raw)
To: Hillf Danton
Cc: linux-mm, KAMEZAWA Hiroyuki, David Rientjes, Andrew Morton, LKML
On 01/11/2012 07:45 AM, Hillf Danton wrote:
> Spinners on other CPUs, if any, could take the lru lock and do their jobs while
> isolated pages are deactivated on the current CPU if the lock is released
> actively. And no risk of race raised as pages are already queued on locally
> private list.
>
>
> Signed-off-by: Hillf Danton<dhillf@gmail.com>
Reviewed-by: Rik van Riel<riel@redhat.com>
* Re: [PATCH] mm: vmscan: deactivate isolated pages with lru lock released
2012-01-11 12:45 [PATCH] mm: vmscan: deactivate isolated pages with lru lock released Hillf Danton
2012-01-11 21:29 ` Rik van Riel
@ 2012-01-11 21:51 ` David Rientjes
2012-01-11 22:33 ` Hugh Dickins
2012-01-12 2:28 ` KAMEZAWA Hiroyuki
3 siblings, 0 replies; 8+ messages in thread
From: David Rientjes @ 2012-01-11 21:51 UTC (permalink / raw)
To: Hillf Danton; +Cc: linux-mm, KAMEZAWA Hiroyuki, Andrew Morton, LKML
On Wed, 11 Jan 2012, Hillf Danton wrote:
> Spinners on other CPUs, if any, could take the lru lock and do their jobs while
> isolated pages are deactivated on the current CPU if the lock is released
> actively. And no risk of race raised as pages are already queued on locally
> private list.
>
>
> Signed-off-by: Hillf Danton <dhillf@gmail.com>
Acked-by: David Rientjes <rientjes@google.com>
* Re: [PATCH] mm: vmscan: deactivate isolated pages with lru lock released
2012-01-11 12:45 [PATCH] mm: vmscan: deactivate isolated pages with lru lock released Hillf Danton
2012-01-11 21:29 ` Rik van Riel
2012-01-11 21:51 ` David Rientjes
@ 2012-01-11 22:33 ` Hugh Dickins
2012-01-12 13:39 ` Hillf Danton
2012-01-12 2:28 ` KAMEZAWA Hiroyuki
3 siblings, 1 reply; 8+ messages in thread
From: Hugh Dickins @ 2012-01-11 22:33 UTC (permalink / raw)
To: Hillf Danton
Cc: linux-mm, KAMEZAWA Hiroyuki, David Rientjes, Andrew Morton, LKML
On Wed, 11 Jan 2012, Hillf Danton wrote:
> Spinners on other CPUs, if any, could take the lru lock and do their jobs while
> isolated pages are deactivated on the current CPU if the lock is released
> actively. And no risk of race raised as pages are already queued on locally
> private list.
You make a good point - except, I'm afraid as usual, I have difficulty
in understanding your comment, in separating how it is before your change
and how it is after your change. Above you're describing how it is after
your change; and it would help if you point out that you're taking the
lock off clear_active_flags(), which goes all the way down the list of
pages we isolated (to a locally private list, yes, important point).
However... this patch is based on Linus's current, and will clash with a
patch of mine presently in akpm's tree - which I'm expecting will go on
to Linus soon, unless Andrew discards it in favour of yours (that might
involve a little unravelling, I didn't look). Among other rearrangements,
I merged the code from clear_active_flags() into update_isolated_counts().
And something that worries me is that you're now dropping the spinlock
and reacquiring it shortly afterwards, just clear_active_flags in between.
That may bounce the lock around more than before, and actually prove worse.
I suspect that your patch can be improved, to take away that worry.
Why do we need to take the lock again? Only to update reclaim_stat:
for the other stats, interrupts disabled is certainly good enough,
and more research might show that preemption disabled would be enough.
get_scan_count() is called at the (re)start of shrink_mem_cgroup_zone(),
before it goes down to do shrink_list()s: I think it would not be harmed
at all if we delayed updating reclaim_stat->recent_scanned until the
next time we take the lock, lower down.
Other things that strike me, looking here again: isn't it the case that
update_isolated_counts() is actually called either for file or for anon,
but never for both? We might be able to make savings from that, perhaps
enlist help from isolate_lru_pages() to avoid having to go down the list
again to clear active flags.
I certainly do have more changes to make around here, in changing the
locking over to be per-memcg (and the locking on reclaim_stat is something
I have not got quite right yet): but if you've a good patch to reduce the
locking, I should rebase on top of yours.
Hugh
>
>
> Signed-off-by: Hillf Danton <dhillf@gmail.com>
> ---
>
> --- a/mm/vmscan.c Thu Dec 29 20:20:16 2011
> +++ b/mm/vmscan.c Wed Jan 11 20:40:40 2012
> @@ -1464,6 +1464,7 @@ update_isolated_counts(struct mem_cgroup
> struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(mz);
>
> nr_active = clear_active_flags(isolated_list, count);
> + spin_lock_irq(&zone->lru_lock);
> __count_vm_events(PGDEACTIVATE, nr_active);
>
> __mod_zone_page_state(zone, NR_ACTIVE_FILE,
> @@ -1482,6 +1483,7 @@ update_isolated_counts(struct mem_cgroup
>
> reclaim_stat->recent_scanned[0] += *nr_anon;
> reclaim_stat->recent_scanned[1] += *nr_file;
> + spin_unlock_irq(&zone->lru_lock);
> }
>
> /*
> @@ -1577,15 +1579,13 @@ shrink_inactive_list(unsigned long nr_to
> __count_zone_vm_events(PGSCAN_DIRECT, zone,
> nr_scanned);
> }
> + spin_unlock_irq(&zone->lru_lock);
>
> - if (nr_taken == 0) {
> - spin_unlock_irq(&zone->lru_lock);
> + if (nr_taken == 0)
> return 0;
> - }
>
> update_isolated_counts(mz, sc, &nr_anon, &nr_file, &page_list);
>
> - spin_unlock_irq(&zone->lru_lock);
>
> nr_reclaimed = shrink_page_list(&page_list, mz, sc, priority,
> &nr_dirty, &nr_writeback);
>
* Re: [PATCH] mm: vmscan: deactivate isolated pages with lru lock released
2012-01-11 12:45 [PATCH] mm: vmscan: deactivate isolated pages with lru lock released Hillf Danton
` (2 preceding siblings ...)
2012-01-11 22:33 ` Hugh Dickins
@ 2012-01-12 2:28 ` KAMEZAWA Hiroyuki
2012-01-12 13:32 ` Hillf Danton
3 siblings, 1 reply; 8+ messages in thread
From: KAMEZAWA Hiroyuki @ 2012-01-12 2:28 UTC (permalink / raw)
To: Hillf Danton; +Cc: linux-mm, David Rientjes, Andrew Morton, LKML
On Wed, 11 Jan 2012 20:45:07 +0800
Hillf Danton <dhillf@gmail.com> wrote:
> Spinners on other CPUs, if any, could take the lru lock and do their jobs while
> isolated pages are deactivated on the current CPU if the lock is released
> actively. And no risk of race raised as pages are already queued on locally
> private list.
>
>
> Signed-off-by: Hillf Danton <dhillf@gmail.com>
Doesn't this increase the number of lock/unlock operations?
Hmm, isn't it better to integrate clear_active_flags() into isolate_pages()?
Then we don't need the extra list scan.
Thanks,
-Kame
* Re: [PATCH] mm: vmscan: deactivate isolated pages with lru lock released
2012-01-12 2:28 ` KAMEZAWA Hiroyuki
@ 2012-01-12 13:32 ` Hillf Danton
0 siblings, 0 replies; 8+ messages in thread
From: Hillf Danton @ 2012-01-12 13:32 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki; +Cc: linux-mm, David Rientjes, Andrew Morton, LKML
On Thu, Jan 12, 2012 at 10:28 AM, KAMEZAWA Hiroyuki
<kamezawa.hiroyu@jp.fujitsu.com> wrote:
> On Wed, 11 Jan 2012 20:45:07 +0800
> Hillf Danton <dhillf@gmail.com> wrote:
>
>> Spinners on other CPUs, if any, could take the lru lock and do their jobs while
>> isolated pages are deactivated on the current CPU if the lock is released
>> actively. And no risk of race raised as pages are already queued on locally
>> private list.
>>
>>
>> Signed-off-by: Hillf Danton <dhillf@gmail.com>
>
> Doesn't this increase the number of lock/unlock ?
> Hmm, isn't it better to integrate clear_active_flags to isolate_pages() ?
> Then we don't need list scan.
>
I will look at it soon.
Thanks,
Hillf
* Re: [PATCH] mm: vmscan: deactivate isolated pages with lru lock released
2012-01-11 22:33 ` Hugh Dickins
@ 2012-01-12 13:39 ` Hillf Danton
2012-01-12 19:51 ` Hugh Dickins
0 siblings, 1 reply; 8+ messages in thread
From: Hillf Danton @ 2012-01-12 13:39 UTC (permalink / raw)
To: Hugh Dickins
Cc: linux-mm, KAMEZAWA Hiroyuki, David Rientjes, Andrew Morton, LKML
Hi Hugh
Thanks for your comment.
On Thu, Jan 12, 2012 at 6:33 AM, Hugh Dickins <hughd@google.com> wrote:
> On Wed, 11 Jan 2012, Hillf Danton wrote:
>
>> Spinners on other CPUs, if any, could take the lru lock and do their jobs while
>> isolated pages are deactivated on the current CPU if the lock is released
>> actively. And no risk of race raised as pages are already queued on locally
>> private list.
>
> You make a good point - except, I'm afraid as usual, I have difficulty
> in understanding your comment, in separating how it is before your change
> and how it is after your change. Above you're describing how it is after
> your change; and it would help if you point out that you're taking the
> lock off clear_active_flags(), which goes all the way down the list of
> pages we isolated (to a locally private list, yes, important point).
>
> However... this patch is based on Linus's current, and will clash with a
> patch of mine presently in akpm's tree - which I'm expecting will go on
> to Linus soon, unless Andrew discards it in favour of yours (that might
> involve a little unravelling, I didn't look). Among other rearrangements,
> I merged the code from clear_active_flags() into update_isolated_counts().
>
> And something that worries me is that you're now dropping the spinlock
> and reacquiring it shortly afterwards, just clear_active_flags in between.
> That may bounce the lock around more than before, and actually prove worse.
>
Yes, the patch does change the locking behavior, and if the lock is already
hot, reacquiring it may only succeed by luck because of that bounce (in your
term). The same lock is also taken when isolating pages for migration, and I
am currently attempting to copy that locking mode over to reclaim, on the
assumption that one bounce could be cured with another 8-), while preparing
for the incoming complaints.
Even though it is a hot lock, a tiny window remains open for a tiny tackle,
for example the attached diff.
--- a/mm/vmscan.c Thu Dec 29 20:20:16 2011
+++ b/mm/vmscan.c Thu Jan 12 20:48:42 2012
@@ -1032,6 +1032,12 @@ keep_lumpy:
return nr_reclaimed;
}
+static bool is_all_lru_mode(isolate_mode_t mode)
+{
+ return (mode & (ISOLATE_ACTIVE|ISOLATE_INACTIVE)) ==
+ (ISOLATE_ACTIVE|ISOLATE_INACTIVE);
+}
+
/*
* Attempt to remove the specified page from its LRU. Only take this page
* if it is of the appropriate PageActive status. Pages which are being
@@ -1051,8 +1057,7 @@ int __isolate_lru_page(struct page *page
if (!PageLRU(page))
return ret;
- all_lru_mode = (mode & (ISOLATE_ACTIVE|ISOLATE_INACTIVE)) ==
- (ISOLATE_ACTIVE|ISOLATE_INACTIVE);
+ all_lru_mode = is_all_lru_mode(mode);
/*
* When checking the active state, we need to be sure we are
@@ -1155,6 +1160,13 @@ static unsigned long isolate_lru_pages(u
unsigned long nr_lumpy_dirty = 0;
unsigned long nr_lumpy_failed = 0;
unsigned long scan;
+
+ /* Try to save a few cycles while lru_lock is held and irqs are off:
+ * don't bother attempting pfn-based isolation if only pages on the
+ * given src list can be taken.
+ */
+ if (order && !is_all_lru_mode(mode))
+ order = 0;
for (scan = 0; scan < nr_to_scan && !list_empty(src); scan++) {
struct page *page;
--
> I suspect that your patch can be improved, to take away that worry.
> Why do we need to take the lock again? Only to update reclaim_stat:
> for the other stats, interrupts disabled is certainly good enough,
> and more research might show that preemption disabled would be enough.
>
> get_scan_count() is called at the (re)start of shrink_mem_cgroup_zone(),
> before it goes down to do shrink_list()s: I think it would not be harmed
> at all if we delayed updating reclaim_stat->recent_scanned until the
> next time we take the lock, lower down.
>
I don't see how to handle the many __mod_zone_page_state() calls and similar
without lock protection 8-/ I will try to defer updating reclaim_stat soon.
> Other things that strike me, looking here again: isn't it the case that
> update_isolated_counts() is actually called either for file or for anon,
> but never for both?
No, see the above diff please 8-)
> We might be able to make savings from that, perhaps
> enlist help from isolate_lru_pages() to avoid having to go down the list
> again to clear active flags.
>
Best regards
Hillf
* Re: [PATCH] mm: vmscan: deactivate isolated pages with lru lock released
2012-01-12 13:39 ` Hillf Danton
@ 2012-01-12 19:51 ` Hugh Dickins
0 siblings, 0 replies; 8+ messages in thread
From: Hugh Dickins @ 2012-01-12 19:51 UTC (permalink / raw)
To: Hillf Danton
Cc: linux-mm, KAMEZAWA Hiroyuki, David Rientjes, Andrew Morton, LKML
On Thu, 12 Jan 2012, Hillf Danton wrote:
> On Thu, Jan 12, 2012 at 6:33 AM, Hugh Dickins <hughd@google.com> wrote:
> > On Wed, 11 Jan 2012, Hillf Danton wrote:
>
> > I suspect that your patch can be improved, to take away that worry.
> > Why do we need to take the lock again? Only to update reclaim_stat:
> > for the other stats, interrupts disabled is certainly good enough,
> > and more research might show that preemption disabled would be enough.
> >
> > get_scan_count() is called at the (re)start of shrink_mem_cgroup_zone(),
> > before it goes down to do shrink_list()s: I think it would not be harmed
> > at all if we delayed updating reclaim_stat->recent_scanned until the
> > next time we take the lock, lower down.
> >
>
> Dunno how to handle the tons of __mod_zone_page_state() or similar without lock
> protection 8-/ try to deffer updating reclaim_stat soon.
Aren't the __mod_zone_page_state() counts per-cpu? Although we very
often update them while holding the zone spinlock, that's because we
happen to be holding it already, and it prevents preemption to another
cpu, without needing to invoke the more expensive mod_zone_page_state().
Similarly __count_vm_events() is per-cpu (and no zone lock would help it).
>
> > Other things that strike me, looking here again: isn't it the case that
> > update_isolated_counts() is actually called either for file or for anon,
> > but never for both?
>
> No, see the above diff please 8-)
I think you are obliquely reminding me that lumpy reclaim will take pages
from wherever, so that way some anon pages will sneak into the file lru
reclaim and some file pages into the anon lru reclaim. Right? Whereas
move_active_pages_to_lru() doesn't have that problem, because
shrink_active_list() uses a stricter setting of reclaim_mode. Hmm,
more simplification that can be done once lumpy reclaim is removed.
(It's the 3.2 tree with its naming that I'm examining at this moment.)
Hugh