linux-mm.kvack.org archive mirror
From: "Iram Shahzad" <iram.shahzad@jp.fujitsu.com>
To: Wu Fengguang <fengguang.wu@intel.com>,
	Minchan Kim <minchan.kim@gmail.com>
Cc: Mel Gorman <mel@csn.ul.ie>,
	Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org,
	KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Subject: Re: compaction: trying to understand the code
Date: Tue, 24 Aug 2010 14:07:02 +0900	[thread overview]
Message-ID: <8E31CE28A1354C43BBAD0BDEFA10494E@rainbow> (raw)
In-Reply-To: <20100824002753.GB6568@localhost>

[-- Attachment #1: Type: text/plain, Size: 806 bytes --]

> One question is: why doesn't kswapd proceed after isolating all the pages?
> If it were done with the isolated pages, we would see the inactive_anon
> numbers growing.
> 
> /proc/vmstat should give more clues about any page reclaim
> activity. Iram, would you please post it?

I am not sure which point in time you are interested in, so I am
attaching /proc/vmstat logs from three points (a small sketch for
diffing two snapshots follows the list below).

too_many_isolated_vmstat_before_frag.txt
   Taken before I ran my test app, which attempts to cause
   fragmentation.
too_many_isolated_vmstat_before_compaction.txt
   Taken after running the test app and before running
   compaction.
too_many_isolated_vmstat_during_compaction.txt
   Taken a few minutes after starting compaction. To capture
   this one I ran compaction in the background.
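
For reference, here is a minimal sketch (not part of the original
exchange) of how two of these snapshots could be diffed to spot the
counters that moved, e.g. nr_isolated_anon, nr_isolated_file and the
compact_* counters. The script name and the example invocation are
only illustrative assumptions.

  #!/usr/bin/env python3
  # vmstat_diff.py -- illustrative sketch: diff two saved /proc/vmstat
  # snapshots and print every counter whose value changed.
  import sys

  def read_vmstat(path):
      """Parse a saved /proc/vmstat dump into {counter: value}."""
      counters = {}
      with open(path) as f:
          for line in f:
              name, value = line.split()
              counters[name] = int(value)
      return counters

  def diff_vmstat(before_path, after_path):
      before = read_vmstat(before_path)
      after = read_vmstat(after_path)
      # Only report counters present in both snapshots whose value changed.
      for name in sorted(before.keys() & after.keys()):
          delta = after[name] - before[name]
          if delta:
              print(f"{name:32s} {before[name]:>10d} -> {after[name]:>10d} ({delta:+d})")

  if __name__ == "__main__":
      # e.g. python3 vmstat_diff.py \
      #        too_many_isolated_vmstat_before_compaction.txt \
      #        too_many_isolated_vmstat_during_compaction.txt
      diff_vmstat(sys.argv[1], sys.argv[2])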

Thanks
Iram

[-- Attachment #2: too_many_isolated_vmstat_before_frag.txt --]
[-- Type: text/plain, Size: 1269 bytes --]

nr_free_pages 79896
nr_inactive_anon 0
nr_active_anon 14688
nr_inactive_file 10444
nr_active_file 2718
nr_unevictable 0
nr_mlock 0
nr_anon_pages 12341
nr_mapped 9430
nr_file_pages 15511
nr_dirty 0
nr_writeback 0
nr_slab_reclaimable 528
nr_slab_unreclaimable 1073
nr_page_table_pages 1479
nr_kernel_stack 235
nr_unstable 0
nr_bounce 0
nr_vmscan_write 0
nr_writeback_temp 0
nr_isolated_anon 0
nr_isolated_file 0
nr_shmem 2349
pgpgin 4
pgpgout 0
pswpin 0
pswpout 0
pgalloc_normal 54208
pgalloc_high 0
pgalloc_movable 0
pgfree 134220
pgactivate 2718
pgdeactivate 0
pgfault 88952
pgmajfault 555
pgrefill_normal 0
pgrefill_high 0
pgrefill_movable 0
pgsteal_normal 0
pgsteal_high 0
pgsteal_movable 0
pgscan_kswapd_normal 0
pgscan_kswapd_high 0
pgscan_kswapd_movable 0
pgscan_direct_normal 0
pgscan_direct_high 0
pgscan_direct_movable 0
pginodesteal 0
slabs_scanned 0
kswapd_steal 0
kswapd_inodesteal 0
pageoutrun 0
allocstall 0
pgrotated 0
compact_blocks_moved 0
compact_pages_moved 0
compact_pagemigrate_failed 0
compact_stall 0
compact_fail 0
compact_success 0
unevictable_pgs_culled 0
unevictable_pgs_scanned 0
unevictable_pgs_rescued 0
unevictable_pgs_mlocked 0
unevictable_pgs_munlocked 0
unevictable_pgs_cleared 0
unevictable_pgs_stranded 0
unevictable_pgs_mlockfreed 0

[-- Attachment #3: too_many_isolated_vmstat_before_compaction.txt --]
[-- Type: text/plain, Size: 1271 bytes --]

nr_free_pages 54098
nr_inactive_anon 0
nr_active_anon 40354
nr_inactive_file 10433
nr_active_file 2729
nr_unevictable 0
nr_mlock 0
nr_anon_pages 38007
nr_mapped 9469
nr_file_pages 15511
nr_dirty 0
nr_writeback 0
nr_slab_reclaimable 528
nr_slab_unreclaimable 1070
nr_page_table_pages 1582
nr_kernel_stack 236
nr_unstable 0
nr_bounce 0
nr_vmscan_write 0
nr_writeback_temp 0
nr_isolated_anon 0
nr_isolated_file 0
nr_shmem 2349
pgpgin 4
pgpgout 0
pswpin 0
pswpout 0
pgalloc_normal 105927
pgalloc_high 0
pgalloc_movable 0
pgfree 160167
pgactivate 2729
pgdeactivate 0
pgfault 141220
pgmajfault 555
pgrefill_normal 0
pgrefill_high 0
pgrefill_movable 0
pgsteal_normal 0
pgsteal_high 0
pgsteal_movable 0
pgscan_kswapd_normal 0
pgscan_kswapd_high 0
pgscan_kswapd_movable 0
pgscan_direct_normal 0
pgscan_direct_high 0
pgscan_direct_movable 0
pginodesteal 0
slabs_scanned 0
kswapd_steal 0
kswapd_inodesteal 0
pageoutrun 0
allocstall 0
pgrotated 0
compact_blocks_moved 0
compact_pages_moved 0
compact_pagemigrate_failed 0
compact_stall 0
compact_fail 0
compact_success 0
unevictable_pgs_culled 0
unevictable_pgs_scanned 0
unevictable_pgs_rescued 0
unevictable_pgs_mlocked 0
unevictable_pgs_munlocked 0
unevictable_pgs_cleared 0
unevictable_pgs_stranded 0
unevictable_pgs_mlockfreed 0

[-- Attachment #4: too_many_isolated_vmstat_during_compaction.txt --]
[-- Type: text/plain, Size: 1283 bytes --]

nr_free_pages 53673
nr_inactive_anon 0
nr_active_anon 40498
nr_inactive_file 10427
nr_active_file 2735
nr_unevictable 0
nr_mlock 0
nr_anon_pages 38151
nr_mapped 9469
nr_file_pages 15511
nr_dirty 0
nr_writeback 0
nr_slab_reclaimable 536
nr_slab_unreclaimable 1070
nr_page_table_pages 1588
nr_kernel_stack 237
nr_unstable 0
nr_bounce 0
nr_vmscan_write 0
nr_writeback_temp 0
nr_isolated_anon 8592
nr_isolated_file 1862
nr_shmem 2349
pgpgin 4
pgpgout 0
pswpin 0
pswpout 0
pgalloc_normal 117872
pgalloc_high 0
pgalloc_movable 0
pgfree 182402
pgactivate 2735
pgdeactivate 0
pgfault 182499
pgmajfault 555
pgrefill_normal 0
pgrefill_high 0
pgrefill_movable 0
pgsteal_normal 0
pgsteal_high 0
pgsteal_movable 0
pgscan_kswapd_normal 0
pgscan_kswapd_high 0
pgscan_kswapd_movable 0
pgscan_direct_normal 0
pgscan_direct_high 0
pgscan_direct_movable 0
pginodesteal 0
slabs_scanned 0
kswapd_steal 0
kswapd_inodesteal 0
pageoutrun 0
allocstall 0
pgrotated 0
compact_blocks_moved 327
compact_pages_moved 10454
compact_pagemigrate_failed 0
compact_stall 0
compact_fail 0
compact_success 0
unevictable_pgs_culled 0
unevictable_pgs_scanned 0
unevictable_pgs_rescued 0
unevictable_pgs_mlocked 0
unevictable_pgs_munlocked 0
unevictable_pgs_cleared 0
unevictable_pgs_stranded 0
unevictable_pgs_mlockfreed 0

Thread overview: 33+ messages
2010-08-17 11:08 Iram Shahzad
2010-08-17 11:10 ` Mel Gorman
2010-08-18  8:19   ` Iram Shahzad
2010-08-18 15:41     ` Wu Fengguang
2010-08-19  7:09       ` Iram Shahzad
2010-08-19  7:45         ` Wu Fengguang
2010-08-19  7:46         ` Mel Gorman
2010-08-19  8:08           ` Wu Fengguang
2010-08-19  8:15             ` Mel Gorman
2010-08-19  8:29               ` Wu Fengguang
2010-08-20  5:45           ` Iram Shahzad
2010-08-20  5:50             ` Wu Fengguang
2010-08-20  6:13               ` Iram Shahzad
2010-08-19 16:00         ` Minchan Kim
2010-08-20  5:31           ` Iram Shahzad
2010-08-20  5:34             ` Wu Fengguang
2010-08-20  9:35               ` Mel Gorman
2010-08-20 10:22                 ` Minchan Kim
2010-08-22 15:31                   ` Minchan Kim
2010-08-22 23:23                     ` Wu Fengguang
2010-08-23  1:58                       ` Minchan Kim
2010-08-23  3:03                         ` Iram Shahzad
2010-08-23  9:10                           ` Minchan Kim
2010-08-26  8:51                             ` Mel Gorman
2010-08-23  7:18                       ` Mel Gorman
2010-08-23 17:14                       ` Minchan Kim
2010-08-24  0:27                         ` Wu Fengguang
2010-08-24  5:07                           ` Iram Shahzad [this message]
2010-08-24  6:52                             ` Minchan Kim
2010-08-26  8:05                               ` Iram Shahzad
2010-08-23  7:16                     ` Mel Gorman
2010-08-23  9:07                       ` Minchan Kim
2010-08-20 10:23                 ` Wu Fengguang
