Subject: Re: 2.6.12-mm1 & 2K lun testing (JFS problem ?)
From: Badari Pulavarty
To: Dave Kleikamp
Cc: Andrew Morton, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Date: Wed, 22 Jun 2005 09:56:03 -0700
Message-Id: <1119459363.13376.1.camel@dyn9047017102.beaverton.ibm.com>
In-Reply-To: <1119448252.9262.12.camel@localhost>
References: <1118856977.4301.406.camel@dyn9047017072.beaverton.ibm.com>
	 <20050616002451.01f7e9ed.akpm@osdl.org>
	 <1118951458.4301.478.camel@dyn9047017072.beaverton.ibm.com>
	 <20050616133730.1924fca3.akpm@osdl.org>
	 <1118965381.4301.488.camel@dyn9047017072.beaverton.ibm.com>
	 <20050616175130.22572451.akpm@osdl.org>
	 <42B2E7D2.9080705@us.ibm.com>
	 <20050617141331.078e5f8f.akpm@osdl.org>
	 <1119400494.4620.33.camel@dyn9047017102.beaverton.ibm.com>
	 <1119448252.9262.12.camel@localhost>

I need to re-create the problem to capture the stats.  I don't see any
stacks for the jfsCommit, jfsSync, or jfsIO threads in the sysrq-t
output (in /var/log/messages).  Hmm.  Let me re-create the problem and
capture them.

Thanks,
Badari

On Wed, 2005-06-22 at 08:50 -0500, Dave Kleikamp wrote:
> On Tue, 2005-06-21 at 17:34 -0700, Badari Pulavarty wrote:
> > Hi Andrew & Shaggy,
> >
> > Here is a summary of the 2K lun testing on 2.6.12-mm1.
> >
> > When I tune the dirty ratios and the CFQ queue depth, things
> > seem to run fine:
> >
> > echo 20 > /proc/sys/vm/dirty_ratio
> > echo 20 > /proc/sys/vm/overcommit_ratio
> > echo 4 > /sys/block//queue/nr_requests
> >
> > But I am running into a JFS problem.  I can't kill my
> > "dd" processes.
>
> Assuming you built the kernel with CONFIG_JFS_STATISTICS, can you send
> me the contents of /proc/fs/jfs/txstats?
>
> > They all get stuck in the trace below.
> >
> > (I am going to try ext3.)
> >
> > dd            D 0000000000000000     0 12943      1         12939 (NOTLB)
> > ffff81010612d8f8 0000000000000086 ffff81019677a380 000000000003ffff
> > 00000000d5b95298 ffff81010612d918 0000000000000003 ffff810169f63880
> > 00000076d9f1ea00 0000000000000001
> > Call Trace: {submit_bio+223} {txBegin+625}
>
> Looks like txBegin is the problem.  Probably ran out of txBlocks.  Maybe
> a stack trace of the jfsCommit, jfsIO, and jfsSync threads would be
> useful too.
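
(Rough sketch of why every writer would pile up in D state once the pool
of transaction blocks is empty: each new transaction sleeps
uninterruptibly until a commit returns a block to the pool.  The names
below are made up for illustration only; this is not the real
fs/jfs/jfs_txnmgr.c code.)

/*
 * Illustration only, hypothetical names -- not the actual JFS code.
 * A fixed pool of transaction blocks; when the pool is empty, callers
 * sleep uninterruptibly (hence the D state in sysrq-t) until a commit
 * returns a block.
 */
#include <linux/spinlock.h>
#include <linux/wait.h>

#define NR_TBLOCKS	512		/* hypothetical pool size */

static DEFINE_SPINLOCK(tb_lock);
static DECLARE_WAIT_QUEUE_HEAD(tb_wait);
static int tb_free = NR_TBLOCKS;

static void tb_begin(void)		/* stand-in for txBegin() */
{
	spin_lock(&tb_lock);
	while (tb_free == 0) {
		spin_unlock(&tb_lock);
		/* uninterruptible sleep: this is where the dd's sit */
		wait_event(tb_wait, tb_free > 0);
		spin_lock(&tb_lock);
	}
	tb_free--;
	spin_unlock(&tb_lock);
}

static void tb_end(void)		/* stand-in for the commit side */
{
	spin_lock(&tb_lock);
	tb_free++;
	spin_unlock(&tb_lock);
	wake_up(&tb_wait);
}

Presumably that is why the jfsCommit/jfsIO/jfsSync stacks matter: if the
commit side is itself stuck, nothing ever returns blocks to the pool and
the writers never wake up.
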
> >        {default_wake_function+0}
> >        {default_wake_function+0}
> >        {jfs_commit_inode+155}
> >        {jfs_write_inode+58}
> >        {__writeback_single_inode+551}
> >        {jfs_get_blocks+521}
> >        {find_get_page+92}
> >        {__find_get_block_slow+85}
> >        {generic_sync_sb_inodes+524}
> >        {writeback_inodes+125}
> >        {balance_dirty_pages_ratelimited+228}
> >        {generic_file_buffered_write+1221}
> >        {current_fs_time+85}
> >        {__mark_inode_dirty+52}
> >        {inode_update_time+188}
> >        {__generic_file_aio_write_nolock+938}
> >        {unmap_vmas+965}
> >        {__generic_file_write_nolock+158}
> >        {zeromap_page_range+990}
> >        {autoremove_wake_function+0}
> >        {__up_read+33}
> >        {generic_file_write+101}
> >        {vfs_write+233}
> >        {sys_write+83}
> >        {system_call+126}
> >
> > Thanks,
> > Badari

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: aart@kvack.org