Date: Wed, 13 Oct 2010 14:07:33 +1100
From: Dave Chinner
Subject: Re: [PATCH 00/17] [RFC] soft and dynamic dirty throttling limits
Message-ID: <20101013030733.GV4681@dastard>
References: <20100912154945.758129106@intel.com> <20101012141716.GA26702@infradead.org>
In-Reply-To: <20101012141716.GA26702@infradead.org>
To: Christoph Hellwig
Cc: Wu Fengguang, linux-mm, LKML, Andrew Morton, Theodore Ts'o, Jan Kara,
    Peter Zijlstra, Mel Gorman, Rik van Riel, KOSAKI Motohiro, Chris Mason,
    Christoph Hellwig, Li Shaohua

On Tue, Oct 12, 2010 at 10:17:16AM -0400, Christoph Hellwig wrote:
> Wu, what's the state of this series?  It looks like we'll need it
> rather sooner than later - try to get at least the preparations in
> ASAP would be really helpful.

Not ready in its current form. This load (creating millions of 1 byte
files in parallel):

$ /usr/bin/time ./fs_mark -D 10000 -S0 -n 100000 -s 1 -L 63 \
>       -d /mnt/scratch/0 -d /mnt/scratch/1 \
>       -d /mnt/scratch/2 -d /mnt/scratch/3 \
>       -d /mnt/scratch/4 -d /mnt/scratch/5 \
>       -d /mnt/scratch/6 -d /mnt/scratch/7

locks up all the fs_mark processes once the inode cache fills memory.
They spin in traces like the following and make no further progress:

[ 2601.452017] fs_mark         R  running task        0  2303   2235 0x00000008
[ 2601.452017]  ffff8801188f7878 ffffffff8103e2c9 ffff8801188f78a8 0000000000000000
[ 2601.452017]  0000000000000002 ffff8801129e21c0 ffff880002fd44c0 0000000000000000
[ 2601.452017]  ffff8801188f78b8 ffffffff810a9a08 ffff8801188f78e8 ffffffff810a98e5
[ 2601.452017] Call Trace:
[ 2601.452017]  [] ? kvm_clock_read+0x1c/0x20
[ 2601.452017]  [] ? sched_clock+0x9/0x10
[ 2601.452017]  [] ? sched_clock_local+0x25/0x90
[ 2601.452017]  [] ? __lock_acquire+0x330/0x14d0
[ 2601.452017]  [] ? local_clock+0x34/0x80
[ 2601.452017]  [] ? pvclock_clocksource_read+0x58/0xd0
[ 2601.452017]  [] ? pvclock_clocksource_read+0x58/0xd0
[ 2601.452017]  [] ? kvm_clock_read+0x1c/0x20
[ 2601.452017]  [] ? sched_clock+0x9/0x10
[ 2601.452017]  [] ? lock_acquire+0xb4/0x140
[ 2601.452017]  [] ? sched_clock+0x9/0x10
[ 2601.452017]  [] ? sched_clock_local+0x25/0x90
[ 2601.452017]  [] ? prop_get_global+0x32/0x50
[ 2601.452017]  [] ? prop_fraction_percpu+0x30/0xa0
[ 2601.452017]  [] ? bdi_dirty_limit+0x9b/0xe0
[ 2601.452017]  [] ? balance_dirty_pages_ratelimited_nr+0x178/0x580
[ 2601.452017]  [] ? _raw_spin_unlock+0x2b/0x40
[ 2601.452017]  [] ? __mark_inode_dirty+0xc5/0x230
[ 2601.452017]  [] ? iov_iter_copy_from_user_atomic+0x95/0x170
[ 2601.452017]  [] ? generic_file_buffered_write+0x1cc/0x270
[ 2601.452017]  [] ? xfs_file_aio_write+0x79f/0xaf0
[ 2601.452017]  [] ? kvm_clock_read+0x1c/0x20
[ 2601.452017]  [] ? kvm_clock_read+0x1c/0x20
[ 2601.452017]  [] ? sched_clock+0x9/0x10
[ 2601.452017]  [] ? sched_clock_local+0x25/0x90
[ 2601.452017]  [] ? do_sync_write+0xda/0x120
[ 2601.452017]  [] ? might_fault+0x5c/0xb0
[ 2601.452017]  [] ? security_file_permission+0x1f/0x80
[ 2601.452017]  [] ? vfs_write+0xc8/0x180
[ 2601.452017]  [] ? sys_write+0x54/0x90
[ 2601.452017]  [] ? system_call_fastpath+0x16/0x1b

This is on an 8p/4GB RAM VM.
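For reference, a minimal wrapper to reproduce the setup might look like
the sketch below. The scratch device path, mount point and fs_mark
binary location are assumptions; the fs_mark invocation itself is
exactly the one quoted above, and XFS is inferred from xfs_file_aio_write
in the trace.

#!/bin/sh
# Hypothetical reproducer wrapper -- SCRATCH_DEV, SCRATCH_MNT and the
# ./fs_mark path are assumptions, not from the original report.
SCRATCH_DEV=/dev/vdb
SCRATCH_MNT=/mnt/scratch

mkfs.xfs -f $SCRATCH_DEV
mkdir -p $SCRATCH_MNT
mount $SCRATCH_DEV $SCRATCH_MNT

# one target directory per fs_mark thread, as in the command above
for i in 0 1 2 3 4 5 6 7; do
        mkdir -p $SCRATCH_MNT/$i
done

/usr/bin/time ./fs_mark -D 10000 -S0 -n 100000 -s 1 -L 63 \
        -d $SCRATCH_MNT/0 -d $SCRATCH_MNT/1 \
        -d $SCRATCH_MNT/2 -d $SCRATCH_MNT/3 \
        -d $SCRATCH_MNT/4 -d $SCRATCH_MNT/5 \
        -d $SCRATCH_MNT/6 -d $SCRATCH_MNT/7

# If the lockup triggers, the fs_mark processes stop making progress;
# dmesg or sysrq-t output should show traces like the one above.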
FWIW, this one test now has a proven record of exposing writeback, VM
and filesystem regressions, so I'd suggest that anyone doing any sort
of work that affects writeback adds it to their test matrix....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com