Date: Tue, 29 Nov 2011 16:20:14 -0800
From: Andrew Morton
To: Johannes Weiner
Cc: Mel Gorman, Rik van Riel, Minchan Kim, Michal Hocko,
    Christoph Hellwig, Wu Fengguang, Dave Chinner, Jan Kara,
    Shaohua Li, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: Re: [patch 1/5] mm: exclude reserved pages from dirtyable memory
Message-Id: <20111129162014.aa290174.akpm@linux-foundation.org>
In-Reply-To: <1322055258-3254-2-git-send-email-hannes@cmpxchg.org>
References: <1322055258-3254-1-git-send-email-hannes@cmpxchg.org>
            <1322055258-3254-2-git-send-email-hannes@cmpxchg.org>

On Wed, 23 Nov 2011 14:34:14 +0100 Johannes Weiner wrote:

> From: Johannes Weiner
>
> The amount of dirtyable pages should not include the full number of
> free pages: there is a number of reserved pages that the page
> allocator and kswapd always try to keep free.
>
> The closer (reclaimable pages - dirty pages) is to the number of
> reserved pages, the more likely it becomes for reclaim to run into
> dirty pages:
>
>        +----------+ ---
>        |   anon   |  |
>        +----------+  |
>        |          |  |
>        |          |  -- dirty limit new    -- flusher new
>        |   file   |  |                     |
>        |          |  |                     |
>        |          |  -- dirty limit old    -- flusher old
>        |          |  |
>        +----------+  --- reclaim
>        | reserved |
>        +----------+
>        |  kernel  |
>        +----------+
>
> This patch introduces a per-zone dirty reserve that takes both the
> lowmem reserve as well as the high watermark of the zone into account,
> and a global sum of those per-zone values that is subtracted from the
> global amount of dirtyable pages.  The lowmem reserve is unavailable
> to page cache allocations and kswapd tries to keep the high watermark
> free.  We don't want to end up in a situation where reclaim has to
> clean pages in order to balance zones.
>
> Not treating reserved pages as dirtyable on a global level is only a
> conceptual fix.  In reality, dirty pages are not distributed equally
> across zones and reclaim runs into dirty pages on a regular basis.
>
> But it is important to get this right before tackling the problem on a
> per-zone level, where the distance between reclaim and the dirty pages
> is mostly much smaller in absolute numbers.
>
> ...
>
> --- a/mm/page-writeback.c
> +++ b/mm/page-writeback.c
> @@ -327,7 +327,8 @@ static unsigned long highmem_dirtyable_memory(unsigned long total)
>  			&NODE_DATA(node)->node_zones[ZONE_HIGHMEM];
>
>  		x += zone_page_state(z, NR_FREE_PAGES) +
> -		     zone_reclaimable_pages(z);
> +		     zone_reclaimable_pages(z) -
> +		     zone->dirty_balance_reserve;

Doesn't compile.  s/zone/z/.

Which makes me suspect it wasn't tested on a highmem box.  This is
rather worrisome, as highmem machines tend to have acute and unique
zone balancing issues.
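
For reference, a minimal sketch of how that loop body reads with the
s/zone/z/ substitution applied; the surrounding lines are reconstructed
from the quoted hunk and are an approximation, not the exact code in
the tree:

	for_each_node_state(node, N_HIGH_MEMORY) {
		struct zone *z =
			&NODE_DATA(node)->node_zones[ZONE_HIGHMEM];

		/* free + reclaimable highmem, minus this zone's dirty reserve */
		x += zone_page_state(z, NR_FREE_PAGES) +
		     zone_reclaimable_pages(z) -
		     z->dirty_balance_reserve;
	}

Only the local variable z names the highmem zone inside the loop, which
is why the zone->dirty_balance_reserve spelling in the posted hunk
cannot build with CONFIG_HIGHMEM enabled.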