From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 03 Nov 2005 15:39:36 -0800
From: "Martin J. Bligh"
Reply-To: "Martin J. Bligh"
Subject: Re: [Lhms-devel] [PATCH 0/7] Fragmentation Avoidance V19
Message-ID: <53860000.1131061176@flay>
In-Reply-To:
References: <4366C559.5090504@yahoo.com.au> <4366D469.2010202@yahoo.com.au>
 <20051101135651.GA8502@elte.hu> <1130854224.14475.60.camel@localhost>
 <20051101142959.GA9272@elte.hu> <1130856555.14475.77.camel@localhost>
 <20051101150142.GA10636@elte.hu> <1130858580.14475.98.camel@localhost>
 <20051102084946.GA3930@elte.hu> <436880B8.1050207@yahoo.com.au>
 <1130923969.15627.11.camel@localhost> <43688B74.20002@yahoo.com.au>
 <255360000.1130943722@[10.10.2.4]> <4369824E.2020407@yahoo.com.au>
 <306020000.1131032193@[10.10.2.4]> <1131032422.2839.8.camel@laptopd505.fenrus.org>
 <309420000.1131036740@[10.10.2.4]> <311050000.1131040276@[10.10.2.4]>
 <314040000.1131043735@[10.10.2.4]> <43370000.1131057466@flay>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
Sender: owner-linux-mm@kvack.org
Return-Path:
To: Linus Torvalds
Cc: Mel Gorman, Arjan van de Ven, Nick Piggin, Dave Hansen, Ingo Molnar,
 Andrew Morton, kravetz@us.ibm.com, linux-mm, Linux Kernel Mailing List,
 lhms, Arjan van de Ven
List-ID:

> Ahh, you're right, there's a totally separate watermark for highmem.
>
> I think I even remember this. I may even be responsible. I know some of
> our less successful highmem balancing efforts in the 2.4.x timeframe had
> serious trouble when they ran out of highmem, and started pruning lowmem
> very very aggressively. Limiting the highmem watermarks meant that it
> wouldn't do that very often.
>
> I think your patch may in fact be fine, but quite frankly, it needs
> testing under real load with highmem.
>
> In general, I don't _think_ we should do anything different for highmem at
> all, and we should just in general try to keep a percentage of pages
> available. Now, the percentage probably does depend on the zone: we should
> be more aggressive about more "limited" zones, ie the old 16MB DMA zone
> should probably try to keep a higher percentage of free pages around than
> the normal zone, and that in turn should probably keep a higher percentage
> of pages around than the highmem zones.

Hmm. It strikes me that there will be few (if any?) contiguous allocations
out of highmem. PPC64 et al dump everything into ZONE_DMA, though - so
those should be uncapped already.

> And that's not because of fragmentation so much, but simply because the
> lower zones tend to have more "desperate" users. Running out of the normal
> zone is thus a "worse" situation than running out of highmem. And we
> effectively never want to allocate from the 16MB DMA zone at all, unless
> it is our only choice.

Well, it's not 16MB on the other platforms, but ...

> We actually do try to do that with that "lowmem_reserve[]" logic, which
> reserves more pages in the lower zones the bigger the upper zones are (ie
> if we _only_ have memory in the low 16MB, then we don't reserve any of it,
> but if we have _tons_ of memory in the high zones, then we reserve more
> memory for the low zones and thus make the watermarks higher for them).
>
> So the watermarking interacts with that lowmem_reserve logic, and I think
> that on HIGHMEM, you'd be screwed _twice_: first because the "pages_min"
> is limited, and second because HIGHMEM has no lowmem_reserve.
>
> Does that make sense?

Yes. So we were only capping highmem before - I see that now that I
squint at it more closely.
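For anyone else squinting along: the cap lives in setup_per_zone_pages_min()
in mm/page_alloc.c. Paraphrasing from memory - so check the real source; this
is a standalone userspace model, not the kernel code, and the zone sizes in
main() are made up for a 4GB ia32-ish box - the math comes out like this:

#include <stdio.h>

/* Toy zone - just enough state to show the watermark math. */
struct zone {
        const char *name;
        unsigned long present_pages;
        unsigned long pages_min;
};

#define SWAP_CLUSTER_MAX 32UL

/*
 * Model of the setup_per_zone_pages_min() logic: lowmem zones split the
 * global pages_min budget in proportion to their size, while a highmem
 * zone gets present_pages/1024 clamped to [SWAP_CLUSTER_MAX, 128] - that
 * 128 is the cap in question.
 */
static void setup_pages_min(struct zone *zones, int nr, int highmem_idx,
                            unsigned long pages_min_total)
{
        unsigned long lowmem_pages = 0;
        int i;

        for (i = 0; i < nr; i++)
                if (i != highmem_idx)
                        lowmem_pages += zones[i].present_pages;

        for (i = 0; i < nr; i++) {
                if (i == highmem_idx) {
                        unsigned long min = zones[i].present_pages / 1024;

                        if (min < SWAP_CLUSTER_MAX)
                                min = SWAP_CLUSTER_MAX;
                        if (min > 128)
                                min = 128;      /* the highmem cap */
                        zones[i].pages_min = min;
                } else {
                        zones[i].pages_min = pages_min_total *
                                zones[i].present_pages / lowmem_pages;
                }
        }
}

int main(void)
{
        /* 4GB box, 4K pages: 16MB DMA, ~880MB normal, ~3GB highmem. */
        struct zone zones[] = {
                { "DMA",     4096,   0 },
                { "Normal",  225280, 0 },
                { "HighMem", 786432, 0 },
        };
        int i;

        setup_pages_min(zones, 3, 2, 1024);
        for (i = 0; i < 3; i++)
                printf("%-8s pages_min = %lu\n",
                       zones[i].name, zones[i].pages_min);
        return 0;
}

On that config it prints roughly 18 / 1005 / 128, ie the 3GB of highmem
ends up with a smaller reserve than the 880MB normal zone - the capping
we were talking about.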
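And the "screwed twice" interaction: at allocation time the watermark check
adds lowmem_reserve[] on top of pages_min, something like the below (again
a paraphrased standalone model of the core of zone_watermark_ok(), ignoring
the gfp_high/can_try_harder discounts and the per-order free_area loop, and
with made-up reserve numbers rather than the kernel's actual ratios):

#include <stdio.h>

/*
 * Model of the zone_watermark_ok() core: an allocation whose preferred
 * class zone is classzone_idx may take pages from this zone only while
 * the zone's free pages stay above pages_min plus the reserve this zone
 * holds against that allocation class.
 */
static int watermark_ok(long free_pages, long pages_min,
                        const long *lowmem_reserve, int classzone_idx)
{
        return free_pages > pages_min + lowmem_reserve[classzone_idx];
}

int main(void)
{
        /*
         * Illustrative lowmem_reserve[] rows (pages), indexed by the
         * allocation's class zone: 0 = DMA, 1 = Normal, 2 = HighMem.
         * HighMem's row is all zeroes - nothing sits above it - so its
         * capped pages_min is the only thing protecting it.
         */
        const long dma_reserve[]     = { 0, 880, 3952 };
        const long normal_reserve[]  = { 0, 0,   3072 };
        const long highmem_reserve[] = { 0, 0,   0    };

        /*
         * A highmem allocation (classzone_idx 2) probing each zone,
         * with 900 pages free in each and the pages_min values from
         * the previous example:
         */
        printf("DMA ok?     %d\n", watermark_ok(900, 18,   dma_reserve, 2));
        printf("Normal ok?  %d\n", watermark_ok(900, 1005, normal_reserve, 2));
        printf("HighMem ok? %d\n", watermark_ok(900, 128,  highmem_reserve, 2));
        return 0;
}

So a highmem allocation gets bounced off the low zones by both mechanisms,
while highmem itself only has its (capped) pages_min standing between a
stream of highmem allocations and empty - your double whammy.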
I was going off a simplification I'd written for a paper, which is not
generally correct. I doubt fragmentation is a problem in highmem, so maybe
the code is correct as-is. We only want contiguous allocations of virtual
space when it's mapped 1-1 to physical (ie the kernel mapping), or for
real physical things. I suppose I could write something to trawl the
source tree to check that assumption, but it feels right ...

M.