* Getting big areas of memory, in 2.3.x?
  From: Jeff Garzik @ 1999-12-09  1:03 UTC
  To: linux-mm; +Cc: Linux Kernel List

Guys,

What's the best way to get a large region of DMA'able memory for use
with framegrabbers and other greedy drivers?

Per a thread on glx-dev, Andi Kleen mentions that the new 2.3.x MM stuff
still doesn't alleviate the need for bigphysarea and similar patches.

Is there any way a driver can improve its chance of getting a large
region of memory?  I.e., can it tell the system to force out user pages
to make memory available, etc.?

	Thanks,

	Jeff

-- 
Jeff Garzik        | Just once, I wish we would encounter
Building 1024      | an alien menace that wasn't immune to
MandrakeSoft, Inc. | bullets.        -- The Brigadier, "Dr. Who"
* Re: Getting big areas of memory, in 2.3.x?
  From: Alan Cox @ 1999-12-09  2:28 UTC
  To: Jeff Garzik; +Cc: linux-mm, linux-kernel

> What's the best way to get a large region of DMA'able memory for use
> with framegrabbers and other greedy drivers?

Do you need physically linear memory?

> Per a thread on glx-dev, Andi Kleen mentions that the new 2.3.x MM stuff
> still doesn't alleviate the need for bigphysarea and similar patches.

It helps; however, the best answer is to use sane hardware that has
scatter-gather.  E.g. the bttv frame grabbers grab 1 MB of memory or
more, but they grab it as arbitrary pages, not a linear block.
* Re: Getting big areas of memory, in 2.3.x?
  From: Jeff Garzik @ 1999-12-09  2:45 UTC
  To: Alan Cox; +Cc: Linux Kernel List, linux-mm

Alan Cox wrote:
> > What's the best way to get a large region of DMA'able memory for use
> > with framegrabbers and other greedy drivers?
>
> Do you need physically linear memory?

Yes.  For the Meteor-II grabber I don't think so, but it looks like the
older (but mostly compatible) Corona needs it.

> > Per a thread on glx-dev, Andi Kleen mentions that the new 2.3.x MM stuff
> > still doesn't alleviate the need for bigphysarea and similar patches.
>
> It helps; however, the best answer is to use sane hardware that has
> scatter-gather.  E.g. the bttv frame grabbers grab 1 MB of memory or
> more, but they grab it as arbitrary pages, not a linear block.

That's the easy answer too :)

-- 
Jeff Garzik        | Just once, I wish we would encounter
Building 1024      | an alien menace that wasn't immune to
MandrakeSoft, Inc. | bullets.        -- The Brigadier, "Dr. Who"
* Re: Getting big areas of memory, in 2.3.x?
  From: Oliver Xymoron @ 1999-12-09  5:22 UTC
  To: Jeff Garzik; +Cc: Alan Cox, Linux Kernel List, linux-mm

On Wed, 8 Dec 1999, Jeff Garzik wrote:

> Alan Cox wrote:
> > > What's the best way to get a large region of DMA'able memory for use
> > > with framegrabbers and other greedy drivers?
> >
> > Do you need physically linear memory?
>
> Yes.  For the Meteor-II grabber I don't think so, but it looks like the
> older (but mostly compatible) Corona needs it.

Most PCI DMA controllers can send you an end-of-transfer interrupt, at
which point you can hand it the next contiguous segment to transfer to -
software scatter-gather.  Note that the number of segments (fragments)
could very well be far fewer than the number of pages, meaning the
overhead could be pretty minimal, providing the latency doesn't kill you.

If the card has an NT driver, it almost certainly can be made to do this,
as NT has no support for allocating large physically contiguous memory
from drivers and pretty much forces this model.

-- 
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."
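A minimal sketch of the driver-side ("software") scatter-gather Oliver
describes: the device raises an end-of-transfer interrupt and the handler
programs the next physically contiguous fragment.  Everything below is
hypothetical - the register offsets, the grab_* names, and the fragment
table built at buffer-setup time are invented for illustration and are
not taken from any real driver.

    #include <asm/io.h>
    #include <asm/ptrace.h>

    #define DEV_DMA_ADDR    0x00    /* invented register offsets */
    #define DEV_DMA_LEN     0x04
    #define DEV_DMA_GO      0x08

    struct dma_frag {
            unsigned long phys;     /* bus address of the fragment */
            unsigned long len;      /* fragment length in bytes */
    };

    struct grab_dev {
            unsigned long iobase;
            struct dma_frag *frag;  /* list built once at buffer-setup time */
            int nfrags;
            int cur;                /* fragment currently in flight */
    };

    static void grab_frame_complete(struct grab_dev *dev)
    {
            /* flip buffers, wake up the reader, restart at fragment 0, ... */
    }

    static void grab_start_frag(struct grab_dev *dev, int i)
    {
            outl(dev->frag[i].phys, dev->iobase + DEV_DMA_ADDR);
            outl(dev->frag[i].len,  dev->iobase + DEV_DMA_LEN);
            outl(1, dev->iobase + DEV_DMA_GO);
    }

    static void grab_interrupt(int irq, void *dev_id, struct pt_regs *regs)
    {
            struct grab_dev *dev = dev_id;

            /* end of transfer: hand the card the next contiguous piece */
            if (++dev->cur < dev->nfrags)
                    grab_start_frag(dev, dev->cur);
            else
                    grab_frame_complete(dev);
    }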
* Re: Getting big areas of memory, in 2.3.x?
  From: Ingo Molnar @ 1999-12-09 12:25 UTC
  To: Jeff Garzik; +Cc: Alan Cox, Linux Kernel List, linux-mm

On Wed, 8 Dec 1999, Jeff Garzik wrote:

> > > What's the best way to get a large region of DMA'able memory for use
> > > with framegrabbers and other greedy drivers?
> >
> > Do you need physically linear memory?
>
> Yes.  For the Meteor-II grabber I don't think so, but it looks like the
> older (but mostly compatible) Corona needs it.

hm, you could use the bootmem allocator for now - it allocates a
physically contiguous 165MB mem_map[] on my box just fine.  The problem
with bootmem is that it's "too early" in the bootup process; you cannot
cleanly hook into it, because its use is forbidden after
free_all_bootmem() is called.

hm, does anyone have any conceptual problem with a new
allocate_largemem(pages) interface in page_alloc.c?  It's not terribly
hard to scan all bitmaps for available RAM and mark the large memory
area allocated and remove all pages from the freelists.  Such areas can
only be freed via free_largemem(pages).  Both calls will be slow, so
should only be used at driver initialization time and such.

-- mingo
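To make the proposal concrete, this is roughly how a greedy driver might
use such an interface: allocate once at initialization, release at
unload.  Nothing below exists in the kernel; the prototypes, the return
convention, and the GRAB_PAGES constant are guesses, written out only to
show the intended usage pattern.

    #include <linux/init.h>
    #include <linux/mm.h>
    #include <linux/errno.h>

    /* Proposed interface - prototypes are guesses, not real kernel API. */
    void *allocate_largemem(unsigned long pages);   /* slow, init-time only */
    void  free_largemem(void *addr, unsigned long pages);

    #define GRAB_PAGES  ((4 * 1024 * 1024) >> PAGE_SHIFT)  /* 4 MB frame buffer */

    static void *grab_buffer;

    static int __init grab_init(void)
    {
            grab_buffer = allocate_largemem(GRAB_PAGES);
            if (!grab_buffer)
                    return -ENOMEM; /* most likely to succeed early in boot */
            return 0;
    }

    static void grab_cleanup(void)
    {
            free_largemem(grab_buffer, GRAB_PAGES);
    }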
* Re: Getting big areas of memory, in 2.3.x?
  From: Jeff Garzik @ 1999-12-09 20:24 UTC
  To: Ingo Molnar; +Cc: Alan Cox, Linux Kernel List, linux-mm

Ingo Molnar wrote:
> hm, does anyone have any conceptual problem with a new
> allocate_largemem(pages) interface in page_alloc.c?  It's not terribly
> hard to scan all bitmaps for available RAM and mark the large memory
> area allocated and remove all pages from the freelists.  Such areas can
> only be freed via free_largemem(pages).  Both calls will be slow, so
> should only be used at driver initialization time and such.

Would this interface swap out user pages if necessary?  That sort of
interface would be great, and kill a number of hacks floating around out
there.

-- 
Jeff Garzik        | Just once, I wish we would encounter
Building 1024      | an alien menace that wasn't immune to
MandrakeSoft, Inc. | bullets.        -- The Brigadier, "Dr. Who"
* Re: Getting big areas of memory, in 2.3.x?
  From: Kanoj Sarcar @ 1999-12-09 20:31 UTC
  To: Jeff Garzik; +Cc: mingo, alan, linux-kernel, linux-mm

> Ingo Molnar wrote:
> > hm, does anyone have any conceptual problem with a new
> > allocate_largemem(pages) interface in page_alloc.c?  It's not terribly
> > hard to scan all bitmaps for available RAM and mark the large memory
> > area allocated and remove all pages from the freelists.  Such areas can
> > only be freed via free_largemem(pages).  Both calls will be slow, so
> > should only be used at driver initialization time and such.
>
> Would this interface swap out user pages if necessary?  That sort of
> interface would be great, and kill a number of hacks floating around out
> there.

Swapping out user pages is not a sure-shot thing unless Linux implements
reverse maps, so that we can track which page is being used by which pte.

Without rmaps, any possible solution will be quite costly, if not an
outright hack, IMO.  Rmaps are probably not happening in 2.3.

Kanoj
* Re: Getting big areas of memory, in 2.3.x?
  From: Rik van Riel @ 1999-12-09 20:39 UTC
  To: Kanoj Sarcar; +Cc: Jeff Garzik, mingo, alan, linux-kernel, linux-mm

On Thu, 9 Dec 1999, Kanoj Sarcar wrote:

> > Ingo Molnar wrote:
> > > hm, does anyone have any conceptual problem with a new
> > > allocate_largemem(pages) interface in page_alloc.c?  It's not terribly
> > > hard to scan all bitmaps for available RAM and mark the large memory
> > > area allocated and remove all pages from the freelists.  Such areas can
> > > only be freed via free_largemem(pages).  Both calls will be slow, so
> > > should only be used at driver initialization time and such.
> >
> > Would this interface swap out user pages if necessary?  That sort of
> > interface would be great, and kill a number of hacks floating around out
> > there.
>
> Swapping out user pages is not a sure-shot thing unless Linux implements
> reverse maps, so that we can track which page is being used by which pte.
>
> Without rmaps, any possible solution will be quite costly, if not an
> outright hack, IMO.

Not only that, we would also need to make sure that no kernel data pages
are in the way.

This means we'll need both reverse maps and a "real" zoned allocator.
Not a 2.4 thing, I'm afraid :(

Rik
-- 
The Internet is not a network of computers. It is a network
of people. That is its real strength.
* Re: Getting big areas of memory, in 2.3.x?
  From: Kanoj Sarcar @ 1999-12-09 20:54 UTC
  To: Rik van Riel; +Cc: jgarzik, mingo, alan, linux-kernel, linux-mm

> On Thu, 9 Dec 1999, Kanoj Sarcar wrote:
> > > Ingo Molnar wrote:
> > > > hm, does anyone have any conceptual problem with a new
> > > > allocate_largemem(pages) interface in page_alloc.c?
> > >
> > > Would this interface swap out user pages if necessary?  That sort of
> > > interface would be great, and kill a number of hacks floating around
> > > out there.
> >
> > Swapping out user pages is not a sure-shot thing unless Linux implements
> > reverse maps, so that we can track which page is being used by which pte.
> >
> > Without rmaps, any possible solution will be quite costly, if not an
> > outright hack, IMO.
>
> Not only that, we would also need to make sure that no kernel data pages
> are in the way.
>
> This means we'll need both reverse maps and a "real" zoned allocator.
> Not a 2.4 thing, I'm afraid :(

Well, at least in 2.3, kernel data (and page caches) are below 1G, which
means there's a lot of memory possible out there with references only
from user memory.  Shm page references are revokable too.  Of course, in
2.5, things will probably change.

Kanoj
* Re: Getting big areas of memory, in 2.3.x?
  From: Ingo Molnar @ 1999-12-09 23:21 UTC
  To: Kanoj Sarcar; +Cc: Rik van Riel, jgarzik, alan, linux-kernel, linux-mm

On Thu, 9 Dec 1999, Kanoj Sarcar wrote:

> Well, at least in 2.3, kernel data (and page caches) are below 1G, which
> means there's a lot of memory possible out there with references only
> from user memory.  Shm page references are revokable too. [...]

we already kind of replace pages, see replace_with_highmem().  Reverse
ptes do help, but are not a necessity to get this.  Neither reverse ptes
nor any other method guarantees that a large amount of contiguous RAM can
be allocated.  Only boot-time allocation can be guaranteed.

-- mingo
* Re: Getting big areas of memory, in 2.3.x?
  From: Kanoj Sarcar @ 1999-12-09 22:27 UTC
  To: Ingo Molnar; +Cc: riel, jgarzik, alan, linux-kernel, linux-mm

> On Thu, 9 Dec 1999, Kanoj Sarcar wrote:
> > Well, at least in 2.3, kernel data (and page caches) are below 1G, which
> > means there's a lot of memory possible out there with references only
> > from user memory.  Shm page references are revokable too. [...]
>
> we already kind of replace pages, see replace_with_highmem().  Reverse
> ptes do help, but are not a necessity to get this.  Neither reverse ptes
> nor any other method guarantees that a large amount of contiguous RAM can
> be allocated.  Only boot-time allocation can be guaranteed.

Unfortunately, a bunch of these drivers are loadable modules, so unless
they do some trickery, boot-time allocation does not apply to them.  A
similar category of drivers would like to do this dynamically too.  For
drivers that want to do this a fixed number of times at bootup, yes,
boot-time allocation is the answer ...

If I am not wrong, replace_with_highmem() replaces a page when the kernel
is quite sure there's exactly one reference on the page, and that is from
the executing code.  For the dynamic case, the problem is in trying to
rip away an unknown number of kernel/user references from a given page.
Rmaps do not guarantee it, they just improve the chances of success in
such problems at an affordable cost.

Kanoj
* Re: Getting big areas of memory, in 2.3.x?
  From: Ingo Molnar @ 1999-12-09 23:16 UTC
  To: Rik van Riel; +Cc: Kanoj Sarcar, Jeff Garzik, alan, linux-kernel, linux-mm

On Thu, 9 Dec 1999, Rik van Riel wrote:

> a "real" zoned allocator. Not a 2.4 thing,

would you mind elaborating what such a "real" zoned allocator has,
compared to the current one?

-- mingo
* Re: Getting big areas of memory, in 2.3.x?
  From: Benjamin C.R. LaHaise @ 1999-12-09 23:09 UTC
  To: Ingo Molnar; +Cc: Rik van Riel, Kanoj Sarcar, Jeff Garzik, alan, linux-kernel, linux-mm

On Fri, 10 Dec 1999, Ingo Molnar wrote:

> On Thu, 9 Dec 1999, Rik van Riel wrote:
> > a "real" zoned allocator. Not a 2.4 thing,
>
> would you mind elaborating what such a "real" zoned allocator has,
> compared to the current one?

The type of allocation determines what pool memory is allocated from.
I.e. nonpageable kernel allocations come from one zone, atomic
allocations from another and user from yet another.  It's basically the
same thing that the slab does, except for pages.  The key advantage is
that allocations of different types are not mixed, so the lifetime of
allocations in the same zone tends to be similar and fragmentation tends
to be lower.

		-ben
* Re: Getting big areas of memory, in 2.3.x?
  From: Ingo Molnar @ 1999-12-10  0:44 UTC
  To: Benjamin C.R. LaHaise; +Cc: Rik van Riel, Kanoj Sarcar, Jeff Garzik, alan, linux-kernel, linux-mm

On Thu, 9 Dec 1999, Benjamin C.R. LaHaise wrote:

> The type of allocation determines what pool memory is allocated from.
> I.e. nonpageable kernel allocations come from one zone, atomic
> allocations from another and user from yet another.  It's basically the
> same thing that the slab does, except for pages.  The key advantage is
> that allocations of different types are not mixed, so the lifetime of
> allocations in the same zone tends to be similar and fragmentation tends
> to be lower.

well, this is perfectly possible with the current zone allocator (check
out how build_zonelists() builds dynamic allocation paths).  I don't see
much point in it though: it might prevent fragmentation to a certain
degree, but I don't think it is a fair use of memory resources.  (I'm
pretty sure the atomic zone would stay unused most of the time.)  But you
might want to try it out - just pass many small zones in
free_area_init_core() and modify build_zonelists() to have private and
isolated zones for GFP_ATOMIC, etc.

the SLAB is completely different as it has micro-units of a few pages.  A
zoned allocator must work on a larger scale, and cannot afford wasting
memory on the order of those larger units.

-- mingo
* Re: Getting big areas of memory, in 2.3.x?
  From: William J. Earl @ 1999-12-10  0:18 UTC
  To: Ingo Molnar; +Cc: Benjamin C.R. LaHaise, Rik van Riel, Kanoj Sarcar, Jeff Garzik, alan, linux-kernel, linux-mm

Ingo Molnar writes:
> On Thu, 9 Dec 1999, Benjamin C.R. LaHaise wrote:
> > The type of allocation determines what pool memory is allocated from.
> > I.e. nonpageable kernel allocations come from one zone, atomic
> > allocations from another and user from yet another.  It's basically the
> > same thing that the slab does, except for pages.  The key advantage is
> > that allocations of different types are not mixed, so the lifetime of
> > allocations in the same zone tends to be similar and fragmentation tends
> > to be lower.
>
> well, this is perfectly possible with the current zone allocator (check
> out how build_zonelists() builds dynamic allocation paths).  I don't see
> much point in it though: it might prevent fragmentation to a certain
> degree, but I don't think it is a fair use of memory resources.  (I'm
> pretty sure the atomic zone would stay unused most of the time.)  But you
> might want to try it out - just pass many small zones in
> free_area_init_core() and modify build_zonelists() to have private and
> isolated zones for GFP_ATOMIC, etc.
>
> the SLAB is completely different as it has micro-units of a few pages.  A
> zoned allocator must work on a larger scale, and cannot afford wasting
> memory on the order of those larger units.

     ...

     For a production implementation of large pages, the zones have to be
more dynamic.  That is, there has to be a way to move a large page from
the "moveable" zone to the "unmoveable" zone (when we run out of
"unmoveable" space and the kernel wants more), and to temporarily put
moveable (small) pages in the "unmoveable" zone, to avoid just this
inefficient use of memory.  (This assumes that an allocation of an
"unmoveable" page will evict a "moveable" page from the "unmoveable" zone
before expanding the "unmoveable" zone, if there are no free pages left
in the "unmoveable" zone.)

     Even this scheme is, of course, not a perfect solution if there are
multiple large page sizes and "unmoveable" allocations can request a page
of any size, since one could then wind up with fragmentation of
unmoveable memory.  A reasonable compromise might be to force
"unmoveable" allocations larger than a basic page to some particular
large page size, make that page size the unit of additions to the
"unmoveable" zone, and delete from the "unmoveable" zone any large pages
which become entirely free (or composed only of free and "moveable"
pages).  This last is what I did on the SGI O2.  It still allows for
"moveable" large pages of any size, which gains the efficiency benefits
of large pages for applications, at the cost of limiting driver and other
kernel allocations to the specific large page size.
* Re: Getting big areas of memory, in 2.3.x?
  From: Stephen C. Tweedie @ 1999-12-11 19:56 UTC
  To: Ingo Molnar; +Cc: Benjamin C.R. LaHaise, Rik van Riel, Kanoj Sarcar, Jeff Garzik, alan, linux-kernel, linux-mm, Stephen Tweedie

Hi,

On Fri, 10 Dec 1999 01:44:53 +0100 (CET), Ingo Molnar
<mingo@chiara.csoma.elte.hu> said:

> On Thu, 9 Dec 1999, Benjamin C.R. LaHaise wrote:
>> The type of allocation determines what pool memory is allocated from.
>> I.e. nonpageable kernel allocations come from one zone, atomic
>> allocations from another and user from yet another.
...
> well, this is perfectly possible with the current zone allocator (check
> out how build_zonelists() builds dynamic allocation paths).  I don't see
> much point in it though: it might prevent fragmentation to a certain
> degree, but I don't think it is a fair use of memory resources.  (I'm
> pretty sure the atomic zone would stay unused most of the time.)

Don't use static zones then.

Something I talked about with Linus a while back was to separate memory
into 4MB or 16MB zones, and do allocation not from individual zones but
from zone lists.  Then you just keep track of two lists of zones: one
which contains zones which are known to have been used for non-pageable
allocations, and another in which all allocations are pageable.

The pageable-allocation zone family can always be used for large
allocations: you just select a contiguous region of pages which aren't
currently being used by the contiguous allocator, and page them out (or
relocate them to a different zone if you prefer).  If this is only needed
by device initialisation, the relocation doesn't have to be fast.  A
dumb, brute-force search (such as is already done by sys_swapoff()) will
do fine.

--Stephen
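Purely to illustrate the shape of this idea - none of the following is
code from any kernel tree, and the structure, list, and function names
are invented - the two zone families and the selection step might look
like:

    #include <linux/list.h>

    /* One entry per 4 MB or 16 MB chunk of physical memory. */
    struct mem_zone {
            unsigned long start_pfn;
            unsigned long pages;
            struct list_head list;
    };

    static LIST_HEAD(pageable_zones);   /* only ever held pageable pages   */
    static LIST_HEAD(mixed_zones);      /* has held non-pageable pages too */

    /*
     * A large contiguous allocation only considers the pageable family:
     * everything in such a zone can be paged out or relocated, so a slow,
     * brute-force eviction pass over the chosen zone will finish
     * (mlock()ed pages aside).
     */
    static struct mem_zone *pick_zone_for_large_alloc(unsigned long pages)
    {
            struct list_head *p;

            for (p = pageable_zones.next; p != &pageable_zones; p = p->next) {
                    struct mem_zone *z = list_entry(p, struct mem_zone, list);

                    if (z->pages >= pages)
                            return z;
            }
            return NULL;
    }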
* Re: Getting big areas of memory, in 2.3.x?
  From: Rik van Riel @ 1999-12-10 12:21 UTC
  To: Ingo Molnar; +Cc: Kanoj Sarcar, Jeff Garzik, alan, linux-kernel, linux-mm

On Fri, 10 Dec 1999, Ingo Molnar wrote:
> On Thu, 9 Dec 1999, Rik van Riel wrote:
> > a "real" zoned allocator. Not a 2.4 thing,
>
> would you mind elaborating what such a "real" zoned allocator has,
> compared to the current one?

It would assign certain types of use to certain zones of memory and do so
dynamically.  I.e. we'd have a 4MB zone allocated to kernel and pagetable
stuff and other areas assigned to user pages.  Now when we need to have
another kernel data area we can move pages out of one of the user areas
as needed.  We can also move out arbitrarily large chunks of contiguous
user pages if we need to allocate such an area.

Rik
-- 
The Internet is not a network of computers. It is a network
of people. That is its real strength.
* Re: Getting big areas of memory, in 2.3.x?
  From: Ingo Molnar @ 1999-12-10 13:42 UTC
  To: Rik van Riel; +Cc: Kanoj Sarcar, Jeff Garzik, alan, linux-kernel, linux-mm

On Fri, 10 Dec 1999, Rik van Riel wrote:

> It would assign certain types of use to certain zones of memory and do
> so dynamically.

(this is exactly what happens in the current page_alloc.c.  Check out how
we handle GFP_DMA for example.)

> I.e. we'd have a 4MB zone allocated to kernel and pagetable stuff and
> other areas assigned to user pages.  Now when we need to have another
> kernel data area we can move pages out of one of the user areas as
> needed.  We can also move out arbitrarily large chunks of contiguous
> user pages if we need to allocate such an area.

this is possible (sans the relocation process, which is a special thing
anyway), but why would we want to allocate large chunks of contiguous
user pages?

-- mingo
* Re: Getting big areas of memory, in 2.3.x?
  From: William J. Earl @ 1999-12-10 18:04 UTC
  To: Ingo Molnar; +Cc: Rik van Riel, Kanoj Sarcar, Jeff Garzik, alan, linux-kernel, linux-mm

Ingo Molnar writes:
...
> this is possible (sans the relocation process, which is a special thing
> anyway), but why would we want to allocate large chunks of contiguous
> user pages?
...

     To be able to use, for example, 2 or 4 MB pages on x86 to reduce TLB
thrashing (and, if the I/O path understands large pages, to reduce the
software overhead of setting up large direct or raw I/O requests).
* Re: Getting big areas of memory, in 2.3.x?
  From: William J. Earl @ 1999-12-09 20:50 UTC
  To: Kanoj Sarcar; +Cc: Jeff Garzik, mingo, alan, linux-kernel, linux-mm

Kanoj Sarcar writes:
> > Ingo Molnar wrote:
> > > hm, does anyone have any conceptual problem with a new
> > > allocate_largemem(pages) interface in page_alloc.c?  It's not terribly
> > > hard to scan all bitmaps for available RAM and mark the large memory
> > > area allocated and remove all pages from the freelists.  Such areas can
> > > only be freed via free_largemem(pages).  Both calls will be slow, so
> > > should only be used at driver initialization time and such.
> >
> > Would this interface swap out user pages if necessary?  That sort of
> > interface would be great, and kill a number of hacks floating around out
> > there.
>
> Swapping out user pages is not a sure-shot thing unless Linux implements
> reverse maps, so that we can track which page is being used by which pte.
>
> Without rmaps, any possible solution will be quite costly, if not an
> outright hack, IMO.

      With rmaps, one can simply move the page, instead of swapping it
out.  Also, even with rmaps, we will have to have placement control for
"long term" unmoveable allocations.  That is, whenever a page is
allocated for some use where it cannot be moved by the large page
assembly routine, such as certain kernel data structures, it must be
placed in an area of memory devoted to such pages, where that area of
memory can grow, by adding large-page-sized chunks of space to it, but
can be expected to never shrink.  If a page is converted to such a use,
it must be moved to the "unmoveable" area.  Pages in the "unmoveable"
area can be used for "moveable" purposes, but will sometimes need to be
moved to the "moveable" area to make room for allocations of "unmoveable"
pages, to minimize the need to grow the "unmoveable" area.

      Without placement control, memory gradually becomes fragmented with
unmoveable pages, so, after the system has been running a while, it
becomes impossible to allocate any large pages, even with rmaps.  The SGI
O2 implements this model (in IRIX), and successfully allocates large
pages on demand, occupying in total a large percentage of main memory,
even after the system has been running for weeks.  The main change
required to interfaces is a flag to page allocation specifying
"unmoveable allocation" and a pair of "make page unmoveable" and "make
page moveable" functions, to be called when, for example, an application
locks some memory in place, in order to point hardware control blocks at
it.  The "make page unmoveable" routine has to handle relocating the
page, if necessary, including possibly moving some "moveable" page out of
the way.  The overhead is pretty small, except when memory is highly
congested.  The page cleaner should do a little extra work, to try to
keep some pages in the "unmoveable" area available, to reduce the
likelihood of needing to move pages when allocating an unmoveable page or
when making a moveable page unmoveable.
* Re: Getting big areas of memory, in 2.3.x?
  From: Ingo Molnar @ 1999-12-09 23:15 UTC
  To: Jeff Garzik; +Cc: Alan Cox, Linux Kernel List, linux-mm

On Thu, 9 Dec 1999, Jeff Garzik wrote:

> > hm, does anyone have any conceptual problem with a new
> > allocate_largemem(pages) interface in page_alloc.c?  It's not terribly
> > hard to scan all bitmaps for available RAM and mark the large memory
> > area allocated and remove all pages from the freelists.  Such areas can
> > only be freed via free_largemem(pages).  Both calls will be slow, so
> > should only be used at driver initialization time and such.
>
> Would this interface swap out user pages if necessary?  That sort of
> interface would be great, and kill a number of hacks floating around out
> there.

not at the moment - but it's not really necessary, because this is meant
for driver initialization time, which usually happens at boot time.

-- mingo
* Re: Getting big areas of memory, in 2.3.x?
  From: William J. Earl @ 1999-12-09 22:13 UTC
  To: Ingo Molnar; +Cc: Jeff Garzik, Alan Cox, Linux Kernel List, linux-mm

Ingo Molnar writes:
> On Thu, 9 Dec 1999, Jeff Garzik wrote:
> > Would this interface swap out user pages if necessary?  That sort of
> > interface would be great, and kill a number of hacks floating around out
> > there.
>
> not at the moment - but it's not really necessary, because this is meant
> for driver initialization time, which usually happens at boot time.
...

      That is not the case for loadable (modular) drivers.  Loading st as
a module, for example, after boot time sometimes works and sometimes does
not, especially if you set the maximum buffer size larger (to, say, 128K,
as is needed on some drives for good space efficiency).
* Re: Getting big areas of memory, in 2.3.x?
  From: Alan Cox @ 1999-12-09 22:26 UTC
  To: William J. Earl; +Cc: mingo, jgarzik, alan, linux-kernel, linux-mm

> That is not the case for loadable (modular) drivers.  Loading st as
> a module, for example, after boot time sometimes works and sometimes does
> not, especially if you set the maximum buffer size larger (to, say, 128K,
> as is needed on some drives for good space efficiency).

Don't mix up crap code with crap hardware.  Scsi generic had similar
problems and has been fixed.  There are very, very few non-scatter-gather
SCSI controllers.
* Re: Getting big areas of memory, in 2.3.x?
  From: William J. Earl @ 1999-12-09 23:42 UTC
  To: Alan Cox; +Cc: mingo, jgarzik, linux-kernel, linux-mm

Alan Cox writes:
> > That is not the case for loadable (modular) drivers.  Loading st as
> > a module, for example, after boot time sometimes works and sometimes does
> > not, especially if you set the maximum buffer size larger (to, say, 128K,
> > as is needed on some drives for good space efficiency).
>
> Don't mix up crap code with crap hardware.  Scsi generic had similar
> problems and has been fixed.  There are very, very few non-scatter-gather
> SCSI controllers.

       I only mentioned st as an example of the inability to get a large
page long after system startup.  Large pages are good for a variety of
purposes.  For example, large pages for programs with large code or data
footprints can dramatically reduce TLB misses.  If the I/O system learns
to do direct I/O, the overhead of setting up large I/O operations,
whether for disk I/O or for OpenGL operations such as writing a large
image to the screen (via DMA), is much reduced when the I/O is done from
large pages.  The CPU overhead of setting up I/O operations is pretty
minimal when you are doing file I/O to and from a single IDE disk, but
far from minimal for higher-bandwidth targets, such as a graphics
controller or an HDTV camera.
* Re: Getting big areas of memory, in 2.3.x?
  From: Alan Cox @ 1999-12-09 23:50 UTC
  To: William J. Earl; +Cc: alan, mingo, jgarzik, linux-kernel, linux-mm

> For example, large pages for programs with large code or data footprints
> can dramatically reduce TLB misses.  If the I/O system learns to do direct

Yep.  One thing IRIX always seemed to be rather neat about was page size
dependent on the RAM size of the box.

> I/O, the overhead of setting up large I/O operations, whether for disk I/O
> or for OpenGL operations such as writing a large image to the screen
> (via DMA), is much reduced when the I/O is done from large pages.

PCs have the AGP GART.  That provides an MMU for the graphics card, in
effect.

> for higher-bandwidth targets, such as a graphics controller or an
> HDTV camera.

I don't know of any capture cards that don't do scatter-gather.  Most of
them do scatter-gather with skipping and byte alignment so you can DMA
around other windows.

This is the main point.  There are so few devices that actually _have_ to
have lots of linear memory that it is questionable whether it is worth
paying the price to allow modules to allocate that way.
* Re: Getting big areas of memory, in 2.3.x?
  From: William J. Earl @ 1999-12-10  0:30 UTC
  To: Alan Cox; +Cc: mingo, jgarzik, linux-kernel, linux-mm

Alan Cox writes:
...
> > for higher-bandwidth targets, such as a graphics controller or an
> > HDTV camera.
>
> I don't know of any capture cards that don't do scatter-gather.  Most of
> them do scatter-gather with skipping and byte alignment so you can DMA
> around other windows.
>
> This is the main point.  There are so few devices that actually _have_ to
> have lots of linear memory that it is questionable whether it is worth
> paying the price to allow modules to allocate that way.

       If the only issue were devices which cannot do scatter-gather, I
would certainly agree.  However, except for the SGI O2 (which only cares
about 64 KB pages in hardware, anyway), all of the SGI hardware has been
happy to do scatter-gather.  What we found with (high resolution) digital
media and other applications which do a lot of large DMAs was that the
overhead of doing the equivalent of map_kiobuf()/unmap_kiobuf() for large
buffers composed of many small pages was substantial, compared to doing
it for large buffers composed of large pages.  Admittedly, the IRIX
equivalent is less efficient than map_kiobuf(), but map_kiobuf() does
still have to touch a lot of cache lines when visiting all of the small
pages in a large buffer.

       Then too, there is the matter of TLB misses for applications which
visit a lot of data, especially on processors with reasonably large
caches.  With 4 KB pages and 64 TLB entries, the TLB cannot map all of a
cache larger than 256 KB.  If the cache is, say, 2 MB and the application
cycles through many of the pages in the cache in a loop, you can wind up
with a TLB miss for almost every load (other than those from the stack).
With 1 MB pages, there are almost no TLB misses.
* Re: Getting big areas of memory, in 2.3.x?
  From: Alan Cox @ 1999-12-10  0:37 UTC
  To: William J. Earl; +Cc: alan, mingo, jgarzik, linux-kernel, linux-mm

> Then too, there is the matter of TLB misses for applications which
> visit a lot of data, especially on processors with reasonably large
> caches.  With 4 KB pages and 64 TLB entries, the TLB cannot map all of a
> cache larger than 256 KB.  If the cache is, say, 2 MB and the application
> cycles through many of the pages in the cache in a loop, you can wind up
> with a TLB miss for almost every load (other than those from the stack).
> With 1 MB pages, there are almost no TLB misses.

With very large amounts of memory I don't doubt this.  x86 is, alas,
crippled with a choice of 4 KB, 2 MB, or 4 MB pages.
* Re: Getting big areas of memory, in 2.3.x?
  From: Oliver Xymoron @ 1999-12-10  4:19 UTC
  To: Alan Cox; +Cc: William J. Earl, mingo, jgarzik, linux-kernel, linux-mm

On Thu, 9 Dec 1999, Alan Cox wrote:

> > for higher-bandwidth targets, such as a graphics controller or an
> > HDTV camera.
>
> I don't know of any capture cards that don't do scatter-gather.  Most of
> them do scatter-gather with skipping and byte alignment so you can DMA
> around other windows.

I know of one, built internally, using a standard PCI controller.  And it
pumps data a lot faster than a typical frame grabber.  But it's not a big
deal, because as I mentioned before, most if not all PCI board chipsets
can send you an interrupt at the end of a short DMA transfer, which means
you can program another transfer immediately afterwards and thereby do
scatter-gather in your driver.

If your driver preallocates a large virtual space, locks it down, and
then scans it to create a list of fragments of contiguous memory, the
interrupt handler can be made pretty fast and simple.  Alternately you
can ask for chunks of linear memory in smaller and smaller sizes until
you've gathered enough.  The overhead here is usually not a big deal at
all unless your device has no buffering, in which case it had better be
able to do scatter-gather on its own anyway.  In some ways, the latency
is better because you can start processing the data from partial
transfers before you'd be able to with a single-notification s-g setup.

> This is the main point.  There are so few devices that actually _have_ to
> have lots of linear memory that it is questionable whether it is worth
> paying the price to allow modules to allocate that way.

Especially when many of the exceptions can be handled in another way.

-- 
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."
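The second fallback Oliver mentions - asking for smaller and smaller
physically contiguous chunks until enough has been gathered - can be
sketched with the stock page allocator.  This is an illustrative sketch
only; the chunk structure and the starting order are arbitrary choices,
not from any real driver:

    #include <linux/mm.h>

    struct chunk {
            unsigned long addr;     /* kernel virtual address of the chunk */
            int order;              /* chunk size is PAGE_SIZE << order */
    };

    /*
     * Gather at least 'bytes_wanted' bytes as a short list of physically
     * contiguous chunks, dropping to smaller orders as big ones run out.
     * Returns the number of chunks used, or -1 after freeing everything.
     */
    static int gather_buffer(struct chunk *c, int max_chunks,
                             unsigned long bytes_wanted)
    {
            unsigned long got = 0;
            int n = 0, order = 5;   /* start with 128 KB chunks (4 KB pages) */

            while (got < bytes_wanted && n < max_chunks) {
                    unsigned long p = __get_free_pages(GFP_KERNEL, order);

                    if (!p) {
                            if (--order < 0)
                                    break;  /* even single pages are gone */
                            continue;       /* retry with smaller chunks */
                    }
                    c[n].addr = p;
                    c[n].order = order;
                    got += PAGE_SIZE << order;
                    n++;
            }

            if (got < bytes_wanted) {
                    while (n-- > 0)
                            free_pages(c[n].addr, c[n].order);
                    return -1;
            }
            return n;
    }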
* Re: Getting big areas of memory, in 2.3.x?
  From: Thomas Sailer @ 1999-12-10 10:14 UTC
  To: Alan Cox; +Cc: mingo, linux-kernel, linux-mm

Alan Cox wrote:
> This is the main point.  There are so few devices that actually _have_ to
> have lots of linear memory that it is questionable whether it is worth
> paying the price to allow modules to allocate that way.

Soundcard hardware wavetable synthesizers come to mind.  There are very
few such cards that do _not_ require multimegabyte contiguous memory.
But then again, software synthesizers are realizable these days.

Tom
* Re: Getting big areas of memory, in 2.3.x?
  From: Ingo Molnar @ 1999-12-09 23:24 UTC
  To: William J. Earl; +Cc: Jeff Garzik, Alan Cox, Linux Kernel List, linux-mm

On Thu, 9 Dec 1999, William J. Earl wrote:

> > not at the moment - but it's not really necessary, because this is meant
> > for driver initialization time, which usually happens at boot time.
>
> That is not the case for loadable (modular) drivers.  Loading st as
> a module, for example, after boot time sometimes works and sometimes
> does not, especially if you set the maximum buffer size larger (to,
> say, 128K, as is needed on some drives for good space efficiency).

yep, if e.g. an fsck happened before modules are loaded then RAM is
filled up with the buffer-cache.  The best guarantee is to compile such
drivers into the kernel.

-- mingo
* Re: Getting big areas of memory, in 2.3.x?
  From: Jeff Garzik @ 1999-12-09 22:33 UTC
  To: Ingo Molnar; +Cc: William J. Earl, Alan Cox, Linux Kernel List, linux-mm

Ingo Molnar wrote:
> yep, if e.g. an fsck happened before modules are loaded then RAM is
> filled up with the buffer-cache.  The best guarantee is to compile such
> drivers into the kernel.

Buffer cache is disposable memory, though...

-- 
Jeff Garzik        | Just once, I wish we would encounter
Building 1024      | an alien menace that wasn't immune to
MandrakeSoft, Inc. | bullets.        -- The Brigadier, "Dr. Who"
* Re: Getting big areas of memory, in 2.3.x?
  From: Rogier Wolff @ 1999-12-09 23:32 UTC
  To: Ingo Molnar; +Cc: William J. Earl, Jeff Garzik, Alan Cox, Linux Kernel List, linux-mm

Ingo Molnar wrote:
> yep, if e.g. an fsck happened before modules are loaded then RAM is
> filled up with the buffer-cache.  The best guarantee is to compile such
> drivers into the kernel.

My ISDN drivers don't start up correctly after an fsck.  What I should do
is:

	hogmem 8 &
	sleep 5
	kill %1

before trying to start the ISDN drivers.  (This is on a 16M machine.)

			Roger.

-- 
** R.E.Wolff@BitWizard.nl ** http://www.BitWizard.nl/ ** +31-15-2137555 **
*-- BitWizard writes Linux device drivers for any device you may have! --*
"I didn't say it was your fault. I said I was going to blame it on you."
* Re: Getting big areas of memory, in 2.3.x?
  From: JF Martinez @ 1999-12-09 23:44 UTC
  To: mingo; +Cc: wje, R.E.Wolff, alan, linux-kernel, linux-mm

> Ingo Molnar wrote:
> > yep, if e.g. an fsck happened before modules are loaded then RAM is
> > filled up with the buffer-cache.  The best guarantee is to compile such
> > drivers into the kernel.

Modules are crucial.  The best guarantee is to fix the problem and keep
the drivers where they must be: in modules, not in the main kernel.

-- 
Jean Francois Martinez
* Re: Getting big areas of memory, in 2.3.x?
  From: Ingo Molnar @ 1999-12-10  0:52 UTC
  To: JF Martinez; +Cc: wje, R.E.Wolff, Jeff Garzik, Alan Cox, linux-kernel, MM mailing list

On Fri, 10 Dec 1999, JF Martinez wrote:

> > > yep, if e.g. an fsck happened before modules are loaded then RAM is
> > > filled up with the buffer-cache.  The best guarantee is to compile such
> > > drivers into the kernel.
>
> Modules are crucial.  The best guarantee is to fix the problem and keep
> the drivers where they must be: in modules, not in the main kernel.

modules are nice for many things (like installation), but if you expect
to be able to allocate 100MB of contiguous RAM on a booted-up 128MB box
then you are simply out of luck.  if modules with tough RAM needs are
absolutely required for whatever reason, then use initrd and there will
be no fsck problems ...

-- mingo
* Re: Getting big areas of memory, in 2.3.x?
  From: Andi Kleen @ 1999-12-09 23:46 UTC
  To: Rogier Wolff; +Cc: Ingo Molnar, William J. Earl, Jeff Garzik, Alan Cox, Linux Kernel List, linux-mm

On Fri, Dec 10, 1999 at 12:32:01AM +0100, Rogier Wolff wrote:
> Ingo Molnar wrote:
> > yep, if e.g. an fsck happened before modules are loaded then RAM is
> > filled up with the buffer-cache.  The best guarantee is to compile such
> > drivers into the kernel.
>
> My ISDN drivers don't start up correctly after an fsck.

This is a known bug in the isdn driver.  They use a >64K array for their
device structures.  The easy fix is to just replace the kmalloc with a
vmalloc() [the better fix would be to use an array of pointers and
allocate the device structures only when needed].  These are just
internal structures that are never touched by hardware, so vmalloc is
fine.

I believe Karsten has fixed it in the latest I4L tree.

-Andi

---
This is like TV. I don't like TV.
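The shape of the fix Andi describes, on a made-up structure (struct
isdn_devs here is a stand-in for the real I4L table, not its actual
name): the table is only ever touched by the CPU, never by hardware, so
it does not need to be physically contiguous and vmalloc() is sufficient.

    #include <linux/vmalloc.h>
    #include <linux/string.h>

    /* stand-in for the real, >64K table of device structures */
    struct isdn_devs {
            char pad[80 * 1024];
    };

    static struct isdn_devs *alloc_dev_table(void)
    {
            /* before: kmalloc(sizeof(struct isdn_devs), GFP_KERNEL), which
             * fails once physical memory is fragmented, since kmalloc needs
             * one contiguous chunk for the whole array */
            struct isdn_devs *t = vmalloc(sizeof(struct isdn_devs));

            if (t)
                    memset(t, 0, sizeof(*t));
            return t;       /* free with vfree(), not kfree() */
    }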
* Re: Getting big areas of memory, in 2.3.x?
  From: Stephen C. Tweedie @ 1999-12-10 13:52 UTC
  To: Ingo Molnar; +Cc: Jeff Garzik, Alan Cox, Linux Kernel List, linux-mm, Stephen Tweedie

Hi,

On Thu, 9 Dec 1999 13:25:00 +0100 (CET), Ingo Molnar
<mingo@chiara.csoma.elte.hu> said:

> hm, does anyone have any conceptual problem with a new
> allocate_largemem(pages) interface in page_alloc.c?  It's not terribly
> hard to scan all bitmaps for available RAM and mark the large memory
> area allocated and remove all pages from the freelists.

Even better: the zoned allocator makes it pretty easy to reserve (say)
the top 25% of memory for use only by freeable (i.e. page cache and
anonymous) pages: just make a separate zone for that.  If there is memory
that you know you can reshuffle, then a slow, swapout-style exhaustive VM
search will eventually let you allocate any page you want from that zone
(barring only mlock()ed pages).

That's maybe more work than we want for a problem which may disappear
eventually of its own accord: a lot of AGP chipsets these days have a
GART which is visible from the PCI side, and that lets you map
discontiguous physical pages into a virtual region which looks contiguous
to the PCI hardware.  There's similar hardware on the Sparc and Alpha PCI
boxes (is it universal on PCI buses on those platforms?)

--Stephen