* Re: kmalloc with GFP_DMA, or get_free_pages!!!
@ 2001-03-21 18:57 Jalajadevi Ganapathy
2001-03-21 18:57 ` Jeff Garzik
0 siblings, 1 reply; 4+ messages in thread
From: Jalajadevi Ganapathy @ 2001-03-21 18:57 UTC (permalink / raw)
To: Jeff Garzik; +Cc: Linux-MM
I could not find that file in the Documentation directory.
I have one more question. I read in a book that virt_to_phys is the same
as virt_to_bus for PCI devices. Is that true?
* Re: kmalloc with GFP_DMA, or get_free_pages!!!
2001-03-21 18:57 kmalloc with GFP_DMA, or get_free_pages!!! Jalajadevi Ganapathy
@ 2001-03-21 18:57 ` Jeff Garzik
0 siblings, 0 replies; 4+ messages in thread
From: Jeff Garzik @ 2001-03-21 18:57 UTC (permalink / raw)
To: Jalajadevi Ganapathy; +Cc: Linux-MM
Jalajadevi Ganapathy wrote:
>
> I could not find that file in the Documentation directory.
It's in kernel 2.4:
> [jgarzik@rum linux_2_4]$ ls -l Documentation/DM*
> -rw-r--r-- 1 jgarzik jgarzik 15302 Mar 7 04:00 Documentation/DMA-mapping.txt
> I have one more question. I read in a book that virt_to_phys is the same
> as virt_to_bus for PCI devices. Is that true?
On some platforms yes, on others no. Either way, in kernel 2.4.x at
least, do not use virt_to_bus or virt_to_phys; use the DMA mapping API
instead. Using virt_to_bus will break the link step on some platforms.
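
Roughly what that looks like with the 2.4 pci_* calls (a sketch only,
based on Documentation/DMA-mapping.txt; "pdev", "buf" and "len" here are
placeholders, not anything from a real driver):

#include <linux/pci.h>
#include <linux/errno.h>

static int dma_sketch(struct pci_dev *pdev, void *buf, size_t len)
{
        void *cpu_addr;
        dma_addr_t dma_handle, bus_addr;

        /* long-lived, coherent buffer: the CPU uses cpu_addr, the
           device is programmed with dma_handle */
        cpu_addr = pci_alloc_consistent(pdev, len, &dma_handle);
        if (!cpu_addr)
                return -ENOMEM;
        pci_free_consistent(pdev, len, cpu_addr, dma_handle);

        /* one-shot ("streaming") mapping of an existing buffer;
           the hardware DMAs from bus_addr */
        bus_addr = pci_map_single(pdev, buf, len, PCI_DMA_TODEVICE);
        pci_unmap_single(pdev, bus_addr, len, PCI_DMA_TODEVICE);

        return 0;
}

The point is that the driver never computes a bus address itself; the
mapping calls do the virt_to_bus-equivalent work portably.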
--
Jeff Garzik | May you have warm words on a cold evening,
Building 1024 | a full moon on a dark night,
MandrakeSoft | and a smooth road all the way to your door.
* Re: kmalloc with GFP_DMA, or get_free_pages!!!
2001-03-20 19:11 Jalajadevi Ganapathy
@ 2001-03-20 19:56 ` Jeff Garzik
0 siblings, 0 replies; 4+ messages in thread
From: Jeff Garzik @ 2001-03-20 19:56 UTC (permalink / raw)
To: Jalajadevi Ganapathy; +Cc: Linux-MM
Jalajadevi Ganapathy wrote:
>
> To allocate memory for DMA operations,
Use PCI DMA. Yes, even for ISA devices. Read
Documentation/DMA-mapping.txt.
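
A minimal sketch of what that looks like (assuming the 2.4 API from that
file; as far as I recall you pass a NULL struct pci_dev * for non-PCI
devices such as ISA, and the 4096-byte size is just an example):

#include <linux/pci.h>
#include <linux/errno.h>

static void *isa_buf;
static dma_addr_t isa_handle;

static int isa_dma_setup(void)
{
        /* NULL pci_dev means "not a PCI device" (e.g. ISA); the
           buffer comes back DMA-able and coherent for that bus */
        isa_buf = pci_alloc_consistent(NULL, 4096, &isa_handle);
        if (!isa_buf)
                return -ENOMEM;
        /* ... program the controller with isa_handle ... */
        return 0;
}

static void isa_dma_teardown(void)
{
        pci_free_consistent(NULL, 4096, isa_buf, isa_handle);
}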
--
Jeff Garzik | May you have warm words on a cold evening,
Building 1024 | a full moon on a dark night,
MandrakeSoft | and a smooth road all the way to your door.
* kmalloc with GFP_DMA, or get_free_pages!!!
@ 2001-03-20 19:11 Jalajadevi Ganapathy
2001-03-20 19:56 ` Jeff Garzik
0 siblings, 1 reply; 4+ messages in thread
From: Jalajadevi Ganapathy @ 2001-03-20 19:11 UTC (permalink / raw)
To: Linux-MM
To allocate memory for DMA operations, I can use kmalloc(size, GFP_DMA). In
which cases do I need to use get_free_pages instead? Both of the above give
me physically contiguous memory. I understand that get_free_pages allocates
whole pages, so it may return more memory than I asked for. I am not sure
what the exact difference between these two is.
Could anyone please let me know?
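
For concreteness, this is roughly what I am comparing (just a sketch;
the size and the page order below are arbitrary examples):

#include <linux/slab.h>
#include <linux/mm.h>

static void compare_allocs(void)
{
        void *buf;
        unsigned long pages;

        /* physically contiguous memory from the DMA zone, rounded
           up to the next slab size */
        buf = kmalloc(1536, GFP_KERNEL | GFP_DMA);
        if (buf)
                kfree(buf);

        /* whole pages only: order 1 means 2 contiguous pages */
        pages = __get_free_pages(GFP_KERNEL | GFP_DMA, 1);
        if (pages)
                free_pages(pages, 1);
}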
Thanks
Jalaja