* [PATCH] add PF_MEMALLOC to __alloc_pages()
From: Rik van Riel @ 2001-01-03 15:03 UTC
To: Linus Torvalds; +Cc: Alan Cox, Mike Galbraith, linux-kernel, linux-mm
Hi Linus, Alan, Mike,
the following patch sets PF_MEMALLOC for the current task
in __alloc_pages() to avoid infinite recursion when we try
to free memory from __alloc_pages().
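For context, the reason the flag breaks the recursion is that, roughly speaking, __alloc_pages() tests current->flags & PF_MEMALLOC and, when it is set, skips the blocking/reclaim paths and falls back to the reserved pages instead of re-entering reclaim. A minimal standalone sketch of that guard pattern (not kernel code; every name below is invented for the illustration):

/*
 * Standalone toy illustration (not kernel code) of the guard pattern:
 * a per-task flag lets the allocator notice that it has been re-entered
 * from its own reclaim path and fall back to a reserve instead of
 * recursing forever.  All names are invented for this demo.
 */
#include <stdio.h>

#define PF_MEMALLOC 0x1

static unsigned int task_flags;		/* stands in for current->flags  */
static int free_pages = 0;		/* normal pool, empty on purpose */
static int reserve_pages = 4;		/* emergency pool                */

static int alloc_page(void);

/* Reclaim needs a scratch page itself, so it re-enters the allocator. */
static void toy_try_to_free_pages(void)
{
	printf("reclaim: need a scratch page\n");
	alloc_page();			/* re-entry happens here */
	free_pages += 2;		/* pretend reclaim freed something */
}

static int alloc_page(void)
{
	if (free_pages > 0) {
		free_pages--;
		return 1;
	}

	if (task_flags & PF_MEMALLOC) {
		/* Already inside reclaim: dip into the reserve, don't recurse. */
		printf("allocator: re-entered from reclaim, using reserve\n");
		if (reserve_pages > 0) {
			reserve_pages--;
			return 1;
		}
		return 0;		/* really out of memory */
	}

	task_flags |= PF_MEMALLOC;
	toy_try_to_free_pages();
	task_flags &= ~PF_MEMALLOC;

	return alloc_page();		/* retry now that reclaim ran */
}

int main(void)
{
	printf("got page: %d\n", alloc_page());
	return 0;
}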
Please apply the patch below, which fixes this (embarrassing)
bug...
regards,
Rik
--
Hollywood goes for world dumbination,
Trailer at 11.
http://www.surriel.com/
http://www.conectiva.com/ http://distro.conectiva.com.br/
--- linux-2.4.0-prerelease/mm/page_alloc.c.orig	Wed Jan 3 12:52:13 2001
+++ linux-2.4.0-prerelease/mm/page_alloc.c	Wed Jan 3 13:01:19 2001
@@ -427,7 +427,9 @@
 	if (order > 0 && (gfp_mask & __GFP_WAIT)) {
 		zone = zonelist->zones;
 		/* First, clean some dirty pages. */
+		current->flags |= PF_MEMALLOC;
 		page_launder(gfp_mask, 1);
+		current->flags &= ~PF_MEMALLOC;
 		for (;;) {
 			zone_t *z = *(zone++);
 			if (!z)
@@ -475,7 +477,9 @@
 	 * free ourselves...
 	 */
 	} else if (gfp_mask & __GFP_WAIT) {
+		current->flags |= PF_MEMALLOC;
 		try_to_free_pages(gfp_mask);
+		current->flags &= ~PF_MEMALLOC;
 		memory_pressure++;
 		if (!order)
 			goto try_again;
* Re: [PATCH] add PF_MEMALLOC to __alloc_pages()
From: Zlatko Calusic @ 2001-01-03 23:03 UTC
To: Rik van Riel
Cc: Linus Torvalds, Alan Cox, Mike Galbraith, linux-kernel, linux-mm
Rik van Riel <riel@conectiva.com.br> writes:
> Hi Linus, Alan, Mike,
>
> the following patch sets PF_MEMALLOC for the current task
> in __alloc_pages() to avoid infinite recursion when we try
> to free memory from __alloc_pages().
>
> Please apply the patch below, which fixes this (embarrassing)
> bug...
>
[snip]
>  	 * free ourselves...
>  	 */
>  	} else if (gfp_mask & __GFP_WAIT) {
> +		current->flags |= PF_MEMALLOC;
>  		try_to_free_pages(gfp_mask);
> +		current->flags &= ~PF_MEMALLOC;
>  		memory_pressure++;
>  		if (!order)
>  			goto try_again;
>
Hm, try_to_free_pages already sets the PF_MEMALLOC flag!
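From memory, the wrapper in the 2.4.0-prerelease tree looks roughly like this (a sketch reconstructed from memory, not an exact quote of mm/vmscan.c, and the argument list of do_try_to_free_pages() is approximate):

/*
 * Rough shape of try_to_free_pages() in 2.4.0-prerelease, from memory:
 * the flag is already set (and cleared) around the actual reclaim work
 * here, which is why setting it again in __alloc_pages() is redundant.
 */
int try_to_free_pages(unsigned int gfp_mask)
{
	int ret = 1;

	if (gfp_mask & __GFP_WAIT) {
		current->flags |= PF_MEMALLOC;
		ret = do_try_to_free_pages(gfp_mask);	/* argument list approximate */
		current->flags &= ~PF_MEMALLOC;
	}

	return ret;
}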
--
Zlatko
* Re: [PATCH] add PF_MEMALLOC to __alloc_pages()
From: Rik van Riel @ 2001-01-04 13:34 UTC
To: Zlatko Calusic
Cc: Linus Torvalds, Alan Cox, Mike Galbraith, linux-kernel, linux-mm
On 4 Jan 2001, Zlatko Calusic wrote:
> Rik van Riel <riel@conectiva.com.br> writes:
>
> > +		current->flags |= PF_MEMALLOC;
> >  		try_to_free_pages(gfp_mask);
> > +		current->flags &= ~PF_MEMALLOC;
>
> Hm, try_to_free_pages already sets the PF_MEMALLOC flag!
Yes. Linus already pointed out this error to me
yesterday (and his latest tree should be fine).
regards,
Rik
--
Hollywood goes for world dumbination,
Trailer at 11.
http://www.surriel.com/
http://www.conectiva.com/ http://distro.conectiva.com.br/