From: Linus Torvalds <torvalds@transmeta.com>
To: Kanoj Sarcar <kanoj@google.engr.sgi.com>
Cc: Rajagopal Ananthanarayanan <ananth@sgi.com>, linux-mm@kvack.org
Subject: Re: Oops in __free_pages_ok (pre7-1) (Long) (backtrace)
Date: Wed, 3 May 2000 11:17:52 -0700 (PDT)
Message-ID: <Pine.LNX.4.10.10005031110200.6180-100000@penguin.transmeta.com>
In-Reply-To: <200005031731.KAA80944@google.engr.sgi.com>


On Wed, 3 May 2000, Kanoj Sarcar wrote:
> 
> At no point while try_to_swap_out() is running will is_page_shared()
> wrongly indicate that the page is _not shared_ when it really is shared
> (as you say, it is pessimistic). 

Note that this is true only if you assume processor ordering.

With no common locks, a less strictly ordered system (like an alpha) might
see the update of the swap-count _much_ later on the second CPU, so that
is_page_shared() may end up not being pessimistic after all (it could get
the new page count, but the old swap-count, and think that the page is
free to be removed from the swap cache).

This is why not having a shared lock looks like a bug to me, even if that
particular bug might never trigger on an x86.
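
To make the window concrete, here is a minimal sketch of the race. It
is not the actual kernel code; the structure and the sharing check are
simplified down to the ordering hazard itself:

struct pginfo {
	int count;		/* struct page reference count */
	int swap_count;		/* swap map entry use count */
};

/*
 * CPU A (think try_to_swap_out): take the swap reference first, then
 * drop the page reference.  The intermediate state over-counts, which
 * is what makes the check pessimistic on a strongly ordered CPU.
 */
void move_ref_to_swap(struct pginfo *p)
{
	p->swap_count++;
	/* No smp_wmb() here: an alpha may make the count-- below
	 * visible to other CPUs before the swap_count++ above. */
	p->count--;
}

/*
 * CPU B (think is_page_shared): with no smp_rmb() between the two
 * loads, it can observe the new (decremented) count together with
 * the old (not yet incremented) swap_count, conclude the page is
 * unshared, and drop it from the swap cache.
 */
int page_is_shared(const struct pginfo *p)
{
	return p->count + p->swap_count > 1;
}

A lock taken by both paths would forbid that interleaving outright; an
smp_wmb()/smp_rmb() pair would merely order it.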

_Something_ obviously triggers on the x86, though. 

Note that we may be barking up the wrong tree here: it may be a completely
different page mishandling that causes this. For example, one bug in NFS
used to be that it free'd a page that was allocated with "alloc_pages()"
using "free_page()" - which takes the virtual address and only works for
"normal" pages. Now, if you have more than about 960MB of memory and the
allocated page was a highmem page, you may end up freeing the wrong page
due to mixing metaphors, and suddenly the page counts are wrong.
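
The shape of that bug, as a sketch against the 2.3-era page allocator
interfaces (the names are real kernel interfaces, but the snippet is
purely illustrative and skips all error handling):

struct page *page = alloc_pages(GFP_HIGHUSER, 0);

/* WRONG: free_page() wants a kernel virtual address, and a highmem
 * page has no permanent kernel mapping, so page_address() gives
 * nothing useful; some other page's count ends up decremented. */
free_page((unsigned long) page_address(page));

/* RIGHT (instead of the above): free by the struct page, which
 * works for highmem and normal pages alike. */
__free_page(page);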

And with the wrong page counts, the BUG() can/will happen only much later,
because an innocent "__free_page()" ends up doing the BUG(), but the real
offender happened earlier.

We fixed one such bug in NFS. Maybe there are more lurking? How much
memory do the machines that are seeing problems have?

		Linus

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux.eu.org/Linux-MM/
