linux-mm.kvack.org archive mirror
From: Trond Myklebust <trond.myklebust@fys.uio.no>
To: "Juan J. Quintela" <quintela@fi.udc.es>
Cc: linux-mm@kvack.org, Linus Torvalds <torvalds@transmeta.com>,
	linux-kernel@vger.rutgers.edu
Subject: Re: PATCH: rewrite of invalidate_inode_pages
Date: Fri, 12 May 2000 00:56:45 +0200 (CEST)	[thread overview]
Message-ID: <14619.15021.76570.36949@charged.uio.no> (raw)
In-Reply-To: <ytthfc4hfmp.fsf@vexeta.dc.fi.udc.es>

>>>>> " " == Juan J Quintela <quintela@fi.udc.es> writes:

     > (I have removed the locking to clarify the example).  It can be
     > that I am not understanding something obvious, but I think that
     > the old code also invalidates all the pages.

No, it doesn't. If there are locked pages it skips them. In the end we
should find ourselves with a ring of locked pages, so we're doing the
equivalent of this loop:

  curr = head->next;
  while (curr != head) {
      page = list_entry(curr, struct page, list);
      curr = curr->next;
      if (PageLocked(page))
	    continue;
      .... never reached, because all the remaining pages are locked ....
  }


     > new one, frees all the non-locked pages and then sleeps
     > waiting for one page to become unlocked.  The other version, when

This is wrong. The reason is that under NFS, rpciod itself can call
invalidate_inode_pages(). If it sleeps on a locked page, that means
some page IO must be in progress on that page. Who services page IO
under NFS? rpciod.
So we deadlock...
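
To make the failure mode concrete, here is a rough sketch of what the
blocking variant ends up doing when rpciod is the caller (a sketch
only -- not the actual patch; wait_on_page() is just the usual "sleep
until the page is unlocked" primitive):

  while (curr != head) {
      page = list_entry(curr, struct page, list);
      curr = curr->next;
      if (PageLocked(page))
          /* Page IO is in flight.  Under NFS that IO is completed
           * by rpciod -- but rpciod is the process sleeping here,
           * so nobody is left to unlock the page: deadlock. */
          wait_on_page(page);
      .... invalidate the page ....
  }

The loop further up never makes that wait_on_page() call; it just
skips the locked page, and that is exactly the property rpciod relies
on.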

As I said, the whole idea behind invalidate_inode_pages() is to serve
the need of NFS (and of any future filesystems) for non-blocking
invalidation of the page cache.
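
In other words, the contract I want is roughly the following (again
just a sketch, with locking omitted as in the example above; the
helper names are the usual 2.3 page cache ones from memory, so treat
them as illustrative rather than a quote of the source):

  while (curr != head) {
      page = list_entry(curr, struct page, list);
      curr = curr->next;
      if (TryLockPage(page))           /* locked: IO in flight, skip */
          continue;
      if (page_count(page) != 1) {     /* still in use elsewhere, skip */
          UnlockPage(page);
          continue;
      }
      remove_inode_page(page);         /* drop it from the page cache */
      UnlockPage(page);
      page_cache_release(page);
  }

Whatever cannot be invalidated right now is simply left alone; the
caller is never put to sleep.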

Cheers,
  Trond
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux.eu.org/Linux-MM/

Thread overview: 23+ messages
2000-05-11 21:40 Juan J. Quintela
2000-05-11 21:47 ` Linus Torvalds
2000-05-11 21:56   ` Juan J. Quintela
2000-05-11 22:22     ` Linus Torvalds
2000-05-12  1:01       ` Juan J. Quintela
2000-05-12  2:02         ` PATCH: new page_cache_get() (try 2) Juan J. Quintela
2000-05-11 22:22     ` PATCH: rewrite of invalidate_inode_pages Ingo Molnar
2000-05-11 22:34     ` Trond Myklebust
2000-05-11 22:54       ` Juan J. Quintela
2000-05-11 23:17         ` Trond Myklebust
2000-05-11 23:28           ` Juan J. Quintela
2000-05-11 23:55             ` Trond Myklebust
2000-05-12 11:28             ` John Cavan
2000-05-12 11:37               ` Juan J. Quintela
2000-05-12 12:51                 ` Trond Myklebust
2000-05-12 13:21                   ` Arjan van de Ven
2000-05-12 13:35                     ` Trond Myklebust
2000-05-12 17:57                       ` Juan J. Quintela
2000-05-12 13:30                   ` Juan J. Quintela
2000-05-11 22:05   ` Jeff V. Merkey
2000-05-11 22:28 ` Trond Myklebust
2000-05-11 22:43   ` Juan J. Quintela
2000-05-11 22:56     ` Trond Myklebust [this message]
