From: Andrew Morton <akpm@digeo.com>
To: "Martin J. Bligh" <mbligh@aracnet.com>
Cc: Rik van Riel <riel@conectiva.com.br>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	linux-mm mailing list <linux-mm@kvack.org>
Subject: Re: ZONE_NORMAL exhaustion (dcache slab)
Date: Mon, 21 Oct 2002 21:20:26 -0700	[thread overview]
Message-ID: <3DB4D20A.8A579516@digeo.com> (raw)
In-Reply-To: <2622146086.1035233637@[10.10.2.3]>

"Martin J. Bligh" wrote:
> 
> > I cannot make it happen here, either.  2.5.43-mm2 or current devel
> > stuff.  Heisenbug; maybe something broke dcache-rcu?  Or the math
> > overflow (unlikely).
> 
> Dipankar is going to give me some debug code once he's slept for
> a while ... that should help see if dcache-rcu went wacko.

Well if it doesn't happen again...

> >> So it looks as though it's actually ext2_inode cache that's first against the wall.
> >
> > Well that's to be expected.  Each ext2 directory inode has highmem
> > pagecache attached to it, which pins the inode.  There's no highmem
> > eviction pressure so your normal zone gets stuffed full of inodes.
> >
> > There's a fix for this in Andrea's tree, although that's perhaps a
> > bit heavy on inode_lock for 2.5 purposes.  It's a matter of running
> > invalidate_inode_pages() against the inodes as they come off the
> > unused_list.  I haven't got around to it yet.
> 
> Thanks; no urgent problem (though we did seem to have a customer hitting
> a very similar situation very easily in 2.4 ... we'll see if Andrea's
> fixes that, then I'll try to reproduce their problem on current 2.5).
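
For reference, the invalidate_inode_pages()-on-the-unused-list idea above
amounts to roughly the following.  This is only a sketch of the shape of it,
assuming the 2.4-style invalidate_inode_pages(inode) signature and the
inode_unused LRU; it is not the actual code from Andrea's tree, and the
locking here is exactly the "bit heavy on inode_lock" part mentioned above:

	/*
	 * Sketch only: as unused inodes are pruned, toss the (highmem)
	 * pagecache still attached to them so it no longer pins the
	 * lowmem inode.  List handling and inode disposal are elided.
	 */
	static void prune_unused_inodes(int count)
	{
		struct inode *inode;

		spin_lock(&inode_lock);
		while (count-- && !list_empty(&inode_unused)) {
			inode = list_entry(inode_unused.prev, struct inode, i_list);
			if (inode->i_mapping->nrpages)
				invalidate_inode_pages(inode);	/* the added step */
			/* ...then dispose of the inode as prune_icache() would... */
		}
		spin_unlock(&inode_lock);
	}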

Oh it's reproducible OK.  Just run

	make-teeny-files 7 7

against a few filesystems and watch the fun.

http://www.zip.com.au/~akpm/linux/patches/stuff/make-teeny-files.c
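
For anyone without that program handy, the workload is nothing more than a
deep tree of directories stuffed with tiny files.  A minimal stand-in along
these lines will do (my guess is that the two arguments are fan-out and
depth; the real make-teeny-files.c at the URL above may well differ):

	/* Stand-in for make-teeny-files: create "width" subdirectories per
	 * level, "depth" levels deep, with "width" one-byte files at each
	 * level.  Run it from a scratch directory on each filesystem of
	 * interest. */
	#include <stdio.h>
	#include <stdlib.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/stat.h>

	static void populate(const char *dir, int width, int depth)
	{
		char path[4096];
		int i, fd;

		for (i = 0; i < width; i++) {
			snprintf(path, sizeof(path), "%s/f%d", dir, i);
			fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
			if (fd >= 0) {
				write(fd, "x", 1);	/* tiny file: one byte */
				close(fd);
			}
		}
		if (depth == 0)
			return;
		for (i = 0; i < width; i++) {
			snprintf(path, sizeof(path), "%s/d%d", dir, i);
			mkdir(path, 0755);
			populate(path, width, depth - 1);
		}
	}

	int main(int argc, char **argv)
	{
		int width = argc > 1 ? atoi(argv[1]) : 7;
		int depth = argc > 2 ? atoi(argv[2]) : 7;

		populate(".", width, depth);
		return 0;
	}

With arguments of 7 and 7 this sketch creates close to a million directories,
which is more than enough to fill ZONE_NORMAL with pinned directory inodes.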
 
> >> larry:~# egrep '(dentry|inode)' /proc/slabinfo
> >> isofs_inode_cache      0      0    320    0    0    1 :  120   60
> >> ext2_inode_cache  667345 809181    416 89909 89909    1 :  120   60
> >> shmem_inode_cache      3      9    416    1    1    1 :  120   60
> >> sock_inode_cache      16     22    352    2    2    1 :  120   60
> >> proc_inode_cache      12     12    320    1    1    1 :  120   60
> >> inode_cache          385    396    320   33   33    1 :  120   60
> >> dentry_cache      1068289 1131096    160 47129 47129    1 :  248  124
> >
> > OK, so there's reasonable dentry shrinkage there, and the inodes
> > for regular files which have no attached pagecache were reaped.
> > But all the directory inodes are sitting there pinned.
> 
> OK, this all makes a lot of sense ... apart from one thing:
> from looking at meminfo:
> 
> HighTotal:    15335424 kB
> HighFree:     15066160 kB
> 
> Even if every highmem page is pagecache, that's only 67316 pages by
> my reckoning (is pagecache broken out separately in meminfo? both
> Buffers and Cached seem too large). If I only have 67316 pages of
> pagecache, how can I have 667345 inodes with attached pagecache pages?
> Or am I just missing something obvious and fundamental?
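
(The reckoning, for the record: 15335424 - 15066160 = 269264 kB of highmem
in use, and at 4 kB per page that is indeed 67316 pages.)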

Maybe you didn't cat /dev/sda2 for long enough?

You should end up with very little dcache and tons of icache.
Here's what I get:

  ext2_inode_cache:   420248KB   420256KB   99.99
       buffer_head:    40422KB    41648KB   97.5 
      dentry_cache:      667KB    10211KB    6.54
biovec-BIO_MAX_PAGES:      768KB      780KB   98.46

Massive internal fragmentation of the dcache there.  But it takes
a long time.

Generally, I feel that the proportional-shrink on slab is applying
too much pressure when there's not much slab and too little when
there's a lot.  If you have 400 megs of inodes I don't really think
they are likely to be used again soon.

Perhaps we need to multiply the slab cache scanning pressure by the
slab occupancy.  That's simple to do.
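
Something along these lines, purely as a sketch of the idea (the function
and its parameters are made up for illustration, not an existing 2.5
interface):

	/*
	 * Sketch: scale the baseline scanning pressure for a slab cache by
	 * the fraction of the zone its pages currently occupy, so a cache
	 * sitting on 400MB of ZONE_NORMAL gets leaned on much harder than
	 * one holding a few hundred kilobytes.
	 */
	static int scaled_slab_pressure(unsigned long cache_pages,
					unsigned long zone_pages,
					int base_scan)
	{
		/* occupancy of the zone, in 1/1024ths */
		unsigned long occupancy = (cache_pages << 10) / zone_pages;

		/* multiply the baseline pressure by the occupancy fraction */
		return (int)((base_scan * occupancy) >> 10);
	}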

