* Comments on shmfs-0.1.010
@ 1998-07-16 22:03 Zlatko Calusic
1998-07-18 0:50 ` Eric W. Biederman
0 siblings, 1 reply; 4+ messages in thread
From: Zlatko Calusic @ 1998-07-16 22:03 UTC (permalink / raw)
To: linux-mm; +Cc: Eric Biederman
Hi!
Today, I finally found some time to play with shmfs and I must admit
that I'm astonished with the results!
After some trouble with patching (lots of conflicts which had to be
resolved manually), to my complete surprise, shmfs proved to be quite
stable and reliable.
I found these messages in logs (after every boot):
swap_after_unlock_page: lock already cleared
Adding Swap: 128988k swap-space (priority 0)
swap_after_unlock_page: lock already cleared
Adding Swap: 128484k swap-space (priority 0)
and lots of these:
Jul 16 22:50:42 atlas kernel: write_page: called on a clean page!
Jul 16 22:51:16 atlas last message repeated 612 times
Jul 16 22:51:29 atlas last message repeated 463 times
Jul 16 22:51:29 atlas kernel: kmalloc: Size (131076) too large
Jul 16 22:51:30 atlas kernel: write_page: called on a clean page!
Jul 16 22:51:30 atlas last message repeated 10 times
Jul 16 22:51:30 atlas kernel: kmalloc: Size (135172) too large
Jul 16 22:51:30 atlas kernel: write_page: called on a clean page!
Jul 16 22:51:30 atlas last message repeated 9 times
Jul 16 22:51:31 atlas kernel: kmalloc: Size (139268) too large
etc...
But other than that, the machine didn't crash, and shmfs is happily
running right now, while I'm writing this. :)
I decided to comment out those "write_page..." messages, recompile the
kernel, and finally do some benchmarking:
2.1.108 + shmfs:
-------Sequential Output-------- ---Sequential Input-- --Random--
-Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
100 2611 90.7 3924 86.2 3201 13.3 4763 61.4 6736 24.4 143.7 4.0
Then I decided to apply my patch, which removes page aging etc...
(already sent to this list):
-------Sequential Output-------- ---Sequential Input-- --Random--
-Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
100 3023 99.5 4343 99.1 6342 26.3 7819 98.4 17860 64.0 156.4 3.6
^^^^^ ^^^^^
The final result is great (almost 18 MB/s; I never saw such a big
number in bonnie :)).
The last experiment I did was to put an entry in /etc/fstab so that
shmfs gets mounted on /tmp at boot time. That indeed worked, but
unfortunately, X (or maybe fvwm?) refused to work after that change,
for an unknown reason (nothing in the logs).
And that's it.
In the end, relevant info about my setup:
P166MMX, 64MB RAM
hda: WDC AC22000L, ATA DISK drive
sda: FUJITSU Model: M2954ESP SUN4.2G Rev: 2545 (aic7xxx)
shmfs /shm shmfs defaults 0 0
/dev/hda1 none swap sw,pri=0 0 0
/dev/sda1 none swap sw,pri=0 0 0
Really good work, Eric!
I hope your code gets into the official kernel as soon as possible.
Regards,
--
Posted by Zlatko Calusic E-mail: <Zlatko.Calusic@CARNet.hr>
---------------------------------------------------------------------
Any sufficiently advanced bug is indistinguishable from a feature.
--
This is a majordomo managed list. To unsubscribe, send a message with
the body 'unsubscribe linux-mm me@address' to: majordomo@kvack.org
* Re: Comments on shmfs-0.1.010
1998-07-16 22:03 Comments on shmfs-0.1.010 Zlatko Calusic
@ 1998-07-18 0:50 ` Eric W. Biederman
1998-07-18 12:59 ` Zlatko Calusic
0 siblings, 1 reply; 4+ messages in thread
From: Eric W. Biederman @ 1998-07-18 0:50 UTC (permalink / raw)
To: Zlatko.Calusic; +Cc: linux-mm
>>>>> "ZC" == Zlatko Calusic <Zlatko.Calusic@CARNet.hr> writes:
ZC> Hi!
ZC> Today, I finally found some time to play with shmfs and I must admit
ZC> that I'm astonished with the results!
ZC> After some trouble with patching (lots of conflicts which had to be
ZC> resolved manually), to my complete surprise, shmfs proved to be quite
ZC> stable and reliable.
ZC> I found these messages in logs (after every boot):
ZC> swap_after_unlock_page: lock already cleared
ZC> Adding Swap: 128988k swap-space (priority 0)
ZC> swap_after_unlock_page: lock already cleared
ZC> Adding Swap: 128484k swap-space (priority 0)
This is a normal case and does no harm.
I think normal 2.1.101 should cause it too.
It's simply a result of swapping while adding swap.
ZC> and lots of these:
ZC> Jul 16 22:50:42 atlas kernel: write_page: called on a clean page!
ZC> Jul 16 22:51:16 atlas last message repeated 612 times
ZC> Jul 16 22:51:29 atlas last message repeated 463 times
ZC> Jul 16 22:51:29 atlas kernel: kmalloc: Size (131076) too large
ZC> Jul 16 22:51:30 atlas kernel: write_page: called on a clean page!
ZC> Jul 16 22:51:30 atlas last message repeated 10 times
ZC> Jul 16 22:51:30 atlas kernel: kmalloc: Size (135172) too large
ZC> Jul 16 22:51:30 atlas kernel: write_page: called on a clean page!
ZC> Jul 16 22:51:30 atlas last message repeated 9 times
ZC> Jul 16 22:51:31 atlas kernel: kmalloc: Size (139268) too large
ZC> etc...
A debugging message for a case I didn't realize was common!
I haven't had a chance to update it yet.
The kmalloc is a little worrisome though.
Are you creating really large files in shmfs?
ZC> But other than that, the machine didn't crash, and shmfs is happily
ZC> running right now, while I'm writing this. :)
ZC> I decided to comment out those "write_page..." messages, recompile the
ZC> kernel, and finally do some benchmarking:
ZC> 2.1.108 + shmfs:
ZC> -------Sequential Output-------- ---Sequential Input-- --Random--
ZC> -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
ZC> Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
ZC> 100 2611 90.7 3924 86.2 3201 13.3 4763 61.4 6736 24.4 143.7 4.0
ZC> Then I decided to apply my patch, which removes page aging etc...
ZC> (already sent to this list):
ZC> -------Sequential Output-------- ---Sequential Input-- --Random--
ZC> -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
ZC> Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
ZC> 100 3023 99.5 4343 99.1 6342 26.3 7819 98.4 17860 64.0 156.4 3.6
ZC> ^^^^^ ^^^^^
ZC> The final result is great (almost 18 MB/s; I never saw such a big
ZC> number in bonnie :)).
I'm a little worried by the slow output that uses huge chunks of cpu time.
But it looks like I wrote my block allocation algorithm properly.
I have a lot of tuning options that can influence things, primarily
because it is development code and I'm not sure what the best approach
is. Did you change any of them from their default?
ZC> The last experiment I did was to put an entry in /etc/fstab so that
ZC> shmfs gets mounted on /tmp at boot time. That indeed worked, but
ZC> unfortunately, X (or maybe fvwm?) refused to work after that change,
ZC> for an unknown reason (nothing in the logs).
Look at the permissions on /tmp. By default only root can write to shmfs...
I should probably implement uid/gid options to set the permissions of
the root directory, but I haven't done that yet.
ZC> P166MMX, 64MB RAM
ZC> hda: WDC AC22000L, ATA DISK drive
ZC> sda: FUJITSU Model: M2954ESP SUN4.2G Rev: 2545 (aic7xxx)
ZC> shmfs /shm shmfs defaults 0 0
ZC> /dev/hda1 none swap sw,pri=0 0 0
ZC> /dev/sda1 none swap sw,pri=0 0 0
Interesting. If I read this correctly, you might have been getting
parallel RAID-type read performance off of your two disks on the
block read test.
ZC> Really good work, Eric!
ZC> I hope your code gets into the official kernel as soon as possible.
Thanks for the encouragement, but until I equal or better ext2 in all
marks, the work's not done :)
Eric
* Re: Comments on shmfs-0.1.010
1998-07-18 0:50 ` Eric W. Biederman
@ 1998-07-18 12:59 ` Zlatko Calusic
1998-07-18 16:03 ` Eric W. Biederman
0 siblings, 1 reply; 4+ messages in thread
From: Zlatko Calusic @ 1998-07-18 12:59 UTC (permalink / raw)
To: Eric W. Biederman; +Cc: linux-mm
ebiederm+eric@npwt.net (Eric W. Biederman) writes:
> >>>>> "ZC" == Zlatko Calusic <Zlatko.Calusic@CARNet.hr> writes:
>
> ZC> Hi!
> ZC> Today, I finally found some time to play with shmfs and I must admit
> ZC> that I'm astonished with the results!
>
> ZC> After some trouble with patching (lots of conflicts which had to be
> ZC> resolved manually), to my complete surprise, shmfs proved to be quite
> ZC> stable and reliable.
>
> ZC> I found these messages in logs (after every boot):
>
> ZC> swap_after_unlock_page: lock already cleared
> ZC> Adding Swap: 128988k swap-space (priority 0)
> ZC> swap_after_unlock_page: lock already cleared
> ZC> Adding Swap: 128484k swap-space (priority 0)
>
> This is a normal case and does no harm.
> I think normal 2.1.101 should cause it too.
> It's simply a result of swapping while adding swap.
Well, it looks like it's harmless. I don't know why. :)
>
> ZC> and lots of these:
>
> ZC> Jul 16 22:50:42 atlas kernel: write_page: called on a clean page!
> ZC> Jul 16 22:51:16 atlas last message repeated 612 times
> ZC> Jul 16 22:51:29 atlas last message repeated 463 times
> ZC> Jul 16 22:51:29 atlas kernel: kmalloc: Size (131076) too large
> ZC> Jul 16 22:51:30 atlas kernel: write_page: called on a clean page!
> ZC> Jul 16 22:51:30 atlas last message repeated 10 times
> ZC> Jul 16 22:51:30 atlas kernel: kmalloc: Size (135172) too large
> ZC> Jul 16 22:51:30 atlas kernel: write_page: called on a clean page!
> ZC> Jul 16 22:51:30 atlas last message repeated 9 times
> ZC> Jul 16 22:51:31 atlas kernel: kmalloc: Size (139268) too large
> ZC> etc...
>
> A debugging message for a case I didn't realize was common!
> I haven't had a chance to update it yet.
>
> The kmalloc is a little worrisome though.
> Are you creating really large files in shmfs?
Yes, I was creating a very big file to test some things.
But after I applied my patch, I never saw those kmalloc messages?!
>
> ZC> But other than that, the machine didn't crash, and shmfs is happily
> ZC> running right now, while I'm writing this. :)
>
> ZC> I decided to comment out those "write_page..." messages, recompile the
> ZC> kernel, and finally do some benchmarking:
>
> ZC> 2.1.108 + shmfs:
>
> ZC> -------Sequential Output-------- ---Sequential Input-- --Random--
> ZC> -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
> ZC> Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
> ZC> 100 2611 90.7 3924 86.2 3201 13.3 4763 61.4 6736 24.4 143.7 4.0
>
> ZC> Then I decided to apply my patch, which removes page aging etc...
> ZC> (already sent to this list):
>
> ZC> -------Sequential Output-------- ---Sequential Input-- --Random--
> ZC> -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
> ZC> Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
> ZC> 100 3023 99.5 4343 99.1 6342 26.3 7819 98.4 17860 64.0 156.4 3.6
> ZC> ^^^^^ ^^^^^
> ZC> The final result is great (almost 18 MB/s; I never saw such a big
> ZC> number in bonnie :)).
>
> I'm a little worried by the slow output that uses huge chunks of cpu time.
> But it looks like I wrote my block allocation algorithm properly.
>
> I have a lot of tuning options that can influence things, primarily
> because it is development code and I'm not sure what the best approach
> is. Did you change any of them from their default?
>
Unfortunately not. Time for experimenting ran out. :(
> ZC> The last experiment I did was to put an entry in /etc/fstab so that
> ZC> shmfs gets mounted on /tmp at boot time. That indeed worked, but
> ZC> unfortunately, X (or maybe fvwm?) refused to work after that change,
> ZC> for an unknown reason (nothing in the logs).
>
> Look at the permissions on /tmp. By default only root can write to shmfs...
> I should probably implement uid/gid options to set the permissions of
> the root directory, but I haven't done that yet.
>
Now that you say it, the problem was probably just that. Trivial. :)
And since fvwm writes some stupid temp file, everything is
obvious now.
> ZC> P166MMX, 64MB RAM
> ZC> hda: WDC AC22000L, ATA DISK drive
> ZC> sda: FUJITSU Model: M2954ESP SUN4.2G Rev: 2545 (aic7xxx)
>
> ZC> shmfs /shm shmfs defaults 0 0
> ZC> /dev/hda1 none swap sw,pri=0 0 0
> ZC> /dev/sda1 none swap sw,pri=0 0 0
>
> Interesting. If I read this correctly, you might have been getting
> parallel RAID-type read performance off of your two disks on the
> block read test.
>
> ZC> Really good work, Eric!
> ZC> I hope your code gets into the official kernel as soon as possible.
>
> Thanks for the encouragement, but until I equal or better ext2 in all
> marks, the work's not done :)
>
Yesterday I tried to copy linux tree to /shm and got these errors:
Jul 17 18:57:10 atlas kernel: shmfs: No more inodes!
Jul 17 18:57:10 atlas last message repeated 3 times
Jul 17 18:57:10 atlas kernel: shmfs_mkdir: shmfs_new_inode failed
Jul 17 18:57:10 atlas kernel: shmfs: No more inodes!
Jul 17 18:57:10 atlas last message repeated 2 times
Jul 17 18:57:10 atlas kernel: shmfs_mkdir: shmfs_new_inode failed
Jul 17 18:57:10 atlas kernel: shmfs: No more inodes!
Jul 17 18:57:10 atlas kernel: shmfs_mkdir: shmfs_new_inode failed
Jul 17 18:57:10 atlas kernel: shmfs: No more inodes!
Jul 17 18:57:10 atlas kernel: shmfs: No more inodes!
Jul 17 18:57:10 atlas kernel: shmfs_mkdir: shmfs_new_inode failed
...
The tree has around 4200 files (which is slightly more than the inode
limit!). The last few files didn't get copied.
Regards,
--
Posted by Zlatko Calusic E-mail: <Zlatko.Calusic@CARNet.hr>
---------------------------------------------------------------------
I'm a nobody, nobody is perfect, therefore I'm perfect.
* Re: Comments on shmfs-0.1.010
1998-07-18 12:59 ` Zlatko Calusic
@ 1998-07-18 16:03 ` Eric W. Biederman
0 siblings, 0 replies; 4+ messages in thread
From: Eric W. Biederman @ 1998-07-18 16:03 UTC (permalink / raw)
To: Zlatko.Calusic; +Cc: linux-mm
>>>>> "ZC" == Zlatko Calusic <Zlatko.Calusic@CARNet.hr> writes:
>> This is a normal case and does no harm.
>> I think normal 2.1.101 should cause it too.
>> It's simply a result of swapping while adding swap.
ZC> Well, it looks like it's harmless. I don't know why. :)
In that case it is harmless because it is reading the first page of
swap onto the swap lock! And since there are no races there the lock
isn't needed.
>> Are you creating really large files in shmfs?
ZC> Yes, I was creating a very big file to test some things.
ZC> But after I applied my patch, I never saw those kmalloc messages?!
Currently all of the pointers to file blocks are allocated in kernel
memory, so really big files might cause that. I haven't seen those
messages myself, so I haven't a clue.
ZC> Unfortunately not. Time for experimenting ran out. :(
Well, that at least tells me which options were used to get those
performance marks.
ZC> Yesterday I tried to copy linux tree to /shm and got these errors:
ZC> The tree has around 4200 files (which is slightly more than the inode
ZC> limit!). The last few files didn't get copied.
The story is that I allocate a fixed number of inodes to shmfs at mount
time, and then when I need one I look through those structures for one
that is unused. That is fine for testing my kernel patch, but in the
long run it is a problem. The temporary workaround is to do:
mount -t shmfs -o inodes=10240 none /tmp
Anything less than 65535 should be legal.
The raw development version has a fix for this, and for a few other
things that I allocate in kernel memory, but it isn't stable yet. I'm
using the stable code to create my kernel patches.
Eric