I got some replies to my summary, which were quite useful.
RESULT: newfs the filesystem with a smaller block size (e.g. -b 4096, although
sun4u can currently only use 8192-byte blocks) and a smaller fragment size
(-f 512).
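For reference, the newfs invocation would look something like this (the slice
name is only a placeholder; newfs destroys the existing filesystem, so dump it
first):

   newfs -b 4096 -f 512 /dev/rdsk/c0t1d0s6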
I haven't tested it myself (the user has since removed most of his files), but
the information should end up in the archive anyway. I have also included
Casper's reply (example newfs options) and Karl's (Berkeley DB approach) below ...
Thanks again to
Casper Dik <casper@holland.Sun.COM>
btirg@ui.uis.doleta.gov (Roland Grefer)
David Thorburn-Gundlach <david@bae.uga.edu>
"Karl E. Vogel" <vogelke@c17mis.region2.wpafb.af.mil>
davem@fdgroup.co.uk (David Mitchell)
"Burelbach, Jonathan" <JBurelbach@feddata.com>
Markus.
From: Casper Dik <casper@holland.Sun.COM>
> When you have many small files, fragmentation is a problem, but not
> one that's fixable using dump/restore.
> The best way to deal with that is either changing the storage format
> or dump and then *newfs* with a smaller fragment and smaller block
> size. (1K fragments / 8K blocks are the default; you could use
> 512-byte fragments / 4K blocks. Unfortunately, such filesystems are
> not mountable on Ultras, something Sun should fix.)
> Use fastfs when restoring such a filesystem or it will take forever.
> (ftp.wins.uva.nl:/pub/solaris/fastfs.c.gz)
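Putting Casper's suggestion together, the whole procedure might look roughly
like this (device names and paths are placeholders; as far as I know fastfs
takes the mount point and fast/slow as arguments):

   # dump the old filesystem somewhere safe
   ufsdump 0f /var/tmp/home.dump /dev/rdsk/c0t1d0s7

   # rebuild it with 4K blocks and 512-byte fragments, then mount it
   newfs -b 4096 -f 512 /dev/rdsk/c0t1d0s7
   mount /dev/dsk/c0t1d0s7 /home

   # switch to async mode for the restore, then back to normal
   fastfs /home fast
   cd /home && ufsrestore rf /var/tmp/home.dump
   fastfs /home slow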
>From "Karl E. Vogel" <vogelke@c17mis.region2.wpafb.af.mil>
> You might be able to use the Berkeley DB routines to set up
> fixed-length files in such a way as to avoid the fragmentation. For
> example:
> Any file shorter than 128 bytes --> pad to 128 bytes and append to
> 000128.dat
> Any file between 129-256 bytes --> pad to 256 bytes and append to
> 000256.dat
> For a large enough collection of files, you would be dropping the
> fragmentation size of the system from 1K down to approximately 128
> bytes, as well as freeing up a bunch of inodes.
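Just to make the padding idea concrete, something like the following would pad
a small file with NULs to a 128-byte record and append it to the corresponding
bucket file (file names are only examples; Karl's suggestion would use the DB
routines to index the records):

   # pad to a 128-byte record (conv=sync NUL-pads the short block)
   # and append it to the 128-byte bucket file
   dd if=smallfile bs=128 conv=sync >> 000128.dat 2>/dev/null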
--
[ fvd@ira.uka.de | weber@crd.ge.com  IAKS Uni KA ]
[ University of Karlsruhe, Markus Weber, Parkstr. 17, 76131 Karlsruhe, Germany ]