Dynamic allocation only saves memory when your program executes for a short
period of time, or your program uses memory in such a way that memory
fragmentation is a non-issue (all your allocations are the same size,
or you're double-indirecting everything and running a compactor
periodically). Among other things, every long-running X program is a
perfect example of how to fragment your memory rather horribly.
(side point: sure, if you have an entire memory page which is unused,
your kernel will probably leave it in swap space, but when you've
got fragmented memory, there's a good chance that a little 8- or 16-
or maybe 64-byte block in the middle of that page is still being used
regularly, so the whole page stays resident)
Dynamically allocating a large number of objects with very different sizes
is _very_ bad in terms of memory fragmentation. Many people solve that
problem by dynamically allocating all their strings in fixed sizes or
size increments. But in such a case, all the memory savings you might
have won by using dynamic allocation over static buffers are lost.
When you consider that many (most?) malloc() implementations maintain
pools of power-of-two sized blocks and simply manage those instead of
trying to deal with blocks of any size, we're back in precisely the same
position of wasting memory.
Now that the memory-saving argument is weakened, the static buffers look
even more attractive to a programmer, in spite of their problems for
security and program reliability.
And, as I'm sure we've all shuddered to hear time and time again: "But this
program isn't system critical, so we don't need to go to all that effort to
make it robust."
And after all: who would've thought that sort was system critical?
-- Jon Paul Nollmann ne' Darren Senn  sinster@darkwater.com
   Unsolicited commercial email will be archived at $1/byte/day.
   Wasurenaide -- doko e itte mo soko ni iru yo.
   (Don't forget -- wherever you go, there you are.)