I wrote this article a few months ago. I'm glad it's helped to trigger
this discussion. The brevity of the article was driven by the desire
to make the general USENIX subscribership aware of the problem and get
them thinking about it, while talking about it in a manner that didn't
require stack diagrams and kernel details. That did make it a
bit short.
Before I delve into the rest of this, I'd like to explain that my approach to
security problems tends to reflect a 'defense-in-depth' view: no one
method will fix 100% of any class of security problems. There is no silver
bullet. However, assuming that a modification does not have any significant
impact other than making it harder to breach security, I'm in favor of making
the modification.
nate <nate@millcomm.com> said:
> 1. 'you gotta change the code'
> This one is obvious; people must change their SUID programs'
> source code to avoid nasty things like gets() sprintf() strcat() and
> strcpy() using things like fgets() strncat() strncpy() as substitutes.
One approach to this problem is the way BSDI handles it: a program
compiled with these functions still works, but also complains to stderr
that the program uses 'risky' functions (I don't know the precise
wording here; I don't have a BSDI machine around to test on). I believe
you can suppress the complaint with an environment variable. I like this
approach because it encourages better programming behavior while breaking
very little, if anything.
nate <nate@millcomm.com> said:
> 2. 'hmm. what if you change the compiler?'
A more general form of the question is, "What if you change the language?"
Adding bounds checking to your C compiler is a good approach; if nothing
else, you can use it only in environments where security needs greater
emphasis. I'd love to build firewalls where every program is built with
a compiler like that. It wouldn't make them foolproof, but it would add
another layer of security in an environment where the speed/security
tradeoff is tilted strongly in the security direction.
For a couple of years now I've been discussing alternatives to C for
security-oriented programming (Ada mostly, and more recently Java) because
as I say in the ;login: column, C has too many traps for the unwary. It's
_too easy_ to write insecure C code.
Marcus Ranum said this better on the Firewalls mailing list:
Message-id: <199703121435.JAA05164@mail.clark.net>
mjr@clark.net said:
> The most important part of many jobs is choosing the right tools. A
> terrific programmer can write secure code in assembler. A good
> programmer can write secure code in C. The bad news is that there are
> a lot of mediocre programmers and a lot of them are writing the next
> Killer Internet App or Online Commerce thing we're going to have to
> deal with.
>
> ADA was an attempt to make a language that prevented bad programmers
> from writing bad code. That was a bad idea. What we need are languages
> that make it easy for bad programmers to write correct code. And we
> need to stop insisting that all our programmers be
> psycho-detail-oriented wizards capable of doing all the things C
> requires.
>
> You can open walnuts with a chainsaw. You can open walnuts with a car,
> or a high-powered rifle, too. But, with a nutcracker you don't need to
> be an expert and you can get the job done quickly and cleanly.
nate <nate@millcomm.com> said:
> 3. 'ok, what about the CPU/OS kernel stack exec permission?'
Again, this is far from a solve-everything solution. But I prefer a
partial solution to what we have now: lots of applications that are
_easily_ exploitable.
Tim Newsham <newsham@ALOHA.NET> points out in his reply that just removing
the exec bits on the stack doesn't prevent exploits; that a cleverly
constructed exploit could point the return address at existing code within
the application that does something that a malicious person would like
executed, for example an exec() call with different arguments. I agree
in theory; however, to carry this out successfully you'd need to
determine the address of the stack exactly, in order to place the
new arguments and supply pointers to them for the altered call. The
current exploits do not have to be so exact, as they can execute their
own code to calculate argument pointers. A possible counter to this
attack is simply to place the stack at a random location at program
startup. Even with the source to the program available, the exploit
would succeed only by random chance. In the meantime the attacker would
generate a lot of SEGVs.
At least by removing the exec bit on the stack you've eliminated a class
of security exploits that is rapidly reaching the "grep source for
insecure calls, check which environment variables/program arguments are
trusted, write exploit in 10 minutes" stage.
This situation we find ourselves in is not really a C problem, or an OS
problem, or a CPU problem, but partly all three. We wouldn't have a
problem here if C did bounds-checking all over (or things were written
in another language that does). We wouldn't have a problem if the only
way to put a return address on the CPU's stack were to run a CALL instruction
(or have ring-0 equivalent access, I wouldn't mind that). The Java
Virtual Machine is like that. We wouldn't have a problem if the OS enforced
a "stack window" (on a subroutine call, change the selector of the stack so it
has a new bottom limit, one that disallows writing to the return address
placed there).
Since the problem is entwined with the CPU design, some of these aren't
possible for a given CPU architecture.
Further, all of these present some kind of performance hit. (My guess is
that the "stack window" method is the most desirable in terms of not
breaking code, but it increases the subroutine-call performance
penalty, perhaps by too much. And you have to trap user-mode subroutine
calls/returns in the kernel. Yuck.) I guess what I really want is a CPU/OS
combination that did this transparently, so that we could continue running
our buggy C/Pascal/Basic/Java/whatever programs. A good bounds-checking C
compiler would be a stopgap measure (but a desirable one).
I'm hoping someone thinks of a clever idea to fix this problem permanently.
-- Shawn Instenes, Silicon Forest Consultants: shawni@siforest.com