Re: Buffer Overflows: A Summary

Lamont Granquist (lamontg@HITL.WASHINGTON.EDU)
Sat, 03 May 1997 05:38:59 -0700

On Fri, 2 May 1997, Tommy Marcus McGuire wrote:
> Oddly enough, we had a talk here in the CS department earlier this
> week by Mootaz Elnozahy from Carnegie Mellon who suggested the idea of
> writing a system call pattern associated with a security sensitive
> program. The pattern would specify which calls would be used, with
> what arguments, and in what order, etc. The kernel could check the
> program's execution, and if the kernel detects a problem, it drops the
> program into a secure mode where the attacker continues to get
> responses like the attack is succeeding, but can't actually do any
> damage.

I was thinking you could just have the compiler enable only those function
calls that the program legitimately uses. It should be possible (with some
difficulty, but by no means insurmountable) to get it to the point where
all you need to do is pass the compiler a flag and it would spit out a
"secure" binary (or even have this be the default, with an option to turn
it off if you needed speed).

Making it go into a mode where it appears that the attack is succeeding
seems to me very difficult, and I'm not certain it is a good idea. I
don't like the whole philosophy of letting it "appear" that
your machine has been hacked into. Personally, I'd rather just have a
hardened machine that would cause hackers to go somewhere else, rather
than risk even a chance of irritating a hacker to the point where they
would decide to waste my time trying to "prove something" to me (like,
say, launching a denial of service attack from some untouchable machine in
the Czech Republic or something...). Ideally, I'd just like to be
notified of the event, notified with as much information as possible about
who was probably doing it, and then have the machine deny them access. As
an example, I've turned off the option to send a fake /etc/passwd in
response to hostile cgi-bin/phf probes. That way, our machine doesn't
attract any attention, but we get notification and can contact the
sysadmin of the site which originated the request (which in one case was
in fact a Linux box identifying itself as being in the Czech Republic, go
figure...).

And for security professionals who are attempting to "trap" a hacker, I
don't see why you don't just let them hack into a Unix box for real and
attempt to track them down. Why go through an extraordinary amount of
trouble to try to "fake" a breach of security when you can accomplish the
same results by using a throw-away machine? And people who don't have the
resources for a throw-away machine probably don't have the resources
to be trying to track people down this way (although dreams of busting the
next Mitnick and getting a book deal probably make quite a few sysads try
stupid stuff like this...)

> A neat idea, although I don't know how practical it would be.

Yeah. It would just mean that buffer overflow scripts would have to be
custom tailored to each program and use a bit more finesse (although
standard buffer overflow exploits using basic file I/O would probably
become common...). I think the time would be better spent removing
privileged code, and making the privileged code more bug-free...

--
Lamont Granquist <lamontg@hitl.washington.edu> (206)616-1469 fax:(206)543-5380
Human Interface Technology Lab.  University of Washington.  Seattle, WA
PGP pubkey: finger lamontg@near.hitl.washington.edu