strcpy versus strncpy

Morten Welinder (terra@DIKU.DK)
Tue, 03 Mar 1998 01:31:24 +0100

A recent article on BugTraq suggested that using strcpy should
almost always be considered a bug. That's not right. It is,
in fact, the wrong way around: strncpy is almost always a bug.

True, strncpy will avoid buffer overruns, but that only proves
that strncpy is better than incorrect use of strcpy. The problem
is that such use of strncpy can introduce problems of its own:

1. Different parts of a program can interpret the same string as
different file names because they truncate it at different lengths.
(Since a program might pass a file name on to a subprocess, the
parts that truncate differently need not even be in the same
program.)

2. Certain operations, such as prepending "./" or $PWD to a file
name, can change the semantics of the name once the result is
truncated. Since prepending $PWD would typically be done to make a
file name more robust, this might come as a nasty surprise to some
programs. (A sketch follows point 3 below.)

3. What you think is plenty, others may call insufficient.
Automatically generated file/function/variable/whatever names
tend to be long. Why should a program fail to work with those?
Not convinced? What does the following program do on Solaris?

int main () { return printf ("aaa...10000...aaa\n"); }

With gcc, it dumps core due to stack overflow deep down in printf.
With Sun's cc, it prints a few thousand a's because the compiler
silently truncates the string.
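
To make points 1 and 2 concrete, here is a minimal sketch (mine,
not part of the original argument; the buffer size and file name
are made up) of how prepending "./" with strncpy into a fixed
buffer silently turns a long file name into a different, but
perfectly valid, one:

#include <stdio.h>
#include <string.h>

#define BUF 16  /* deliberately tiny for illustration */

/* Prepend "./" to a name, truncating whatever does not fit. */
static void prepend_dot_slash(char *out, const char *name)
{
    strcpy(out, "./");                /* "./" always fits in BUF */
    strncpy(out + 2, name, BUF - 3);  /* at most BUF-3 bytes of name */
    out[BUF - 1] = '\0';              /* strncpy may not terminate */
}

int main(void)
{
    char buf[BUF];
    prepend_dot_slash(buf, "very-long-configuration-file.conf");
    printf("%s\n", buf);  /* prints "./very-long-con": valid, but wrong */
    return 0;
}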

With dynamic allocation available, there really is no excuse for
using strncpy, with the possible exception of cases where
memory-exhaustion attacks might be a larger problem, but that
should not be the case with argv/environ-based strings.
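
For illustration, a minimal sketch (not from the post; error
handling is reduced to returning NULL) of copying a string into a
dynamically allocated buffer instead of a fixed one, so nothing can
ever be truncated:

#include <stdlib.h>
#include <string.h>

/* Return a malloc'ed copy of s, or NULL on allocation failure.
 * The caller is responsible for free()ing the result. */
static char *copy_string(const char *s)
{
    size_t len = strlen(s) + 1;  /* include the terminating '\0' */
    char *copy = malloc(len);
    if (copy != NULL)
        memcpy(copy, s, len);    /* exact length: no truncation possible */
    return copy;
}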

I know of no current vulnerabilities based on strncpy, but I'm sure
they are there for anyone to find.

All of the above applies to snprintf versus sprintf as well, of course.
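
For completeness, a sketch (again mine, and assuming the C99 rule
that vsnprintf(NULL, 0, ...) returns the number of characters
needed) of formatting into a dynamically sized buffer rather than a
fixed one:

#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* Format into a malloc'ed buffer sized to fit exactly.
 * Returns NULL on error; the caller must free() the result. */
static char *format_string(const char *fmt, ...)
{
    va_list ap;
    int len;
    char *buf;

    va_start(ap, fmt);
    len = vsnprintf(NULL, 0, fmt, ap);  /* measure */
    va_end(ap);
    if (len < 0)
        return NULL;

    buf = malloc((size_t)len + 1);
    if (buf == NULL)
        return NULL;

    va_start(ap, fmt);
    vsnprintf(buf, (size_t)len + 1, fmt, ap);  /* format for real */
    va_end(ap);
    return buf;
}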

This is, of course, my humble opinion,

Morten
terra@gnu.org