
From: Paul Eggert
Subject: Re: [Bug-gnulib] xalloc.h proposed fix to detect potential ptrdiff_t overflow
Date: 20 Nov 2003 00:39:19 -0800
User-agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.3

Bruno Haible <address@hidden> writes:

> 1) This patch makes it impossible to allocate arrays of more than 2 GB,
>    on 32-bit machines, even when the OS allows it.

This reminds me of a similar situation that occurred as people made
the transition from 16- to 32-bit machines.  Some people worried a lot
about allocating buffers of size 32768 through 65535 bytes, and
converted their programs to use 'unsigned' instead of 'int' indexes,
etc., etc., just so their programs could allocate big arrays on 16-bit
hosts.  Other people didn't bother: they just ported their code to
32-bit hosts.

In the long run, the latter approach was far better: it was simpler
and more reliable.  It's not worth contorting one's code to cater to
machines that are on their last legs, address-space-wise.  A single
power of two will be erased by Moore's law in a matter of months, but
the code will be contorted forever.

>    malloc() has nothing to do with ptrdiff_t.

The problem here is not malloc itself, but it is strongly related to
malloc, as the problem occurs in programs that subtract pointers that
point into malloced buffers.

>        When you are subtracting two pointers into the same
>        object, the result is of type ptrdiff_t; but if the resulting value
>        does not fit in ptrdiff_t, the behavior is undefined.
>    This means that ptrdiff_t is ill-defined by design

That is not something that we can fix in a library by defining a new
type.  It is a defect of the C language.  There are two plausible ways
to program around the defect.  We can either rewrite all our programs
to avoid all subtraction of pointers into arrays that might be large;
or we can fix our storage allocator so that the problem cannot happen.
The latter fix is much easier, much more reliable, and much less work.

>    The ideal ptrdiff_t would be at least 1 bit wider than size_t.

Yes, and on implementations where that's true, the proposed fix
doesn't have any extra restrictions.

>      b) Do pointer subtraction only when you a priori know which is the
>         bigger and which is the smaller pointer, then cast the result to
>         size_t.

This approach will work on all practical hosts that I know of (even
though it uses undefined behavior).  But it will require too much
reworking of existing code.  We don't have time to scan all of gnulib,
coreutils, tar, diffutils, etc., looking for all instances of pointer
subtraction to see whether there's a problem.  And even if we did such
a scan, such problems could creep in later.

It's much simpler to outlaw arrays that could cause ptrdiff_t
overflow.  Yes, that will prevent allocating single large arrays on
small hosts.  But the need for that is quite rare now, and it's a
quite temporary need because the people who actually need such arrays
are typically running on 64-bit hosts anyway.  We shouldn't contort
our code, or have to do intensive and continuing manual scans of our
code, merely to cater to this obsolescent (and largely hypothetical) need.
