
Re: [Bug-gnulib] checking for overflow


From: Jim Meyering
Subject: Re: [Bug-gnulib] checking for overflow
Date: Mon, 20 Oct 2003 20:13:21 +0200

Bruno Haible <address@hidden> wrote:
> Jim Meyering wrote:
>> You may have seen the one regarding a recently-fixed bug in `ls -C -w N'.
>> For some large values of N, ls would hit an address overflow
>> bug and segfault.
>
> And for some not-so-large values of N, such as 60000, ls will allocate
> 700 MB of memory. Which also allows some kind of denial-of-service attack.

Paul Eggert fixed that bug a few days ago -- at least to the extent
that ls no longer blindly uses N when that value is unnecessarily large.
So you can still get behavior that is quadratic in the number of files
in a directory when using -C (not necessarily quadratic in N).

> Therefore in this case I would blame the apparently quadratic memory
> consumption, together with the ability to pass an arbitrary ls option
> via ftpd.
>
> I only wish to know whether each time I malloc() a pathname, let's say, to add
> a suffix to an existing pathname, I need to check for size_t overflow.
> The answer will likely be "no". Then I'd like to know which criteria can
> be used to decide this.

I agree.  I think we'll have to use our judgement.  It'll be some time
before we have to worry about file names with length not representable
as a size_t.  However, it might be useful to be able to process a `line'
of length 2^31 or 2^32 or more.  It's certainly possible, but it
suggests that any tool that presumes it can read a line into memory
should be prepared to fail gracefully if it encounters such a long line --
and maybe even to continue processing other lines or files.  grep is
one such tool, and it has a problem handling files with very long lines
(though I proposed a patch to fix that some time ago).  E.g., doing this
makes grep try to allocate 8GB of RAM for a single line:

  $ dd bs=1 seek=8G of=big < /dev/null 2> /dev/null
  $ echo a > f2
  $ (ulimit -v 5000; grep a big f2)

Unfortunately, grep fails with `memory exhausted' and exits immediately:

  grep: memory exhausted
  [Exit 1]

Once properly patched, it detects the failure, frees the buffer in
question and continues to find a match in the following file:

  grep: big: File too large
  f2:a
  [Exit 1]
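
In other words, the patched behavior treats `line too long to buffer'
as a per-file error rather than a fatal one.  The code below is not the
actual patch, just a sketch of that recovery pattern using getline();
search_file and the strstr matcher are stand-ins:

  #define _GNU_SOURCE             /* for getline on older glibc */
  #include <errno.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* Search one file, line by line.  If a line is too big to buffer,
     diagnose it, free the buffer, and return, so the caller can move
     on to the next file instead of exiting.  */
  static int
  search_file (const char *name, const char *pattern)
  {
    FILE *fp = fopen (name, "r");
    if (!fp)
      {
        fprintf (stderr, "grep: %s: %s\n", name, strerror (errno));
        return 2;
      }

    char *line = NULL;
    size_t alloc = 0;
    int found = 0;

    errno = 0;
    while (getline (&line, &alloc, fp) != -1)
      {
        if (strstr (line, pattern))   /* stand-in for the real matcher */
          {
            printf ("%s:%s", name, line);
            found = 1;
          }
        errno = 0;
      }

    if (errno == ENOMEM)
      /* Could not buffer the current line: report and give up on
         this file only; the caller continues with the next one.  */
      fprintf (stderr, "grep: %s: line too long to buffer\n", name);

    free (line);                      /* release the (possibly huge) buffer */
    fclose (fp);
    return found ? 0 : 1;
  }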

> For multiplications, it seems clear that checking is required.
>
> For multiplication by 2: can we rely on malloc() failing for sizes between
> 2 GB and 4 GB? If so, it would mean we'll get a malloc failure before the
> size_t overflow.

I'm not sure I follow.  Why might we be able to rely on such a thing?
Do some (most?) malloc implementations fail for N in the 2-4GB range?
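
For the general multiplication case, at least, the check itself is
cheap -- something along these lines (nmalloc is just an illustrative
name here, not an existing function):

  #include <stdint.h>
  #include <stdlib.h>

  /* Allocate N objects of SIZE bytes each, returning NULL (instead of
     letting the product silently wrap) when N * SIZE overflows size_t.  */
  static void *
  nmalloc (size_t n, size_t size)
  {
    if (size != 0 && n > SIZE_MAX / size)
      return NULL;              /* n * size would overflow */
    return malloc (n * size);
  }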

Doubling is fine for small sizes, but becomes unreasonable beyond
a certain point.  If an application is using a 100MB buffer and needs to
append a few kilobytes, doubling the buffer size sounds like overkill.

Maybe applications that care should use something like this
(where the `10' and the 4MB are sort of arbitrary):

  N += MIN (N + 10, 4 * 1024 * 1024);
  (but with overflow-checking, of course)

With 4GB of RAM, I wouldn't mind if an application failed 4MB
sooner than necessary.
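
Spelled out with the overflow checks, that policy might look like the
following (grow_buffer and the constants are illustrative only):

  #include <stdint.h>
  #include <stdlib.h>

  #define MIN(a, b) ((a) < (b) ? (a) : (b))

  /* Grow *N by MIN (*N + 10, 4 MB), refusing to let the new size wrap
     around.  Returns the reallocated buffer, or NULL on overflow or
     allocation failure; *N is updated only on success.  */
  static void *
  grow_buffer (void *buf, size_t *n)
  {
    if (*n > SIZE_MAX - 10)
      return NULL;                      /* *n + 10 would overflow */
    size_t incr = MIN (*n + 10, 4 * 1024 * 1024);
    if (incr > SIZE_MAX - *n)
      return NULL;                      /* *n + incr would overflow */
    void *newbuf = realloc (buf, *n + incr);
    if (newbuf)
      *n += incr;
    return newbuf;
  }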

> For addition?

Judgement again?  Probably depends a lot on the context.
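
To make the original pathname-plus-suffix case concrete, the checked
addition would look something like the sketch below (path_with_suffix
is a hypothetical helper).  The overflow branch can essentially never
be taken for real file names, which is exactly why judgement rather
than a blanket rule seems reasonable here.

  #include <stdint.h>
  #include <stdlib.h>
  #include <string.h>

  /* Return a newly malloc'd copy of PATH with SUFFIX appended, or NULL
     if the required size would overflow size_t or malloc fails.  */
  static char *
  path_with_suffix (const char *path, const char *suffix)
  {
    size_t plen = strlen (path);
    size_t slen = strlen (suffix);
    if (plen >= SIZE_MAX - slen)
      return NULL;              /* plen + slen + 1 would overflow */
    char *result = malloc (plen + slen + 1);
    if (result)
      {
        memcpy (result, path, plen);
        memcpy (result + plen, suffix, slen + 1);  /* copies the NUL too */
      }
    return result;
  }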



