From: Paul Eggert
Subject: Re: Coverity false positives triggered by gnulib's implementation of base64
Date: Thu, 9 May 2019 15:28:04 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.6.1

On 5/9/19 3:13 PM, Bruno Haible wrote:
> So they are combining data flow analysis - in order to determine
> that the argument of base64_alloc is untrusted data - with a
> heuristic - "if a function contains array accesses with indices that
> are computed with ntohs calls, we should flag it as dangerous consumer".

If that's their heuristic then it's obviously bogus and should get fixed.

I understood it to be more complicated than that. For example, perhaps
they see that base64_encode copies its input data to its output buffer
via a swap-like function, so that if the input is tainted then the output
is too; this is not an unreasonable heuristic, and if they recently added
it, they'll be finding more Heartbleed-related bugs in calling code than
they did before. However, if that's the case, the problematic area is in
the calling code, and adding the comment could mask real bugs.

> But maybe it will be sufficient to mask all b64c arguments
> with '& 0x3f', like you already suggested in the other mail?

I hope so. Partly because I hope GCC will optimize away the &0x3f so
there's no runtime cost. And partly because if it does pacify Coverity
we can be pretty sure that it's just a Coverity false alarm and Coverity
should get fixed (which means less work for us :-).
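For concreteness, here is a minimal sketch (not gnulib's actual code) of
what such masking looks like: a base64 encoder indexes a 64-entry table,
and each index expression is masked with & 0x3f so that the bound is
explicit to a static analyzer. The table name b64c is taken from the
discussion above; the encode3 helper is a hypothetical illustration.

```c
#include <stddef.h>

static const char b64c[64] =
  "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

/* Encode 3 input bytes into 4 base64 characters.  Each '& 0x3f' keeps
   the table index in [0, 63]; where the expression already cannot
   exceed 63 (e.g. in[0] >> 2), the mask is a no-op the compiler can
   remove, but it documents the bound for tools like Coverity.  */
static void
encode3 (const unsigned char in[3], char out[4])
{
  out[0] = b64c[(in[0] >> 2) & 0x3f];
  out[1] = b64c[((in[0] << 4) | (in[1] >> 4)) & 0x3f];
  out[2] = b64c[((in[1] << 2) | (in[2] >> 6)) & 0x3f];
  out[3] = b64c[in[2] & 0x3f];
}
```

If the mask on an index that is provably in range still pacifies the
checker, that would support the reading above that this is a false alarm
in Coverity's range analysis rather than a real bug in the code.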

If the comment is really needed then I'd like to know more about the
problem, because it currently remains a mystery (at least to me).
