bug-gnulib

From: Kamil Dudka
Subject: Re: tar + cpio - covscan issues
Date: Thu, 15 Apr 2021 22:07:30 +0200

Hi Bruno,

On Saturday, April 10, 2021 8:40:06 PM CEST Bruno Haible wrote:
> Hi Kamil,
> 
> > I meant the public reports on this mailing-list like the one that Ondrej
> > sent. As gnulib is embedded in multiple RPM packages of Fedora/RHEL, such
> > reports are going to come periodically until you change your attitude to
> > handling false positives upstream.
> > ...
> > The problem is that this is a duplicated effort to your
> > reviews of Coverity results upstream.
> 
> So far the number of these reports has been small and manageable. When it
> gets higher, we may very well use your 'csdiff' package.
> 
> Is there, besides this package, also a "best practices" document on how to
> use it (e.g. where to store the results of a previous run, how to map
> categories to priorities, etc.)?

These tools can be fed Coverity's JSON format, which can be obtained with
`cov-format-errors --json-output-v7 ...`.  I am not sure whether this utility
is available to users of scan.coverity.com, though.  csdiff/csgrep can also
consume the diagnostic output of GCC and other tools with a compatible output
format.
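
Roughly, for illustration (untested sketch; cov-int is just the usual name of
Coverity's intermediate directory, and the file names are arbitrary):

    # export defects from Coverity's intermediate directory into the
    # JSON v7 format that csdiff/csgrep understand
    cov-format-errors --dir cov-int --json-output-v7 cov-results.json

    # GCC diagnostics can be captured from stderr and fed in directly;
    # csgrep parses the compiler's warning format
    gcc -Wall -Wextra -c foo.c 2> gcc-results.err
    csgrep gcc-results.err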

Storing results from previous runs works reliably only until you update the
static analyzers.  So you either need to check the analyzer versions every
time, or build and scan the code twice (the old and the new revision with the
same analyzer versions) to obtain comparable results.
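
In practice that means something like this (again untested; I believe -x
selects the fixed findings, but better double-check against csdiff --help):

    # compare two scans produced with the same analyzer versions
    csdiff scan-old.err scan-new.err > added.err     # newly introduced
    csdiff -x scan-old.err scan-new.err > fixed.err  # no longer reported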

Priorities are usually not used for upstream scans because upstream developers 
either fix/suppress all the reported issues, or at least the issues that are 
introduced by the changes to be merged.

We have to use priorities downstream for some projects though.  For example,
our scan of coreutils-8.32-18.el9 resulted in 237 reports, which would be
nearly impossible for me to review in full.  Of these, 11 reports were
classified as important enough to be reviewed/fixed.
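
Just to illustrate the idea (the checker names below are made-up examples of
such a policy, and the --checker filter name is from memory, so verify it
against csgrep --help):

    # keep only the findings reported by checkers we treat as important
    csgrep --checker 'RESOURCE_LEAK|USE_AFTER_FREE|OVERRUN' \
        coreutils-results.err > important.err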

> > And many downstream consumers have to
> > duplicate the effort as well.
> 
> Downstream consumers can exclude the gnulib-copied directories using the
> 'csgrep' program, AFAIU?

Not so easily.  csgrep can filter the results by path in the source tree,
but different projects embed gnulib in different directories.  For example,
coreutils has it in /lib, whereas findutils has it in /gl/lib while its /lib
contains other source files that we do not want to exclude.  So we would have
to maintain such an exclusion list per project.
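
For example (the option names are from my memory, so better verify them
against csgrep --help; the path patterns are just illustrative):

    # coreutils embeds gnulib in lib/, so that whole directory can go
    csgrep --invert-match --path '/coreutils-[^/]+/lib/' \
        coreutils.err > coreutils-filtered.err

    # findutils embeds it in gl/lib/ while its own lib/ must be kept,
    # so the same filter cannot be reused there
    csgrep --invert-match --path '/findutils-[^/]+/gl/lib/' \
        findutils.err > findutils-filtered.err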

People maintaining their own medium-size projects can easily play with this.
I am in a different situation, as I need to scan 3700 distinct projects and
approx. 480 million lines of code with more or less the same manpower ;-)

Kamil

> > The main advantage of code improvements and inline code annotations is
> > that they travel together with the source code when the files are moved
> > in the source tree, across the projects, or embedded into other projects.
> > All the downstream consumers can consume these improvements at no
> > additional cost.
> 
> True. But it clutters up the source code. For tools that produce 5-10 times
> more false reports than good reports, I wouldn't do this. Things are
> different for tools or warning categories which, on average, produce > 75%
> good reports (like specific warnings included in "gcc -Wall").
> 
> Bruno




