bug-make

Re: Optimization for reading *.d files


From: brenorg
Subject: Re: Optimization for reading *.d files
Date: Sat, 18 Mar 2017 22:49:10 -0700 (PDT)

Paul Smith-20 wrote
> Before you go too far with performance improvements you should really
> move to the latest version of GNU make (4.2.1) or even try the current
> Git HEAD (but you'll need to install autotools etc. to build from Git).
> 
> It's less useful to be doing performance testing with a version of make
> that old.

You are right. I'll do that and get back with the results.


Paul Smith-20 wrote
> On Sat, 2017-03-18 at 19:25 -0700, brenorg wrote:
>> There are lots of dependency files and they can be processed in parallel,
>> before being merged into the database.
> 
> Well, make is not multithreaded so you can't process files in parallel. 
> I suppose that for slower disks it could be that some kind of
> asynchronous file reading which would allow data to be retrieved from
> the disk while make works on previously-retrieved data could be useful
> but I'm not aware of any async file IO which is anywhere close to
> portable.  Also, with SSD more common these days file IO latency is
> nowhere near what it used to be, and decreasing all the time.
> 
> Someone would have to prove the extra complexity was gaining a
> significant amount of performance before I would be interested.

So I suppose making it multi-threaded is out of the question, right? :) I
hadn't thought of that.

Even if there is no portable operation, it should be possible to #ifdef the
code and make it work at least on some systems. Not great, but I imagine it
would work.

And I don't think SSDs are that common. Even if they are, there are still
tons of people working on NFS and other setups that add latency - especially
for large projects.


Paul Smith-20 wrote
>> For that, GNU make could need an extension on the include directive to
>> handle "include *.d" differently as it knows dependency files won't
>> alter/create variables but just add dependencies.
> 
> I'm certainly not willing to do something like declare all included
> files ending with .d should be treated as dependency files: people might
> use .d files for other things, and they might create dependency files
> with different names.  ".d" is just a convention and not a strong one at
> that IMO.
> 
> However, it could be that the user would declare in her makefile that
> all included files matching a given pattern be treated as simple files
> (for example).  That would be acceptable, again if the gains were
> significant enough.

Yes, sorry I didn't make that clear. The user must be very much aware of
what is being done before enabling such a capability.
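For illustration, an opt-in declaration could look something like this - the directive name is invented here, it is not an existing GNU make feature:

```make
# Hypothetical syntax -- not implemented in GNU make.
# Tells make that files matching the pattern contain only plain
# "target: prerequisite..." lines, so a fast scanner may be used
# with no variable expansion and no sanity checks.
.SIMPLE_INCLUDES: %.d

-include $(OBJS:.o=.d)
```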

Paul Smith-20 wrote
> I'm not convinced that it's impossible to speed up the parser in
> general, though.  It shouldn't take twice as long to parse a simple line
> using the full scanner, as it does with a targeted scanner.  After all,
> you're not making use of any of the special features of parsing so those
> should cost you very little.
> 
> I'd prefer to investigate improving the existing parser, rather than
> create a completely separate parser path.

I could agree if the difference were 10x or more, but I believe 2x is a
reasonable gain from removing so many features. From what I saw in the code,
the main hassle comes from target-specific variable assignment.

Having two parser paths should not be as bad as it sounds, since the
simplified parser is much, much simpler. I needed no more than 20 lines to
write it (it could be buggy, of course... but you get the point).


So to sum up:
0 - I will get back with results for a newer version.
1 - How crazy would it be to make it multi-threaded?
2 - This should be configurable, with a very strong disclaimer: the
alternative scanner wouldn't do any sanity checks, so it could be dangerous.
3 - Another option could involve creating a separate tool that collects a
bunch of "simple files" and pre-processes them into a compact database. That
resulting file could then be read into the makefile. For that, make would
have to understand this internal compact database format, so it would
probably need a lot of code - even more than the simple scanner.





