lilypond-devel


From: Karlin High
Subject: Automated testing for users' LilyPond collections with new development versions
Date: Mon, 28 Nov 2022 16:49:12 -0600
User-agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101 Thunderbird/68.9.0

This message intrigued me:

<https://lists.gnu.org/archive/html/lilypond-devel/2022-11/msg00222.html>

In it, Eric Benson reported a setup that allows testing new versions of LilyPond on a sizable body of work in a somewhat automated fashion.

Now, could automation like that also make use of the infrastructure for LilyPond's regression tests?

<http://lilypond.org/doc/v2.23/Documentation/contributor/regtest-comparison>

What effort would it take, and what value would there be, in an enhanced convert-ly tool that tests a new version of LilyPond against a user's entire collection of work, reporting differences between old and new versions in both performance and output?

Enabling something like this:

* New release of LilyPond comes out. Please test.

* Advanced users with large collections of LilyPond files do the equivalent of "make test-baseline," but for their collection instead of LilyPond's regtests. Elapsed time is recorded, along with CPU and RAM information as seems useful.

* The new LilyPond version gets installed.

* An upgrade script runs convert-ly on the collection, first offering a backup, either via convert-ly's own options or tarball-style.

* The equivalent of "make check" runs.

* A report generates, optionally as email to lilypond-devel, with summary of regression test differences and old-vs-new elapsed time.
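As a rough illustration of the steps above, here is a minimal Python sketch. Everything in it is hypothetical: the binary names OLD_LILYPOND and NEW_LILYPOND, the "scores" collection directory, and the output directories are assumptions, not an existing tool, and real output comparison (the regtest-style image diffing) is left out.

```python
#!/usr/bin/env python3
"""Sketch: time a LilyPond collection under an old and a new version.

Assumes two LilyPond installations are on PATH under the (made-up)
names below, and that convert-ly ships with the new one.
"""
import subprocess
import time
from pathlib import Path

OLD_LILYPOND = "lilypond-2.22"  # hypothetical binary names; adjust locally
NEW_LILYPOND = "lilypond-2.23"


def compile_and_time(binary: str, ly_file: Path, out_dir: Path) -> float:
    """Compile one .ly file, returning elapsed wall-clock seconds."""
    out_dir.mkdir(exist_ok=True)
    start = time.perf_counter()
    subprocess.run(
        [binary, "-o", str(out_dir / ly_file.stem), str(ly_file)],
        check=True, capture_output=True,
    )
    return time.perf_counter() - start


def summarize(old_times: dict, new_times: dict) -> list:
    """Pure reporting helper: per-file old vs. new timings and the delta."""
    lines = []
    for name in sorted(old_times):
        old, new = old_times[name], new_times.get(name)
        if new is None:
            lines.append(f"{name}: FAILED under new version")
        else:
            lines.append(f"{name}: {old:.1f}s -> {new:.1f}s ({new - old:+.1f}s)")
    return lines


if __name__ == "__main__":
    collection = Path("scores")  # the user's .ly collection (assumed path)
    old_times, new_times = {}, {}
    for ly in collection.rglob("*.ly"):
        old_times[ly.name] = compile_and_time(OLD_LILYPOND, ly, Path("baseline"))
        # Upgrade the source in place; a real script would back up first.
        subprocess.run(["convert-ly", "-e", str(ly)], check=True)
        try:
            new_times[ly.name] = compile_and_time(NEW_LILYPOND, ly, Path("check"))
        except subprocess.CalledProcessError:
            pass  # recorded as a failure by summarize()
    print("\n".join(summarize(old_times, new_times)))
```

The summary could then be mailed or pasted to lilypond-devel; hooking in the regtest output-comparison machinery would be the harder part.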

Ideally, this could quickly produce lots of good testing information for development versions of LilyPond, in a way that encourages user participation.
--
Karlin High
Missouri, USA
