The standard tests

These are the standard test cases that are computed every night in Lexington. They are files with names ending in ".in". When they are computed they produce output files with the same name but ending in ".out". Additional files are created too - these are mostly the results of the assert commands, with names ending in ".asr", and overviews of the model results, with names ending in ".ovr".
The assert command is new in C94 and provides the infrastructure needed for automatic testing. This command tells the code what answer was expected. If the computed results do not agree then the code exits with an error condition.
The source has many C assert macros that are designed to validate the code's internal decisions. The assert macro is active only if the NDEBUG macro is not set when the code is compiled; if NDEBUG is set then the assert macros within the source are ignored by the compiler. The test cases should be run at least once with the assert macros active, that is, without a compiler option that defines the NDEBUG macro. Many compilers and build environments define NDEBUG when a high optimization level or a release configuration is selected. In practice this means that the entire code should first be compiled with only low optimization and the test cases computed to validate the platform. Then recompile with higher optimization for production runs and recompute the test cases.
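The following small stand-alone C program (an illustration, not code taken from Cloudy) shows the behavior that matters here: the assert macro checks its condition and aborts with a diagnostic when the condition fails, but the same check disappears entirely when NDEBUG is defined before assert.h is included, for example with a -DNDEBUG compiler option.

#include <assert.h>
#include <stdio.h>

int main(void)
{
    double temp = -5000.;      /* a physically impossible temperature */

    /* aborts with a diagnostic message if assertions are active;
       does nothing at all if the code was compiled with -DNDEBUG */
    assert(temp > 0.);

    printf("temperature %g accepted\n", temp);
    return 0;
}

Compiling this once without and once with -DNDEBUG, then running both, shows the two behaviors; the same compile-time switch controls every assert macro in the Cloudy source.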
The "run" command

When executed as a stand-alone program the code expects to read commands from standard input and write results to standard output. I compute single models by defining a shell script called "run". It contains the following line

cloudy.exe < $1.in > $1.out
The Cloudy command lines needed to compute a model are placed in a text file with a name like "orion.in", indicating that it is the input file to compute a model of Orion. Then the shell command line

run orion

will read the contents of orion.in, compute the model, and write the results back to orion.out.
The purpose of each test case is given in the documentation that follows the input commands. Cloudy stops reading input commands when the end of file or a blank line is encountered. Each *.in file begins with the commands, followed by a blank line to tell the code to stop reading, then followed by a description of the purpose of the test. This description is ignored by the code.
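As a schematic illustration of this layout (the commands here are only placeholders, not one of the actual test cases):

title an illustrative model
blackbody 40000
hden 4
stop zone 1

This text comes after the blank line, so the code never reads it.
It describes the purpose of the test and the results to expect.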
The Perl scripts

I wrote a series of Perl scripts to compute all the test cases and then check for errors. These provide an automatic way to validate the code.
Each Perl script will need to be edited before it can be used, since it needs to know the names of the directories where files are located and how to find the Cloudy executable. Each script explains which variables must be set for it to run.
Perl comments start with a sharp sign "#" and end with the end of line.
Perl variable names begin with a dollar sign "$".
A Perl script is executed by typing perl followed by the name of the script, as in
perl runall.pl
The runall.pl script will compute all the models (the files ending in ".in") in the current directory. The path to the Cloudy executable must be set with the Perl variable $exe.
A second script searches for problems in the test cases that were computed with runall.pl. It looks for any botched asserts, warnings, and models that did not end.
A third Perl script is executed every night in Lexington when the code has changed. It computes all the models, checks the results, backs up the results and the code, and then sends mail announcing success or failure.
If you find a problem

Cloudy should run on all platforms without errors. Botched asserts or outright crashes should never happen. I can't fix it if I don't know it's broken, so please let me know of any problems. My email address is gary@cloud9.pa.uky.edu
Visit http://nimbus.pa.uky.edu/cloudy for details and latest updates.
Good luck,
Gary J. Ferland