The standard tests
Asserts - reliability in the face of complexity
The "run" command
The Perl scripts
If you find a problem
For more information visit the Cloudy web site, www.nublado.org
The standard test cases that are computed every night in Lexington are the set of files with names ending in ".in". Each contains the commands needed to compute a particular model. When they are computed they produce output files with the same name but ending in ".out". Additional files are created too - these are mostly the results of the assert commands, ending in ".asr", and overviews of the model results, ending in ".ovr".
The purpose of each test case is given in the documentation that follows the input commands. Cloudy stops reading input commands when it encounters the end of file or a blank line, so each *.in file begins with the commands, followed by a blank line to tell the code to stop reading, followed by a description of the purpose of the test. This description is ignored - the command parser stops at the first empty line.
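A rough sketch of this layout follows; the particular commands and the asserted quantity are only illustrative, not copied from an actual test case:

title smoke test of a single zone
blackbody 40000
ionization parameter -2
hden 4
stop zone 1
assert nzone 1

This text comes after the blank line, so the command parser never
reads it; it only documents what the test is for.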
Cloudy is designed to be autonomous and self-aware. The code uses extensive self-checking to ensure that the results are valid. This philosophy is described in Ferland 2001, ASP Conference Series, Vol. 247, Spectroscopic Challenges of Photoionized Plasmas, G. Ferland & D. Savin, eds. (astro-ph/0210161).
Asserts provide the ability to automatically validate complex results. There are two types of asserts here: the first is a set of commands that are included in the input files and tell the code what answer to expect; the second is the set of C-language assert macros in the source that confirm the internal decisions made by the code.
All of the files in the test suite include assert commands. The assert command was introduced in C94, is described in the Miscellaneous Commands section of Hazy I, and provides the infrastructure needed for complete automatic testing of the code. This command tells the code what answer to expect. If the computed results do not agree then the code prints a standard comment and exits with an error condition. These assert commands have nothing to do with the simulation itself, and would not be included in an actual calculation. You should ignore them, or even delete all of them (a Perl script, tests_remove_asserts.pl, is provided as part of the test suite to do this).
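Because a botched assert makes the code exit with an error condition, a shell can detect the failure directly from the exit status. A minimal sketch, with placeholder file names:

cloudy.exe < model.in > model.out
echo "exit status: $?"     # non-zero signals a botched assert or other failure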
The source code also includes many C assert macros that are designed to validate the code's internal decisions. The assert macro only exists if the NDEBUG macro is not set when the code is compiled. (If the NDEBUG macro is set then assert macros within the source are ignored by the compiler.) The test cases should be run at least once with the assert macro active, that is, do not include a compiler option to define the NDEBUG macro. In most compilers the NDEBUG macro is set when compiler optimization is set to a high level. In practice this means that the entire code should be compiled with only low optimization and the tests computed to validate the platform. Then recompile with higher optimization for production runs, and recompute the test cases.
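As an illustration only - the actual compiler and flags are whatever your makefile uses, and gcc is just an example - the two kinds of builds differ in whether NDEBUG is defined:

gcc -c -O0 *.c               # validation build: the assert() macros are active
gcc -c -O3 -DNDEBUG *.c      # production build: the assert() macros are compiled away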
When executed as a stand-alone program the code expects to read commands from standard input and write results to standard output. I compute single models by defining a shell script called "run". It contains the following line
cloudy.exe < $1.in > $1.out
The Cloudy commands needed to compute a model are placed in a text file with a name like "orion.in", indicating that it is the input file to compute a model of Orion. Then the shell command line
run orion
will read the contents of orion.in, compute the model, and write the results out to orion.out.
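A slightly fuller sketch of such a script; the path to the executable is an assumption and must match wherever the binary lives on your system:

#!/bin/sh
# run - compute a single Cloudy model
# usage:  run name    (reads name.in, writes name.out)
/path/to/cloudy.exe < $1.in > $1.out

Remember to make the script executable, for instance with chmod +x run.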
The test suite includes a series of Perl scripts that compute all test cases and then check for errors. These provide an automatic way to validate the code.
Each Perl script will need to be changed before it can be used since it needs to know the names of directories where files are located, and how to find the Cloudy executable. Each script explains which variables need to be set for the script to run.
Perl comments start with a sharp sign "#" and end with the end of the line. Perl variable names begin with a dollar sign "$". A Perl script is executed by typing perl followed by the name of the script, as in perl runall.pl
The runall.pl script will compute all the models, the files *.in, in the current directory. You need to change the variable $exe to point to the Cloudy executable on your system, and the script must be executed from the directory where the test suite is located.
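A typical session might look like the following, where the directory is a placeholder for wherever the test suite lives:

cd /path/to/tsuite
perl runall.pl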
This script searches for problems in all of the test cases (the *.out files) in the current directory. It first looks for botched asserts and warnings. These indicate very serious problems. A list of any models with these problems is placed in the file serious.txt. Next it checks whether the string PROBLEM was printed in any of the models, and writes a list of these to the file minor.txt. A few of these can occur in a normal series of models and they do not, by themselves, indicate a serious problem. Finally it looks for all models that did not end.
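Much the same checks can be done by hand with grep; a rough sketch, where the exact strings printed by the code are assumptions on my part:

grep -l "Botched"     *.out > serious.txt    # models with botched asserts
grep -l "PROBLEM"     *.out > minor.txt      # models that printed PROBLEM
grep -L "Cloudy ends" *.out                  # models that did not end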
This script will rerun all models that had botched asserts. This is mostly used while developing the code.
This script was written by Peter van Hoof and will run the test suite using a number of processors. The beginning of the script says how to use it.
This script will rerun all models that crashed - that is, models that did not end at all. This is mostly used while developing the code.
This script will rerun some of the models - those listed in runsome.dat.
This backs up some files onto a CDRW and sends email announcing results.
This script creates two files that document the test suite. The file doc_tsuite.html is an HTML representation of the set of tests, while doc_tsuite.txt is a comma-delimited table that can be incorporated into a word processor.
These are a series of Perl scripts that are run every night to provide a mechanism for automatically testing the code and announcing results.
This script is located in the unix directory, and is executed first. It copies the current version of the source, data, and test suite to the "last" directory on the gradj.pa.uky.edu ftp site. Two complete versions are copied - one in tar.gz format with Unix end of lines, and another in zip format with PC end of lines.
This script, autorun.pl, is by far the most important, and is fundamental to the code's development. It is executed every night that the code has changed. It computes all models and then checks the results, looking for failed asserts or models that did not end. It then sends email announcing success or failure. It also backs up copies of the results, the executable that produced them, and the source code.
The backup files have names, in order of increasing age, *.bak, *.bk1, *.bk2, *.bk3, *.bk4. The output, executable, data, and source files are backed up this way as well. The executable files have names like cloudy.exe, cloudy_bak.exe, cloudy_bk1.exe, etc. The data files have names like data.tar, data_bak.tar, etc.
It is possible to run old versions of the code using variations of the run command. I have scripts called runbak, runbk2, runbk3, and runbk4 that will read an input file, use the appropriate backed-up executable, and create output files with names like name1.bak, name2.bak, etc., where the number indicates how far back the executable was.
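A sketch of what one of these scripts might contain, following the naming scheme described above; the output suffix is an assumption:

#!/bin/sh
# runbak - compute a model with the previous night's executable
cloudy_bak.exe < $1.in > ${1}1.bak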
If no serious problems were detected by autorun.pl then it will execute the script last_good.pl. This copies the current versions of the source, data, and test suite to the last_good directory on the ftp server, and saves the current executable as the last_good executable.
Cloudy should run on all platforms without errors. Botched asserts or outright crashes should never happen. I can't fix it if I don't know it's broken. Please let me know of any problems. My email address is gary@pa.uky.edu
Visit http://www.nublado.org for details and latest updates.
Good luck,
Gary J. Ferland