Setting up the code
Revision History
To execute the code as a stand-alone program
To run a grid of models
Helper applications
Acknowledgements
Mailing list
Comments or suggestions?
reviewed 2002 Dec 15
Instructions are now entirely web-based, since this ensures the most up-to-date advice.
Instructions for downloading the files, and an overview of setting things up, are located at http://nimbus.pa.uky.edu/cloudy/cloudy_96.htm . The setup includes the following steps:
Compile the code using an ANSI standard C compiler. Instructions are at http://nimbus.pa.uky.edu/Cloudy/compiling_96.htm
Compile the stellar atmosphere grids if they will be used. Continua from several sets of stellar atmospheres can be automatically accessed by the code. See http://nimbus.pa.uky.edu/cloudy/stars96.htm for more details. This is described further in the read me file located within the data directory. You can skip this step if you do not want these continua.
Run the test cases. The test suite is described at http://nimbus.pa.uky.edu/cloudy/testing_96.htm . These are the tests that are executed every night here in Lexington. Cloudy is designed to be autonomous and self-aware; the code uses extensive self-checking to ensure that the results are valid. This philosophy is described in Ferland 2001, ASP Conference Series, Vol. 247, Spectroscopic Challenges of Photoionized Plasmas, G. Ferland & D. Savin, editors (astro-ph/0210161).
The models in the test suite include assert commands in the input stream.
If the predictions do not agree with expectations, the code will announce this at
the end of the calculation. The distributed set of test case files includes an
html read me file, the input files
themselves, and Perl scripts to execute the files and verify their
results. This is described further in the read me file located within the tests
directory.
Incorporate any hot fixes after downloading the files. Hot fixes are a set of corrections that must be made to the distributed source. They are listed on the hot fixes page of the web site, http://nimbus.pa.uky.edu/cloudy/hotfix_96.htm . The test output on the web site was generated with the downloaded source, which does not include these hot fixes.
The revision history is maintained on the web site, http://nimbus.pa.uky.edu/cloudy/cloudy_96_revision_history.htm . It is a complete history of changes to the active version of the code.
From the command line the code would be executed as follows, assuming that the executable is called cloudy.exe:
cloudy.exe < input_file > output_file
In this example commands are read in from the file "input_file" and results are sent to the file "output_file". A typical input file is a series of commands written one per line in free format. An optional semicolon indicates the end of the part of the line containing information.
title typical input stream
blackbody 120,000K
luminosity 37; log of luminosity in H-ionizing radiation
radius 17; log of inner radius of cloud, in cm
hden 4; log of hydrogen density, cm^-3
I run models with a script I call "run". The details depend on your operating system, but it might look something like this:
cloudy.exe < $1.in > $1.out
Then, if the input file is called model.in the command
run model
will read model.in and create model.out.
Robin Williams added an option to the main program so that it can accept a command-line argument to specify the input and output files. If you execute the code as
cloudy.exe -p model
then the code will read in model.in, write output to model.out, and use the string "model" for the punch prefix option (described in Hazy).
Often the most insight is gained by computing a large number of models with varying input parameters, to see how the predicted quantities change as a result. To do this you will want to write your own main program and delete the one that comes with the distribution.
In the distribution the main program is the file maincl.c. Delete this file (or rename it to something like maincl.old), and also delete maincl.o if you compiled the entire distribution.
You still need to compile the rest of the code, generating *.o object files; you might consider turning these into a library. Then compile your new main program and link it all together. Note that in C the main program must be a function named main, but it can live in a file with any name. All of the routines you need to access are declared in the header file cddrive.h, so include this header in your main program. That header also describes how the various driving routines should be called, and its documentation is more up to date than Hazy.
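As an illustration, here is a minimal sketch of such a driver that steps the hydrogen density over a small grid. It calls the cdInit, cdRead, and cdDrive routines declared in cddrive.h; check that header for the exact calling sequences (and for the routines that retrieve predicted quantities) before building on this, since the header is the authoritative description. The grid limits and the commands themselves are only placeholders.

/* grid.c - a sketch of a custom main program that varies the hydrogen
 * density; the routines cdInit, cdRead, and cdDrive are declared in
 * cddrive.h - confirm their exact calling sequences in that header */
#include <stdio.h>
#include "cddrive.h"

int main( void )
{
    double hden;
    char chLine[100];

    /* step the log of the hydrogen density from 2 to 6 */
    for( hden = 2.; hden <= 6.; hden += 1. )
    {
        /* initialize the code before each model */
        cdInit();

        /* feed in the commands one line at a time, exactly as they
         * would appear in an input file */
        sprintf( chLine, "hden %g", hden );
        cdRead( chLine );
        cdRead( "blackbody 120,000K" );
        cdRead( "luminosity 37" );
        cdRead( "radius 17" );

        /* compute this model; a non-zero return signals a problem */
        if( cdDrive() )
            fprintf( stderr, "model with hden = %g failed\n", hden );
    }
    return 0;
}

Compile this file in place of maincl.c and link it with the rest of the object files (or the library), as described above.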
To be placed on the Cloudy mailing list and be notified of updates to the code, please send a request to garyl@pa.uky.edu . The Cloudy home page, http://nimbus.pa.uky.edu/cloudy , also has an option to place yourself on the mailing list. It is important to be on this list to make sure that you have the current version of the code, with hot fixes included. Anything as complex as Cloudy must contain bugs (see Ferland 2001, referenced above); these are fixed as soon as they are found.
extract_atomic_data.pl is a Perl script that reads the source and extracts all atomic data references. It creates the files atomicdata.txt and oldatomicdata.txt, which are tab-delimited tables of references and data types. A parallel Perl script lives in the data directory.
list_routines.pl is a Perl script that looks at all the source files and makes a list of routine names and their purposes. The output files are sorttable.html, an HTML file listing the results, and sortsrc.tct.
list_headers.pl makes a list of all header files, with each name followed by a list of all the source files that contain references to that header. The list is produced in the file listfiles.txt.
Many people have helped in developing the code, with support from the NSF and NASA. A list of acknowledgements is located at http://nimbus.pa.uky.edu/cloudy/acknowledgements.htm .
Please send comments or suggestions to Gary Ferland at gary@pa.uky.edu . Visit http://nimbus.pa.uky.edu/cloudy for details and the latest updates.
Good luck,
Gary J. Ferland