Processing pingdata manually
CODAS processing paradigm:
At its operational level, CODAS processing consists of a series of C
programs and Matlab programs that interact with the CODAS database or
with files on the disk. C programs usually deal with the database
directly, by loading data (e.g. loadping.exe), extracting data
(e.g. adcpsect.exe), or by manipulating the database (e.g. rotate.exe,
putnav.exe, dbupdate.exe). Matlab programs are used to manipulate
files on the disk so the C programs can use them, or, in the case of
VmDAS or UHDAS data, to read the original data files and create
translated versions (on the disk) that the C programs can read.
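For orientation, here is a hedged sketch of what that interaction
looks like for a pingdata cruise, using the program and control-file
names from the demo (the order is only schematic, and your control
files may be renamed copies):

   loadping loadping.cnt     # C: load raw pingdata into the CODAS database
   putnav   putnav.cnt       # C: replace positions with edited navigation
   rotate   rotate.cnt       # C: apply heading and amplitude corrections
   adcpsect adcpsect.cnt     # C: extract averaged sections from the database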
All steps can be run from the shell command line (or from the Matlab
command line). The script adcptree.py creates a processing directory
tree and copies templates (documented, editable files) to the various
subdirectories, setting up the tree for processing. To process a
dataset manually, you work your way through the directories, repeating
(in the proper order) the following steps:
- edit the appropriate file
- run the related program
C programs are almost always called with a control file that specifies
parameters the user may wish to configure or change. These include
predictable values, such as the database name or yearbase, and
configurable values, such as a reference layer depth range. C programs
are called on the command line from the relevant working directory as
(for example)
   adcpsect adcpsect.cnt
The original ".cnt" files are self-documented, showing the various
options that can be chosen. The user is advised to leave these
fiels as is and name their copies something else, such as
"adcpsect.tmp", and then run it as "adcpsect adcpsect.tmp"
Matlab programs are copied by adcptree.py to the appropriate
directories and exist as scripts (or stubs that call scripts). A
Matlab program can be edited and then run in the appropriate
directory.
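For instance, a calibration step might be run like this (the directory
and script names here are taken from the demo tree and are only
illustrative; your tree may differ):

   cd pingdemo/cal/watertrk
   matlab        # then, at the Matlab prompt, run the edited script (e.g. >> timslip)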
The original demo and process.txt are designed so that a new user can
run adcptree.py, read the document, and run the programs, ending up
with a newly processed dataset. Because the scheme is so modular, and
many of the variables are repeated, CODAS processing lends itself to
scripting. The original demo demonstrates and explains all of the
usual CODAS processing steps, but it also includes other (less common)
steps and is very detailed, so it is easy to get discouraged.
The newer script (quick_adcp.py) encapsulates the steps in the demo
and automates them. We recommend using quick_adcp.py to process new
pingdata, following along in process.txt to see what the steps are
doing. Explore your processing directory and check off the files
being created as you do each step; quick_adcp.py is just a script
running most of the same steps illustrated in process.txt.
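A typical quick_adcp.py invocation looks something like the following;
the option names and values shown are only an illustration from our
recollection, so check them against quick_adcp.py's built-in help
before running:

   quick_adcp.py --yearbase 1993 --dbname ademo --datadir ping --datatype pingdata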
You can start your own processing by doing the following:
- By now, you should have unzipped bin*.zip in a directory somewhere.
In our case, the root directory of all our code is
"/home/noio/programs", denoted PROGRAMS, so the demo directory is
"/home/noio/programs/codas3/adcp/demo" (or PROGRAMS/codas3/adcp/demo).
In general, you should NEVER change anything within the PROGRAMS tree.
Instructions for download and setup are here.
- Pick a working directory that is outside the trees created by
unzipping the zip files you downloaded.
- Create a processing directory using "adcptree.py" (for unix
users) or "adcptree_py" (for Windows users). For example,
"adcptree.py pingdemo" creates the processing directory
"pingdemo" and appropriate subdirectories,
and copies all the necessary control files, Matlab *.m files, etc. into
the
appropriate places. (When you are done with your processing, your
directory
tree should look like the directory in PROGRAMS/codas3/adcp/demo.)
- Documentation is available online, or on your own computer (if you
downloaded doc.zip). A file called "adcp_processing.html" contains a
link to the documentation on your computer, accessible with a web
browser. In particular, the quick_adcp.py documentation has a
condensed version of the processing steps for pingdata. If you
download these two pingdata files (1, 2), you can work on the same
dataset.
- Follow along in process.txt to see the
steps being run. Although these are not the same data, the steps are
similar.
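As a concrete sketch of the setup steps above (the directory names and
locations are arbitrary examples; "pingdemo" matches the name used in
the list):

   cd /home/noio/adcp_work     # a working directory outside the PROGRAMS tree
   adcptree.py pingdemo        # builds pingdemo/ and its subdirectories
   ls pingdemo                 # compare with PROGRAMS/codas3/adcp/demo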