IOP has three different layers of functionality: 1) common code for driving the Fermilab DA systems and Princeton camera electronics, 2) special purpose code for driving the individual instruments (IOP, SOP, MOP), and 3) system-wide monitoring and warning/error display code.
We perform this task by turning the server shell into a ``poor man's'' real-time system. We have created a job-scheduling loop using the Tcl/Tk after command. The scheduler invokes each registered procedure periodically, with features to catch errors and send them either to murmur or to stdout. There are commands to list the jobs in the scheduler loop.
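The shape of that scheduler loop can be sketched as follows. This is a minimal Python stand-in for the Tcl original (the real system uses the Tcl/Tk after command and sends errors to murmur); the class and method names are illustrative, not the actual IOP commands.

```python
import heapq

class Scheduler:
    """Poor man's real-time system: periodic jobs with error trapping."""

    def __init__(self):
        self.queue = []   # heap of (due_time, job_name, proc, interval)
        self.now = 0
        self.log = []     # stands in for murmur/stdout error reporting

    def add_job(self, name, proc, interval):
        """Schedule proc to run every `interval` time units."""
        heapq.heappush(self.queue, (self.now + interval, name, proc, interval))

    def jobs(self):
        """Analog of the command that lists the jobs in the loop."""
        return sorted(name for _, name, _, _ in self.queue)

    def run_until(self, t):
        """Run all jobs that come due up to time t, catching errors."""
        while self.queue and self.queue[0][0] <= t:
            due, name, proc, interval = heapq.heappop(self.queue)
            self.now = due
            try:
                proc()
            except Exception as e:        # errors are logged, never fatal
                self.log.append((name, str(e)))
            # a failing job stays in the loop and is retried next period
            heapq.heappush(self.queue, (due + interval, name, proc, interval))
        self.now = t
```

The key design point, mirrored from the Tcl loop, is that a job which throws an error is reported and rescheduled rather than allowed to kill the loop.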
Each subsystem gets a handler, a procedure for asking the subsystem about state, health, and error conditions. First, this involves communications. We have six different communications methods: 1) telnet to a subsystem and query, which is how most subsystems work; 2) telnet to a subsystem and parse an ASCII data stream, which is how our TCC works, and which we suggest should be avoided; 3) parse an ASCII file as it grows, as we do for the murmur log while looking for warning and error messages from the VME crates; 4) interpret binary UDP packets, as we do from the MCP reporting of the interlock system; 5) query the status pools of the DA; and 6) read gang files to extract quality-assurance information from them.
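Method 3, following a growing ASCII log, can be sketched as below. This is a Python illustration, not the actual murmur-log handler; the file path, the state dictionary, and the WARNING/ERROR match strings are assumptions for the example.

```python
def poll_log(path, state):
    """Scan only the portion of an ASCII log appended since the last
    poll, collecting warning and error lines.

    `state` maps path -> byte offset of the last read, so repeated
    polls never re-parse old text."""
    alerts = []
    offset = state.get(path, 0)
    with open(path, "r") as f:
        f.seek(offset)                    # resume where we left off
        for line in f:
            if "WARNING" in line or "ERROR" in line:
                alerts.append(line.rstrip())
        state[path] = f.tell()            # remember the new end of file
    return alerts
```

A handler built this way is naturally idempotent: calling it from the scheduler loop more often than the log grows simply returns an empty list.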
Once we have received state from the subsystem we build two arrays: the data array and the error array. All status information goes into the data array, named for example cameraData; all error-state information goes into the error array, e.g. cameraError. These Tcl arrays provide a common ``blackboard'' for routines to access. Our convention is to allow the data arrays to grow with time but to reset the error arrays each time the error state is queried. This works well when the instrument maintains its own error state and clears it when the condition disappears; it works less well for single error reports, such as those that come from the DA.
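The grow-versus-reset convention can be shown in a few lines. This is a Python sketch using dicts in place of the Tcl arrays; the handler name and keys are hypothetical.

```python
# Blackboard: per-subsystem data and error arrays (Tcl arrays in the
# real system, dicts here). Names follow the cameraData/cameraError
# convention from the text.
blackboard = {"cameraData": {}, "cameraError": {}}

def handle_camera_status(status, errors):
    """Illustrative handler: data accumulates, errors are reset."""
    blackboard["cameraData"].update(status)   # data array grows with time
    blackboard["cameraError"].clear()         # error array reset per query
    blackboard["cameraError"].update(errors)
```

The clear-then-update step is what makes a one-shot error report (as from the DA) awkward: unless the subsystem keeps asserting the error, it vanishes from the blackboard on the next poll.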
We replicate the blackboard in the client graphical state display, using Tcl-DP's ability to send and receive UDP packets. The packeteer code registers clients. For each client and each data and error array, the ``packeteer'' diffs the array against the client's last-known copy, transforms the changed elements into a list, and sends the list as a single UDP packet. The ``unpacketeer'' collects the packet, remakes the array, checks the name of the incoming array, and fires the appropriate mapping routine, which maps the error array into a bit map. The bit map is carried in a hierarchical tree array structured parallel to the graphical interface. All error codes on a given branch and layer of the hierarchy must be unique; they may then be OR'd together to determine which part of the graphical interface is to be painted red. The uniqueness of each bit also allows the error to be cleared once the observer sees and acknowledges it.
The graphical status display is a hierarchical tree. On the top layer is a set of nine ``LED''s, all of which must be green for successful high-quality data to be taken. Each LED is the top of a hierarchy. At each layer of the hierarchy, the bits of the error masks of the level one below are summed; if the sum is 1 or greater, the graphical unit is painted red. Below the top LEDs are panels showing all of the devices or components. This layer shows either the errors associated with a device or the status of the device, depending on which type of mouse click is used. At the leaf status entries, one can usually click the name of a status item and receive a plot of that quantity since the start of the day.
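The roll-up of error bits through the tree can be sketched in a few lines. This is a Python illustration under the assumption that each leaf carries a unique-bit error mask, as the text requires; the tree contents are invented for the example.

```python
def rollup(node):
    """Combine the error masks of everything below a node.

    A node is either an int bit mask (a leaf device) or a dict of
    children (an interior panel / LED). Because bits are unique
    within a branch and layer, OR-ing loses no information."""
    if isinstance(node, int):
        return node
    mask = 0
    for child in node.values():
        mask |= rollup(child)          # OR the level one below
    return mask

def paint(node):
    """Any surviving bit paints this part of the display red."""
    return "red" if rollup(node) else "green"
```

Bit uniqueness is what makes acknowledgement work: clearing one device's bit cannot accidentally clear a sibling's error.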
We have code to determine the optimal rotator angle and CCD clocking rate using a new technique called ``Lskip''. This reduces the time to find these parameters to 10 minutes or so, from the half hour or more required by our previous, more primitive techniques.
There is code to determine how to run the telescope along the Survey scans. The TCC itself knows nothing about Survey scans; from a given starting time or RA, we must compute the correct position to slew to, the angle away from RA through which to rotate, and the velocity vectors along RA and Dec at which to scan.
We must maintain good focus. We bring over the gang files, which contain simple real-time reductions of the data, including analysis of the images on the focus chips. We use the information in the gangs to drive a simple PID loop that controls the secondary; we call this the focus loop. The SOP guider is built around these same tools.
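A textbook PID controller of the kind driving the secondary looks like the sketch below. This is a generic Python illustration; the gains, the time step, and the focus-error signal extracted from the gang-file reductions are all placeholders, not the survey's actual tuning.

```python
class PID:
    """Minimal proportional-integral-derivative controller."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev = None

    def update(self, error, dt):
        """Return the correction to apply (e.g. a secondary move)
        given the current focus error and the elapsed time."""
        self.integral += error * dt
        deriv = 0.0 if self.prev is None else (error - self.prev) / dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

In the focus loop, the error term would come from the focus-chip image analysis in the gangs, and the output would be a commanded secondary offset.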
We must check whether at the end of the night there remain images on the DA pool disks that have not been written to tape, and if so, we must write them.
Downstream pipelines need the bias vector that IOP generates from a bias drift, the bad-column map that IOP can produce from staring frames, and the myriad bookkeeping files that IOP generates, from headers through report files. For all of these, IOP is the source.
During this commissioning period we are discovering new and interesting error paths in this code. We expect this, and expect that new error paths will get rarer as time goes on.
The [sim]OPs are not very user-friendly, and the documentation is limited to very high-level papers and low-level help strings. We have instead invested our time in personal training.
Development and maintenance are in the hands of the remote developers. Over the long term, the best course is to involve one or more observers in light maintenance. Only in this way can they gain the confidence and skill to modify what are, in the end, their own tools for doing the survey, to meet their own needs.
Review of Observing Systems and Survey Operations
Apache Point Observatory
April 25-27, 2000