Documentation:Users:AutomatedAnalysis - MdsWiki
Automated Analysis

MDSplus provides tools to automate data analysis tasks as well as data acquisition tasks. In general, data acquisition is scheduled serially, with numeric sequence numbers, while data analysis tasks mostly use conditional scheduling.

Conditional Scheduling

For conditional scheduling, instead of providing a phase and sequence number, the user specifies a boolean expression over other actions. Each action is treated in a 'tri-state' manner in the expression. The three values are:

  1. not ready - the action has not yet run
  2. true - the action has run successfully
  3. false - the action has failed.

Once all of the terms of the expression are either true or false (none of them not ready), the expression is evaluated: if it evaluates to true the action is run; if it evaluates to false the action is abandoned.
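As a concrete illustration (a sketch only, not MDSplus code), the dispatcher's decision for a condition like "A AND B" could be modelled as:

```shell
#!/bin/sh
# Hypothetical sketch of tri-state condition evaluation.  Each action's
# status is one of NOT_READY, TRUE, or FALSE; the dispatcher decides
# nothing until no term is NOT_READY.
decide() {
    state_a=$1
    state_b=$2
    # If any term has not run yet, no decision can be made.
    if [ "$state_a" = "NOT_READY" ] || [ "$state_b" = "NOT_READY" ]; then
        echo "WAIT"
    elif [ "$state_a" = "TRUE" ] && [ "$state_b" = "TRUE" ]; then
        echo "RUN"        # condition true: dispatch the action
    else
        echo "ABANDON"    # condition false: abandon the action
    fi
}

decide NOT_READY TRUE    # -> WAIT
decide TRUE TRUE         # -> RUN
decide TRUE FALSE        # -> ABANDON
```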

There is a TDI function called submit that, as a side effect, starts a compute job. It can be used as the task of an action to perform script-based tasks, for example to:

  1. run a script to arm some hardware
  2. run a script to compute some analyzed results
  3. run a script to perform some engineering checks and notify key personnel of detected problems.

It is called as follows:

submit(full-file-name-of-script [,host-to-run-job-on [,shot-number]])

Typically it is used with only the first argument.


For example, calling submit with the full path of a script will cause that script to be run by the shot cycle with one command-line argument, the shot number; the equivalent of the user typing:

 /usr/local/cmod/codes/spectroscopy/hirex_sr/ shot-number

This script contains:

$ cat
cd /usr/local/cmod/codes/spectroscopy/hirex_sr/
synchronize_unix hirex_sr_store_007 $1
synchronize_unix hirex_sr_store_008 $1
synchronize_unix hirex_sr_store_009 $1
synchronize_unix hirex_sr_store_010 $1
synchronize_unix hirex_sr_get_lines $1
export shot=$1 
# Do the IDL routines
idl <<EOF
shot = long(getenv('shot'))

When this script runs, it will create a log file in the directory specified by the MDS_LOGS environment variable. For example:


As the script above starts for shot 12345, it will create:


and put its output in:


When the job completes the contents of this file will be appended to:


and the shot specific file will be deleted.
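The lifecycle above can be sketched in shell; the directory, job name, and file-naming scheme here are placeholders for illustration, not the actual MDSplus conventions:

```shell
#!/bin/sh
# Hypothetical sketch of the log-file lifecycle described above.
# MDS_LOGS, the job name, and the file names are assumptions.
MDS_LOGS=${MDS_LOGS:-/tmp/mds_logs}
job=myjob
shot=12345

mkdir -p "$MDS_LOGS"

# 1. While the job runs, its output goes to a shot-specific file.
shotlog="$MDS_LOGS/${job}_${shot}.log"
echo "analysis output for shot $shot" > "$shotlog"

# 2. On completion, the shot-specific output is appended to the
#    cumulative, non-shot-specific log file ...
cat "$shotlog" >> "$MDS_LOGS/${job}.log"

# 3. ... and the shot-specific file is deleted.
rm "$shotlog"
```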

Job synchronization

The submit command completes as soon as the job is started, which complicates building chains of analysis jobs that depend on one another. If a user has:

  • analysis A that depends on a digitizer (e.g. /usr/local/cmod/codes/
  • analysis B that depends on the results from analysis A (e.g. /usr/local/cmod/codes/
  • analysis C that depends on the results from analysis B and some other digitizer (e.g. /usr/local/cmod/codes/

If they schedule A conditionally on the digitizer and schedule B conditionally on A, then:

  • A will start as soon as the digitizer completes
  • B will start as soon as A starts (since submit returns as soon as A's job is launched), not when A's computation finishes.

This is probably not the user's intent. The synchronize_unix shell command addresses this issue. Its form is:

synchronize_unix job-name shot-number


  • job-name is the base name of the script of the job to synchronize against; in the hirex_sr example above that would be hirex_sr_analysis
  • shot-number is the shot number to synchronize against, in that example 12345

This command will wait for the hirex_sr_analysis job of shot 12345 to complete before proceeding.

In the analysis chain (digitizers, A, B, C) above:

  • The action A would be scheduled conditionally on the 1st digitizer's store action
  • The action B would be scheduled conditionally on the action A
  • The action C would be scheduled conditionally on the 2nd digitizer and the action B
  • the digitizer's store completes synchronously, so no special care is needed in A's script
  • since A is run asynchronously, the script for B should start with:
synchronize_unix a $1
  • since B is run asynchronously, and C also depends on a digitizer that completes synchronously, C's script only needs to concern itself with the completion of B. It would start with:
synchronize_unix b $1
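A skeleton of C's script under those rules might look like the following; the stub synchronize_unix function stands in for the real MDSplus command so the sketch is self-contained, and everything else here is illustrative:

```shell
#!/bin/sh
# Hypothetical skeleton for analysis C's script.  The dispatcher passes
# the shot number as the first argument.  This stub stands in for the
# real synchronize_unix command, which blocks until the named job's
# shot-specific run has completed.
synchronize_unix() { echo "waiting for job '$1', shot $2"; }

shot=${1:-12345}

# The 2nd digitizer's store action completes synchronously, so this
# script only has to wait for B's asynchronous job.
synchronize_unix b "$shot"

# B's results are now stored in the tree; run C's actual analysis here.
echo "running analysis C for shot $shot"
```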


  1. Scripts are run by the dispatcher under whatever user account the dispatcher runs under (e.g. the mdsplus user).
    1. They run in the system's generic login environment (not the user's); this forces their authors to be explicit about the required environment setup (paths, etc.).
  2. Log files are placed in $MDS_LOGS (e.g. /usr/local/cmod/logs).
    1. All systems executing these scripts need shared access to this directory.
    2. All hosts executing these scripts must be running an mdsip server.
    3. All hosts executing these scripts must have access to the scripts.
  3. If the host to run the job on is not specified, the dispatcher will invoke the TDI function remote_submit_queues() to get the host name to run the job on.
  4. The scripts must be executable.


The implementation of submit and synchronize_unix is a bit circuitous, to avoid race conditions and problems with delays in NFS consistency.

  • submit invokes unix_submit file-name shot-number PRE on the local host, and then invokes remote_submit host-name file-name shot-number, also on the local host.
$ cat /usr/local/mdsplus/tdi/ 
public fun submit(in _file, optional in _host, optional in _shot)
{
  if (!present(_host)) _host="any";
  if (!present(_shot)) _shot=$shot;
  tcl('spawn unix_submit '//_file//' '//_shot//' PRE');
  tcl('spawn/nowait remote_submit '//_host//' '//_file//' '//_shot);
}
  • unix_submit with 'PRE' as the last argument creates the job- and shot-specific lock file in the $MDS_LOGS directory.
  • remote_submit uses tditest to invoke a TDI function that connects to the host (optionally picking one using remote_submit_queues()) and evaluates tcl("spawn unix_submit file-name shot-number").
  • unix_submit without PRE as the trailing argument runs the job, piping its output to a file in $MDS_LOGS, appends that file to the non-shot-specific version of that file, and deletes the lock file and the shot-specific log file.
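Under an assumed file-naming scheme, the two modes of unix_submit and the wait performed by synchronize_unix can be sketched as follows (a simplification for illustration; the real scripts also deal with remote hosts, mdsip, and NFS timing):

```shell
#!/bin/sh
# Hypothetical simplification of the unix_submit / synchronize_unix
# protocol described above.  File names and locations are assumptions.
MDS_LOGS=${MDS_LOGS:-/tmp/mds_logs}
mkdir -p "$MDS_LOGS"

unix_submit() {
    script=$1 shot=$2 mode=$3
    job=$(basename "$script")
    lock="$MDS_LOGS/${job}_${shot}.lock"
    shotlog="$MDS_LOGS/${job}_${shot}.log"
    if [ "$mode" = "PRE" ]; then
        # PRE mode: just create the job- and shot-specific lock file,
        # so synchronize_unix has something to wait on.
        touch "$lock"
    else
        # run mode: execute the job with its output captured ...
        "$script" "$shot" > "$shotlog" 2>&1
        # ... append to the cumulative log, then remove the lock file
        # and the shot-specific log file.
        cat "$shotlog" >> "$MDS_LOGS/${job}.log"
        rm -f "$lock" "$shotlog"
    fi
}

# synchronize_unix then reduces to waiting for the lock to disappear.
synchronize_unix() {
    while [ -e "$MDS_LOGS/${1}_${2}.lock" ]; do sleep 1; done
}
```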