Analysis Manager API
Analysis Manager Application Procedural Interface (API) Description
The Analysis Manager can be driven by any program or user interface that can access the Analysis Manager API. Thus, with some programming knowledge and skill, customized user interfaces to the Analysis Manager can be built. The API is not part of the Analysis Manager product for general use; however, for completeness, this appendix describes the procedural calls used to access the Analysis Manager. The necessary include file and an example usage of the API are also included in this appendix. If you have a special customization need that incorporates the Analysis Manager API, please contact your local MSC representative. MSC is happy to provide customized solutions to its customers on a fee basis.
Assumptions:
The product is ALREADY installed and configured. A description of what it takes to install and configure the product is included in System Management; for the purpose of describing the API, assume this has been done.
A Quick Background
There are 3 machines involved in the job submit/abort/monitor cycle:
1. The QueMgr scheduling process machine, labelled the master node.
2. The user's submit (home) machine where the graphical user interface (GUI) is run and the input files are located, labelled the submit node.
3. The analysis machine where the job actually runs, labelled the analysis node.
All 3 machines can be the same host, different hosts, or any combination of these. There are also two separate persistent processes, which are already running as part of the installation. These processes (called daemons on Unix or services on Windows) are the:
1. QueMgr - job scheduling daemon or service
2. RmtMgr - remote command daemon or service
There is one and only one QueMgr process per site (or group, organization, or network), but there are many RmtMgr processes: an RmtMgr process runs on each and every analysis machine. An RmtMgr can also be run on each submit machine (recommended). If the submit and analysis machines are the same host, then only one RmtMgr needs to be running.
The QueMgr and RmtMgr processes start automatically at boot time and run continuously, but they use very little memory and CPU, so users will not notice any performance effect. These processes can run as root (Administrator on Windows) or, if those privileges are not available, as any other user.
Each RmtMgr binds to a known/chosen port number that is the same for every RmtMgr machine. Each RmtMgr process collects machine statistics on free CPU cycles, free memory and free disk space and returns this data to the QueMgr at frequent intervals.
The QueMgr then maintains a sorted list of each RmtMgr machine and its capacity to report back to a GUI/user. (A least loaded host selection is currently being developed so the QueMgr selects the actual host for a submit based on these statistics, instead of a user explicitly setting the hostname in the GUI.)
There are a few other Analysis Manager (AM) executables:
1. The TxtMgr - a simple text-based UI which is built on this API and demonstrates all these features.
2. The JobMgr - the GUI back-end process; it starts up on the same machine as the GUI (the submit machine) when a job is submitted and runs only for the life of the job. There is always 1 JobMgr process per job.
3. The analysis family: These four programs are all built on top of an additional API that implements the common work each of them must do. The common code is shared, and the custom work for each application is in a few separate routines: pre_app(), post_app(), and abort_app().
NasMgr - The MSC Nastran analysis process which communicates data to/from the JobMgr and spawns the actual MSC Nastran sub-process. It also reads include files and transfers them, adds FMS statements to the deck if appropriate, and periodically sends job resource data and msgpop message data to the JobMgr to store off.
MarMgr - The MSC Marc analysis process which does the same things as NasMgr, but for the MSC Marc application.
AbaMgr - The Abaqus analysis process which does the same things as NasMgr, but for the Abaqus application.
GenMgr - The General application analysis process, used for any other application. Does what NasMgr does except it has no knowledge of the application and just runs it and collects resource usage.
General outline of the Analysis Manager API:
With Analysis Manager there are 5 fundamental functions one can perform:
1. Submit a job
2. Abort a job
3. Monitor a specific job
4. Monitor all the hosts/queues
5. List statistics of a completed job
 
Note:  
With the job database viewer ($P3_HOME/p3manager_files/bin/ARCH/Job_Viewer) one can view/gather/query statistics about ALL jobs for a company/site/etc. as the QueMgr maintains a database of all job statistics. (The database is generally located in the $P3_HOME/p3manager_files/default/log/QueMgr.rdb file.)
Each function requires some common data and some unique data. Common data includes the QueMgr host, the port it is listening on, and the configuration structure information. Unique data is described further below.
Configure
The first step in any of the Analysis Manager functions is to connect to an already running QueMgr. To do this you must first know the host and port of the running QueMgr, which are usually found in the $P3_HOME/p3manager_files/org.cfg or the $P3_HOME/p3manager_files/default/conf/QueMgr.sid file. After that, simply call:
CONFIG *cfg; 
char qmgr_host[128];
int qmgr_port;
int ret_code;
int error_msg;
cfg = get_config(qmgr_host, qmgr_port, &ret_code, error_msg);
ret_code and possibly error_msg are returned for checking errors.
The CONFIG structure is defined in an include file shown below. Then initialize sub-parts of the configuration structure by calling
init_config(cfg);
Then determine the application name/index. The application is the name of the application you plan to work with, most likely MSC Nastran, but it could be anything that is already pre-configured. A configuration basically includes the application name and a list of hosts and paths where it is installed, as described in the $P3_HOME/p3manager_files/default/conf/host.cfg file, which is read by the QueMgr on start up. Each application has a different name and possibly different options to the Analysis Manager functions. All application names/indexes are in the cfg structure so the GUI can ask the user and check against the accepted list.
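As a minimal sketch (assuming the Analysis Manager include file and <string.h> are available; the helper find_app_index() below is purely illustrative and not part of the API), the application index could be looked up in the returned cfg structure like this:
/* Illustrative helper: scan the configured applications for a name match.
   Uses the NUM_APPS and progs[] fields of the CONFIG structure shown
   under Structures below. */
int find_app_index(CONFIG *cfg, const char *app_wanted)
{
    int i;
    for (i = 0; i < cfg->NUM_APPS; i++) {
        if (strcmp(cfg->progs[i].app_name, app_wanted) == 0)
            return i;        /* index of the chosen application */
    }
    return -1;               /* application not configured */
}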
Then call the function of choice:
1. Submit a job
2. Abort a job
3. Monitor a specific job
4. Monitor all the hosts/queues
5. List statistics of a completed job
Submit
For submit, the GUI needs to fill in the application structure data and make a call to submit the job. The call may block and wait for the job to complete (possibly a very long time), or it can return immediately. See the job info rcf/GUI settings listed below for what can be set and changed. Assuming defaults for ALL settings, only a jobname (input file selection), hostname, and (possibly) memory need to be set before submitting.
Then call
char *jobfile;
char *jobname; /* usually same as basename of jobfile */
int background;
int ret_code;
int job_number;
job_number = submit_job(jobfile,jobname,background,&ret_code);
This call goes through many steps: contacting the QueMgr, getting a valid reserved job number, asking the QueMgr to start a JobMgr, etc., and then sends all the config/rcf/GUI structure information to the JobMgr. The JobMgr runs for the life of the job and is essentially the back-end of the GUI, transferring files between the user submit machine and the analysis machine (the NasMgr, MarMgr, AbaMgr or GenMgr process).
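Putting the configure and submit steps together, a minimal submit sequence might look like the sketch below. All host, port, and file values are made-up examples, the error handling is only a guess based on the ret_code description above, and the meaning of the background flag is assumed to match the FOREGROUND/BACKGROUND convention shown under Remote Manager.
/* Minimal submit sketch; assumes the Analysis Manager include file,
   <stdio.h> and <string.h> are included. */
int submit_example(void)
{
    CONFIG *cfg;
    char qmgr_host[128];
    int qmgr_port, ret_code, job_number;
    int error_msg = 0;

    /* Host/port of the running QueMgr -- normally taken from the
       org.cfg or QueMgr.sid file described under Configure. */
    strcpy(qmgr_host, "quemgr.example.com");    /* made-up hostname */
    qmgr_port = 1500;                           /* made-up port     */

    cfg = get_config(qmgr_host, qmgr_port, &ret_code, error_msg);
    if (ret_code != 0)
        return -1;                   /* could not reach the QueMgr */
    init_config(cfg);

    /* Defaults for everything else; only jobfile/jobname are given.
       background = 1 asks submit_job() to return immediately. */
    job_number = submit_job("/home/user/jobs/wing.bdf", "wing.bdf",
                            1, &ret_code);
    if (ret_code != 0)
        printf("submit failed, ret_code = %d\n", ret_code);
    else
        printf("job %d submitted\n", job_number);
    return job_number;
}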
Abort
For abort, the GUI needs to query the QueMgr for a list of jobs, and then present this list for the user to select from:
char *qmgr_host;
int qmgr_port;
JOBLIST *job_list;
int job_count;
job_count = get_job_list(qmgr_host,qmgr_port,job_list);
Once a job is chosen, a simple call deletes it:
int job_number; 
char *job_user;
int ret_code;
ret_code = delete_job(job_number,job_user);
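Combining the two fragments above, an abort might be driven as in the sketch below. The jobname "wing.bdf" is a made-up value, qmgr_host and qmgr_port are those used in the get_job_list() fragment, and the JOBLIST field names come from the Structures section below.
/* Illustrative abort sketch: find the job by name in the returned list
   and delete it. */
int i;
for (i = 0; i < job_count; i++) {
    if (strcmp(job_list[i].job_name, "wing.bdf") == 0) {
        ret_code = delete_job(job_list[i].job_number, job_list[i].job_user);
        printf("delete_job returned %d\n", ret_code);
        break;
    }
}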
Monitor Running Job
For monitoring a specific job, the GUI needs to query the QueMgr for a list of jobs, and then present this list for the user to select from:
char *qmgr_host;
int qmgr_port;
JOBLIST *job_list;
int job_count;
job_count = get_job_list(qmgr_host,qmgr_port,job_list);
Once a job is chosen, a simple call with a severity level returns data:
int job_number; 
int severity_level;
int cpu, mem, disk;
int msg_count;
char **ret_string;
ret_string = monitor_job(job_number, severity_level,
                         &cpu, &mem, &disk, &msg_count);
The ret_string then contains a list (array of strings) of all messages the application stored (msgpop type) that are <= the severity level input, along with a resource usage string. The number of msgpop messages is stored in msg_count, to be referenced like:
for (i = 0; i < msg_count; i++)
    printf("%s", ret_string[i]);
printf("cpu time used by job = %d, mem used by job = %d, disk used by job = %d\n",
       cpu, mem, disk);
The CPU, MEM and DISK values are the current resources used by the job.
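A GUI will typically poll this call while the job runs. A minimal polling sketch, continuing the fragment above, is shown below; the 30-second interval and the fixed ten iterations are arbitrary choices, and a real GUI would instead stop when the job disappears from the QueMgr's running-job list.
/* Illustrative monitor loop: poll the job every 30 seconds, ten times. */
int poll;
for (poll = 0; poll < 10; poll++) {
    ret_string = monitor_job(job_number, severity_level,
                             &cpu, &mem, &disk, &msg_count);
    for (i = 0; i < msg_count; i++)
        printf("%s", ret_string[i]);
    printf("cpu = %d, mem = %d, disk = %d\n", cpu, mem, disk);
    sleep(30);                      /* <unistd.h> on Unix */
}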
Monitor Hosts/Queues
For monitoring all hosts/queues, the GUI needs to make a call and get back all QueMgr data for the chosen application. This gets complex: there are 5 different types/groups of data available. For now, let's assume only one type is wanted. The types are:
1. FULL_LIST
2. JOB_LIST
3. QUEMGR_LOG
4. QUE_STATUS
5. HOST_STATS
Each has its own syntax and set of data. For the QUE_STATUS type, the call returns an array of structures containing the hostname, the number of running jobs, the number of waiting jobs, and the maximum number of jobs allowed to run on that host, for the given (input) application.
char *qmgr_host;
int qmgr_port;
int job_count;
QUESTAT *que_info;
que_info = get_que_stats(qmgr_host, qmgr_port, &job_count);
for (i = 0; i < job_count; i++)
    printf("%s %d %d %d\n",
           que_info[i].hostname, que_info[i].num_running,
           que_info[i].num_waiting, que_info[i].maxtsk);
For FULL_LIST:
See Include File.
For JOB_LIST:
See Include File.
For QUEMGR_LOG, this is simply a character string of the last 4096 bytes of the QueMgr log file:
See Include File.
For HOST_STATS:
See Include File.
Monitor Completed Job
To list the statistics of a completed job, the GUI needs to query the QueMgr for a list of jobs, and then present this list for the user to select from:
char *qmgr_host;
int qmgr_port;
JOBLIST *job_list;
int job_count;
job_count = get_job_list(qmgr_host,qmgr_port,job_list);
Once a job is chosen, a simple call will return all the job data saved:
int job_number; 
JOBLIST *comp_info;
comp_info = get_completedjob_stats(job_number);
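As an illustrative follow-up, the saved data could then be displayed using the JOBLIST fields listed under Structures below (printf requires <stdio.h>):
/* Print a few of the saved fields for the completed job. */
if (comp_info != NULL)
    printf("job %d (%s) ran as %s on %s in %s\n",
           comp_info->job_number, comp_info->job_name,
           comp_info->job_user, comp_info->job_host,
           comp_info->work_dir);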
Remote Manager
On another level, a GUI could also connect to any RmtMgr and ask it to perform a command and return the output from that command. This is essentially a remote shell (rsh) host command, as on a Unix machine. This functionality may come in handy when adding to or extending the Analysis Manager product, for example to network install other MSC software. The syntax for this is as follows:
char *ret_msg;
int ret_code;
char *rmtuser;
char *rmthost;
int rmtport;
char *command;
int background;   /* FOREGROUND (0) or BACKGROUND (1) */
ret_msg = remote_command(rmtuser, rmthost,
                         rmtport, command, background, &ret_code);
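For instance, a sketch of running a simple command in the foreground on a remote analysis host could look like the following. The hostname and user come from the configuration example later in this appendix, while the port number and the success test on ret_code are assumptions.
/* Illustrative call: run "uname -a" as user "nastusr" on a remote host
   whose RmtMgr is assumed to listen on port 1800. */
char *output;
int rc;
output = remote_command("nastusr", "hal9000.macsch.com",
                        1800, "uname -a", 0 /* FOREGROUND */, &rc);
if (rc == 0 && output != NULL)
    printf("%s\n", output);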
Structures
The JOBLIST structure contains these members:
int job_number; 
char job_name[128];
char job_user[128];
char job_host[128];
char work_dir[256];
int port_number;
cfg structure from config.h:
typedef struct{ 
char org_name[NAME_LENGTH];
char org_name2[NAME_LENGTH];
char host_name[NAME_LENGTH];
unsigned int addr;
int port;
}ORG;
typedef struct{
char prog_name[NAME_LENGTH];
char app_name[NAME_LENGTH];
char args[PATH_LENGTH];
char extension[24];
}PROGS;
typedef struct{
char pseudohost_name[NAME_LENGTH];
char host_name[NAME_LENGTH];
char exepath[PATH_LENGTH];
char rcpath[PATH_LENGTH];
int glob_index;
int sub_index;
char arch[NAME_LENGTH];
unsigned int address;
}HSTS;
typedef struct{ 
int num_hosts;
HSTS *hosts;
}HOST;
typedef struct{
char pseudohost_name[NAME_LENGTH];
char exepath[PATH_LENGTH];
char rcpath[PATH_LENGTH];
int type;
}APPS;
typedef struct{ 
char host_name[NAME_LENGTH];
int num_subapps;
APPS subapp[MAX_SUB_APPS];
int maxtsk;
char arch[NAME_LENGTH];
unsigned int address;
}TOT_HST;
typedef struct{ 
char queue_name1[NAME_LENGTH];
char queue_name2[NAME_LENGTH];
int glob_index;
}QUES;
typedef struct{ 
int num_queues;
QUES *queues;
}QUEUE;
typedef struct{ 
char queue_name1[NAME_LENGTH];
char queue_name2[NAME_LENGTH];
HOST sub_host[MAX_APPS];
}TOT_QUE;
typedef struct{ 
char file_sys_name[NAME_LENGTH];
int model;
int max_size;
int cur_free;
}FILES;
typedef struct{ 
char pseudohost_name[NAME_LENGTH];
int num_fsystems;
FILES *sub_fsystems;
}TOT_FSYS;
typedef struct{ 
char sepuser_name[NAME_LENGTH];
}SEP_USER;
typedef struct{
int QUE_TYPE;
char ADMIN[128];
int NUM_APPS;
unsigned int timestamp;
/* prog names */
PROGS progs[MAX_APPS];
/* host stuff */
HOST hsts[MAX_APPS];
int total_h; 
TOT_HST *total_h_list;
/* que stuff */
char que_install_path[PATH_LENGTH];
char que_options[PATH_LENGTH];
int min_mem_value;
int min_disk_value;
int min_time_value;
QUEUE ques[MAX_APPS];
int total_q;
TOT_QUE *total_q_list;
/* file stuff */
int total_f; 
TOT_FSYS *total_f_list;

/* separate user stuff */
int total_u; 
SEP_USER *total_u_list;
}CONFIG;
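As a sketch of walking this structure (using only the fields shown above; app_index is assumed to have been determined as described under Configure, and printf requires <stdio.h>), a GUI might list the hosts configured for one application as follows:
/* Illustrative: print every host configured for the chosen application. */
void list_app_hosts(CONFIG *cfg, int app_index)
{
    int i;
    HOST *h = &cfg->hsts[app_index];

    printf("%s is configured on %d host(s):\n",
           cfg->progs[app_index].app_name, h->num_hosts);
    for (i = 0; i < h->num_hosts; i++)
        printf("  %s (%s) exe=%s\n",
               h->hosts[i].host_name,
               h->hosts[i].arch,
               h->hosts[i].exepath);
}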
An example of all the rcf/GUI settings from the app_config.h files:
cfg.total_host[0].host_name = hal9000.macsch.com 
cfg.total_host[0].arch = HP700
cfg.total_host[0].maxtasks = 3
cfg.total_host[0].num_apps = 3
cfg.total_host[0].sub_app[MSC/NASTRAN].pseudohost_name = nas_host_u
cfg.total_host[0].sub_app[MSC/NASTRAN].exepath = /msc/bin/nast705
cfg.total_host[0].sub_app[MSC/NASTRAN].rcpath = /msc/conf/nast705rc
cfg.total_host[0].sub_app[ABAQUS].pseudohost_name = aba_host_u
cfg.total_host[0].sub_app[ABAQUS].exepath = /hks/abaqus
cfg.total_host[0].sub_app[ABAQUS].rcpath = /hks/site/abaqus.env
cfg.total_host[0].sub_app[GENERIC].pseudohost_name = gen_host_u
cfg.total_host[0].sub_app[GENERIC].exepath = /apps/bin/GENERALAPP
cfg.total_host[0].sub_app[GENERIC].rcpath = NONE
cfg.total_host[1].host_name = daisy.macsch.com
cfg.total_host[1].arch = WINNT
cfg.total_host[1].maxtasks = 3
cfg.total_host[1].num_apps = 4
cfg.total_host[1].sub_app[MSC/NASTRAN].pseudohost_name = nas_host_nt
cfg.total_host[1].sub_app[MSC/NASTRAN].exepath = c:/msc/bin/nastran.exe
cfg.total_host[1].sub_app[MSC/NASTRAN].rcpath = c:/msc/conf/nast706.rcf
cfg.total_host[1].sub_app[ABAQUS].pseudohost_name = aba_host_nt
cfg.total_host[1].sub_app[ABAQUS].exepath = c:/hks/abaqus.exe
cfg.total_host[1].sub_app[ABAQUS].rcpath = c:/hks/site/abaqus.env
cfg.total_host[1].sub_app[GENERIC].pseudohost_name = gen_host_nt
cfg.total_host[1].sub_app[GENERIC].exepath = c:/apps/bin/GENERALAPP.exe
cfg.total_host[1].sub_app[GENERIC].rcpath = NONE
cfg.total_host[1].sub_app[GENERIC2].pseudohost_name = gen_host2_nt
cfg.total_host[1].sub_app[GENERIC2].exepath = c:/WINNT/System32/mem.exe
cfg.total_host[1].sub_app[GENERIC2].rcpath = NONE
# 
unv_config.auto_mon_flag = 0
unv_config.time_type = 0
unv_config.delay_hour = 0
unv_config.delay_min = 0
unv_config.specific_hour = 0
unv_config.specific_min = 0
unv_config.specific_day = 0
unv_config.mail_on_off = 0
unv_config.mon_file_flag = 0
unv_config.copy_link_flag = 0
unv_config.job_max_time = 0
unv_config.project_name = nastusr
unv_config.orig_pre_prog =
unv_config.orig_pos_prog =
unv_config.exec_pre_prog =
unv_config.exec_pos_prog =
unv_config.separate_user = nastusr
unv_config.p3db_file =
# 
nas_config.disk_master = 0
nas_config.disk_dball = 0
nas_config.disk_scratch = 0
nas_config.disk_units = 2
nas_config.scr_run_flag = 1
nas_config.save_db_flag = 0
nas_config.copy_db_flag = 0
nas_config.mem_req = 0
nas_config.mem_units = 0
nas_config.smem_units = 0
nas_config.extra_arg =
nas_config.num_hosts = 2
nas_host[hal9000.macsch.com].mem = 0
nas_host[hal9000.macsch.com].smem = 0
nas_host[daisy.macsch.com].mem = 0
nas_host[daisy.macsch.com].smem = 0
nas_config.default_host = nas_host_u
nas_config.default_queue = N/A
nas_submit.restart_type = 0
nas_submit.restart = 0
nas_submit.modfms = 0
nas_submit.nas_input_deck =
nas_submit.cold_jobname =
# 
aba_config.copy_res_file = 1
aba_config.save_res_file = 0
aba_config.mem_req = 0
aba_config.mem_units = 0
aba_config.disk_units = 2
aba_config.space_req = 0
aba_config.append_fil = 0
aba_config.user_sub =
aba_config.use_standard = 1
aba_config.extra_arg =
aba_config.num_hosts = 2
aba_host[hal9000.macsch.com].num_cpus = 1
aba_host[hal9000.macsch.com].pre_buf = 0
aba_host[hal9000.macsch.com].pre_mem = 0
aba_host[hal9000.macsch.com].main_buf = 0
aba_host[hal9000.macsch.com].main_mem = 0
aba_host[daisy.macsch.com].num_cpus = 1
aba_host[daisy.macsch.com].pre_buf = 0
aba_host[daisy.macsch.com].pre_mem = 0
aba_host[daisy.macsch.com].main_buf = 0
aba_host[daisy.macsch.com].main_mem = 0
aba_config.default_host = aba_host_u
aba_config.default_queue = N/A
aba_submit.restart = 0
aba_submit.aba_input_deck =
aba_submit.restart_file =
# 
gen_config[GENERIC].disk_units = 2
gen_config[GENERIC].space_req = 0
gen_config[GENERIC].mem_units = 2
gen_config[GENERIC].mem_req = 0
gen_config[GENERIC].cmd_line = jid=$JOBFILE mem=$MEM
gen_config[GENERIC].mon_file = $JOBNAME.log
gen_config[GENERIC].default_host = gen_host_u
gen_config[GENERIC].default_queue = N/A
gen_submit[GENERIC].gen_input_deck =
# 
gen_config[GENERIC2].disk_units = 2
gen_config[GENERIC2].space_req = 0
gen_config[GENERIC2].mem_units = 2
gen_config[GENERIC2].mem_req = 0
gen_config[GENERIC2].cmd_line =
gen_config[GENERIC2].mon_file = $JOBNAME.log
gen_config[GENERIC2].default_host = gen_host2_nt
gen_config[GENERIC2].default_queue = N/A
gen_submit[GENERIC2].gen_input_deck =
#