Examples of Configuration Files
The Analysis Manager is a distributed application and requires a predefined set of configuration files for its use. These configuration files may be changed using the Configuration Management tool, p3am_admin (AdmMgr) (see Configuration Management Interface), or they may be edited by hand. When one or more of the configuration files is changed, the Queue Manager must either be restarted or reconfigured to force it to read and recognize the changes.
Host Configuration File
To set up and execute different applications on a variety of physical computer hosts, the Analysis Manager uses a host configuration file (host.cfg) located in the directory:
$P3_HOME/p3manager_files/default/conf
where $P3_HOME is the location of the installation, typically /msc.
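For orientation, the configuration files described in this section are summarized below. host.cfg and the .p3amusers file reside in this conf directory and org.cfg sits one level up; the locations shown for the remaining files assume the same layout and may differ at a given site:
$P3_HOME/p3manager_files/default/conf/host.cfg      administrator, queue type, AM hosts, physical hosts, applications
$P3_HOME/p3manager_files/default/conf/disk.cfg      scratch file systems for each AM host
$P3_HOME/p3manager_files/default/conf/lsf.cfg       queue definitions (or nqs.cfg; only when QUE_TYPE is LSF or NQS)
$P3_HOME/p3manager_files/default/conf/msc.cfg       group/queue least-loaded criteria (optional)
$P3_HOME/p3manager_files/default/conf/.p3amusers    separate user accounts (optional)
$P3_HOME/p3manager_files/org.cfg                    links multiple organizational groups (optional)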
The host.cfg file contains five distinct areas of information: administrator, queue type, AM host information, physical host information, and application information, in this order. The queue type may be MSC, LSF, or NQS. The administrator is any valid user name except root (or Administrator on Windows).
The AM host information has these fields associated with it:
AM Hostname
A unique name for the combination of the analysis application and physical host. It can be called anything but must be unique, for example nas68_venus.
Physical Host
The physical host name where the analysis application will run.
Type
The unique integer ID assigned to this type of analysis. It is assigned automatically by the program, so the user should not have to worry about it.
Path
How this machine can find the analysis application. For MSC Nastran, this is the runtime script (typically the nast68 file); for MSC.Marc, ABAQUS, or GENERAL applications, this is the executable location.
rcpath
How this machine can find the analysis application runtime configuration file: the MSC Nastran nast68rc file or the ABAQUS site.env file. This is not applicable to MSC.Marc or GENERAL applications and should be filled with the keyword NONE.
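Putting these fields together, a single AM host entry in host.cfg (taken from the example file later in this section) looks like this:
Venus_nas68     venus     1     /msc/msc68/bin/nast68     /msc/msc68/conf/nast68rc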
The physical host information has the following fields associated with it:
Physical Host
Name of host machine for the use of the Analysis Manager
Class
Machine type (RS6K, HP700, etc.)
Max
Maximum allowable concurrent processes for this machine
MaxAppTsk
Maximum application tasks. This is used, for example, if four MSC Nastran hosts are configured but there are only enough licenses for three concurrent jobs. Without this setting the fourth job would always fail; with MaxAppTsk set to 3, the fourth job waits in the queue until one of the running jobs completes and is then submitted. This field is present only if the configuration file version is >= 2, which is set with the VERS: or VERSION: field at the top of the file.
 
Note:  
The MaxAppTsk setting must be added manually; there is no widget in the AdmMgr to do this. If there are no configuration files on startup of the AdmMgr, it will set the version to 2 and use 1000 as the MaxAppTsk. If configuration files exist and version 2 is set, it will honor whatever is already there and pass it through. If version 1 is set, MaxAppTsk is not written to the configuration files.
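For example, with VERSION: 2 set at the top of the file, MaxAppTsk appears as the fourth field of each APPLICATIONS entry, as in this line from the example host.cfg later in this section (here limiting MSC Nastran to three concurrent jobs):
1     NasMgr     MSC.Nastran     3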
The application information has the following fields associated with it:
 
Type
A number indicating analysis program type
Prog_name
The name of the application job manager for this application
Patran name
The name of the application, which corresponds to the Patran analysis preference
Options
Optional arguments for use with the GENERAL application.
If the scheduling system is a separate package (e.g., LSF or NQS), then the Analysis Manager will submit jobs to one of the queues provided. Queues are described below. Also, if the scheduler is separate from the Analysis Manager, the maximum task field is not used; all tasks are submitted through the queue, and the queueing system will execute or hold each task according to its own configuration. An example of a host.cfg file is given below. Each comment line must begin with a # character. All fields are separated by one or more spaces. All fields must be present.
#------------------------------------------------------
# Analysis Manager host.cfg file
#------------------------------------------------------
#
#
# A/M Config file version
# Que Type: possible choices are MSC, LSF, or NQS
#
VERSION: 2
ADMIN: am_admin
QUE_TYPE: MSC
#
#------------------------------------------------------
# AM HOSTS Section
#------------------------------------------------------
#
# Must start with a “P3AM_HOSTS:” tag.
#
# AM Host:
# Name to represent the choice as it will appear
# on the AM menus.
#
# Physical Host:
# Actual hostname of the machine to run the application on.
#
# Type:
# 1 - MSC.Nastran
# 2 - ABAQUS
# 3 - MSC.Marc
# 20 - User defined (General) application #1
# 21 - User defined (General) application #2
# etc. (max of 29)
#
# This field defines the application for this entry.
# Each value will have a corresponding entry in the
# “APPLICATIONS” section.
#
# EXE_Path:
# Where executable entry is made.
#
# RC_Path:
# Where runtime configuration file (if present) is found.
# Set to “NONE” if “General” application.
#
#------------------------------------------------------
# Physical Hosts Section
#------------------------------------------------------
#
# Must start with a “PHYSICAL_HOSTS:” tag.
#
# Class:
# HP700 - Hewlett Packard HP-UX
# RS6K - IBM RS/6000 AIX
# SGI5 - Silicon Graphics IRIX
# SUNS - Sun Solaris
# LX86 - Linux
# WINNT - Windows
#
# Max:
#
# Maximum allowable concurrent tasks for this host.
#
#------------------------------------------------------
# Applications Section
#------------------------------------------------------
#
# Must start with an “APPLICATIONS:” tag.
#
# Type: See above for values
# Prog_name:
#
# The name of the Patran AM Task Manager executable to start.
#
# This field must be set to the following, based on the
# application it represents:
#
# MSC.Nastran -> NasMgr
# HKS/ABAQUS -> AbaMgr
# MSC.Marc -> MarMgr
# Any General App -> GenMgr
#
# option args:
#
# This field contains the default command line which will
# appear in the AM user interface configure menu. This
# field is only valid for user defined (General) applications.
# The command line can contain any text including any of the
# following keywords (which will be evaluated at runtime):
#
# $JOBFILE Actual filename selected (w/o full path)
# $JOBNAME Jobname ($JOBFILE w/o extension)
# $P3AMHOST Hostname of AM host
# $P3AMDIR Dir on AM host where $JOBFILE resides
# $APPNAME Application name (P3 preference name)
# $PROJ Project Name selected
# $DISK Total Disk space requested (mb)
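#
#   For illustration (hypothetical job file beam.dat): option args of
#   "-j $JOBNAME -f $JOBFILE" would expand at runtime to
#   "-j beam -f beam.dat".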
#
#
# AM Host Physical Host Type EXE_Path RC_Path
#---------------------------------------------------------------------
P3AM_HOSTS:
 
Venus_nas675     venus     1      /msc/msc675/bin/nast675     /msc/msc675/conf/nast675rc
Venus_nas68      venus     1      /msc/msc68/bin/nast68       /msc/msc68/conf/nast68rc
Venus_aba53      venus     2      /hks/abaqus                 /hks/site/abaqus.env
Venus_mycode     venus     20     /mycode/script              NONE
Mars_nas68       mars      1      /msc/msc68/bin/nast68       /msc/msc68/conf/nast68rc
Mars_aba5        mars      2      /hks/abaqus                 /hks/site/abaqus.env
Mars_mycode      mars      20     /mycode/script              NONE
#---------------------------------------------------------------------
#
#Physical Host Class Max
#--------------------------------------------------------------
PHYSICAL_HOSTS:
 
venus     SGI4D     2
mars      SUN4      1
#--------------------------------------------------------------
#
#
#Type   Prog_name   Patran name   MaxAppTsk   [option args]
#--------------------------------------------------------------
APPLICATIONS:

#--------------------------------------------------------------
1      NasMgr     MSC.Nastran     3
2      AbaMgr     ABAQUS          3
3      MarMgr     MSC.Marc        3
20     GenMgr     MYCODE          3     -j $JOBNAME -f $JOBFILE
Disk Configuration File
This configuration file defines the scratch disk space and disk systems to use for temporary files and databases. Every AM host must have a filesystem associated with it.
In particular, the Analysis Manager’s MSC Nastran manager (NasMgr) generates MSC Nastran File Management Section (FMS) statements for each job submitted. These FMS statements initialize and allocate each MSC Nastran scratch and database file for the job. To define the files to be written for each scratch and database logical file, the Analysis Manager uses the disk configuration file (disk.cfg), which identifies the file systems to use on each host in the host.cfg file when running MSC Nastran. The disk configuration file therefore contains a list of each host, the file systems for that host, and each file system type. There are two Analysis Manager file system types: nfs or local (leave the field blank). An example of the disk.cfg file:
#---------------------------------------------------------------------
# Analysis Manager disk.cfg file
#---------------------------------------------------------------------
#
# AM Host
#
# AM host from the host.cfg file “Patran AM_HOSTS” section.
#
# File System
#
# The filesystem directory
#
# Type
#
# The type of filesystem. If the filesystem is local
# to the machine, this field is left blank. If the
# filesystem is NFS mounted, the string “nfs” appears
# in this field.
#
#
# AM Host          File System                    Type (nfs or blank)
#----------------------------------------------------------------------
Venus_nas675       /user2/nas_scratch
Venus_nas675       /venus/users/nas_scratch
#
Venus_nas68        /user2/nas_scratch
Venus_nas68        /venus/users/nas_scratch
Venus_nas68        /tmp
#
Venus_aba53        /user2/aba_scratch
Venus_aba53        /venus/users/aba_scratch
Venus_aba53        /tmp
#
Venus_mycode       /tmp
Venus_mycode       /server/scratch                nfs
#
Mars_nas68         /mars/nas_scratch
#
Mars_aba5.2        /mars/users/aba_scratch
Mars_aba5.2        /tmp
#
Mars_mycode        /tmp
#---------------------------------------------------------------------
Each comment line must begin with a # character. All fields are separated by one or more spaces. All fields must be present.
In this example, the term file system is used to mean a directory that may or may not be its own file system, that already exists, and that has permissions allowing any Analysis Manager user to create directories below it. It is recommended that the Analysis Manager file systems be directories with large amounts of disk space, restricted to the Analysis Manager’s use, because the Analysis Manager’s MSC Nastran, MSC.Marc, ABAQUS, and GENERAL managers only know about their own jobs and processes.
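For example, an administrator might prepare such a directory with commands like the following (the path is illustrative, and a site may prefer group-based permissions rather than a world-writable sticky-bit directory):
mkdir -p /user2/nas_scratch
chmod 1777 /user2/nas_scratch     # any user can create subdirectories; the sticky bit protects other users' files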
Queue Configuration File
If a separate scheduling system (i.e., LSF or NQS) is being used at this site, the Analysis Manager can interact with it using the queue configuration file. This file has the same name as the queue type field in the host.cfg file (i.e., QUE_TYPE: LSF or NQS), with a .cfg extension (i.e., lsf.cfg or nqs.cfg). The queue configuration file lists each queue name and all hosts allowed to run applications for that queue. In addition, a queue install path is required so that the Analysis Manager can execute queue commands with the proper path. An example of a queue configuration file is given below.
Each comment line must begin with a # character. All fields are separated by one or more spaces. All fields must be present.
#------------------------------------------------------
# Analysis Manager lsf.cfg file
#------------------------------------------------------
#
# Below is the location (path) of the LSF executables (i.e. bsub)
#
QUE_PATH: /lsf/bin
QUE_OPTIONS:
QUE_MIN_MEM:
QUE_MIN_DISK:
#
# Below, each queue which will execute MSC tasks is listed.
# Each queue contains a list of hosts (from host.cfg) which
# are eligible to run tasks from the given queue.
#
# NOTE:
# Each queue can only contain one Host of a given application
# version (i.e., if there are two version entries for
# MSC.Nastran, nas67 and nas68, then each queue
# set up to run MSC.Nastran tasks could only include
# one of these versions. To be able to submit to
# the other version, create a separate, additional
# MSC queue containing the same LSF queue name, but
# referencing the other version)
#
#
TYPE: 1
#
#MSC Que         LSF Que      Hosts
#---------------------------------------------------------
Priority_nas     priority     mars_nas675, venus_nas675
Normal_nas       normal       mars_nas675, venus_nas675
Night_nas        night        mars_nas675
#---------------------------------------------------------
#
TYPE: 2
#
#MSC Que         LSF Que      Hosts
#---------------------------------------------------------
Priority_aba     priority     mars_aba53, venus_aba53
Normal_aba       normal       mars_aba53, venus_aba53
Night_aba        night        mars_aba53, venus_aba53
#---------------------------------------------------------
Organizational Group Configuration File
If you wish to link all organizational groups (running Queue Managers) together, so that any user may see and switch between these organizational groups from within the Analysis Manager without setting any environment variables, then it is necessary to create an org.cfg file in the top level directory:
$P3_HOME/p3manager_files/org.cfg
where $P3_HOME is the Patran installation location.
Three fields are required in this file:
 
org
The organizational group name
master host
The host on which the Queue Manager daemon is running for the particular organizational group in question
port #
The unique port ID used for this Queue Manager daemon. Each Queue Manager must have been started with the -port option.
An example of this configuration file follows:
#------------------------------------------------------
# Patran ANALYSIS MANAGER org.cfg file
#------------------------------------------------------
#
 
 
# Org
Master Host
Port #
#------------------------------------------------------
default
casablanca
1500
atf
atf_ibm
1501
lsf_atf
atf_sgi
1502
support
umea
1503
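Each organizational group’s Queue Manager daemon must then be started with the -port value listed for it in org.cfg. A minimal sketch, assuming the Queue Manager daemon executable is named QueMgr (following the *Mgr naming used elsewhere in this section; the actual command name and options may differ by installation and platform):
QueMgr -port 1500     (started on casablanca for the default group)
QueMgr -port 1501     (started on atf_ibm for the atf group)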
Separate Users Configuration File
In order to allow execution of analysis jobs on machines where the user does not have an account, the system administrator may have to set up special accounts or allow access to other accounts for users to submit jobs. This is done with the .p3amusers configuration file located in $P3_HOME/p3manager_files/<org>/conf.
The file contents are very simple: each line contains the name of one user account that other users will be allowed to use when submitting jobs. As an example:
user1
user2
sippola
smith
The filename begins with a period (“.”), meaning it is hidden in a normal directory listing. The Queue Manager daemon must be restarted once this file is created or modified.
 
Note:  
Any user account that is configured in this manner must exist not only on the machine where the analysis is going to run, but also on the machine from which the job was submitted.
The capability, or necessity, of this separate users file has largely been made obsolete. In general the following applies:
1. On Unix machines, if the RmtMgrs are running as root, they can run the job as the user (or as the separate user specified by this file) with no problem.
2. On Unix machines, if the RmtMgrs are running as a specific user, the job will run as that user regardless of the user (or separate user) who submitted the job.
3. On Windows, the job runs as whoever is running the RmtMgr on the PC. The user (and separate user) is ignored.
Group/Queue Feature
This configuration file, msc.cfg, allows the default least-loaded criteria to be modified when using the host grouping feature for automatically selecting the least loaded machine to submit to. The file contents look like:
SORT_ORDER: free_tasks cpu_util last_job_time avail_mem free_disk
GROUP: grp_nas2004
MIN_DISK: 10
MIN_MEM: 5
MAX_CPU_UTIL: 95
The SORT_ORDER line lists the names of the sort criteria in the order in which you want eligible hosts sorted. The remaining lines are then repeated for each group whose defaults you want to change; that is, you define a GROUP, MIN_DISK, MIN_MEM, and MAX_CPU_UTIL entry for each such group.
A group cannot contain multiple entries that use the same physical host (e.g., nast2004_host1 and nast2001_host1 in the above example), because the Analysis Manager would not know which to use. In this case, just create another group name (grp_nas2001, as above) and it will work as expected. You can have different applications in the same group with no problems. In the above example you could have used grp_nas2004 as the group name for all the MSC Nastran entries (possibly changing the name of the group to make it clearer that it is for hosts which run MSC Nastran), or you can keep them separate, with the added flexibility of defining a different sort order and util/mem/disk criteria for each application/group.
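For instance, an msc.cfg with two groups, each with its own thresholds, might look like the following (the group names and values are illustrative):
SORT_ORDER: free_tasks cpu_util last_job_time avail_mem free_disk
GROUP: grp_nas2004
MIN_DISK: 10
MIN_MEM: 5
MAX_CPU_UTIL: 95
GROUP: grp_nas2001
MIN_DISK: 20
MIN_MEM: 10
MAX_CPU_UTIL: 90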