Configuration Management Interface
The Analysis Manager requires a predefined set of configuration files. These configuration files can be changed and validated using the Configuration Management interface (the p3am_admin executable), which provides a menu-driven way to change and test the configuration files. Examples of the configuration files are found in Examples of Configuration Files. You may also edit them using any text editor; however, you will probably find the administration tool more intuitive until you become familiar with the configuration files.
To run p3am_admin from the installation directory using all defaults, type:
$P3_HOME/bin/p3am_admin
or to call out a specific set of configuration files other than the default:
$P3_HOME/bin/p3am_admin -org <org>
where <org> is the name of the directory containing the configuration files, located in $P3_HOME/p3manager_files/<org>/{conf, log, proj}. Alternatively, specify the full path:
<path_name>/AdmMgr $P3_HOME -org <org>
where:
<path_name> = $P3_HOME/p3manager_files/bin/<arch>/
<arch> = the architecture type of the machine you wish to run on, which can be one of the following:
HP700 - Hewlett Packard HP-UX
RS6K - IBM RS/6000 AIX
SGI5 - Silicon Graphics IRIX
SUNS - Sun SPARC Solaris
LX86 - Linux (MSC or Red Hat)
WINNT - Windows 2000 or XP
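For example, on a Sun SPARC Solaris machine with the Analysis Manager installed in /msc/patran200x (the installation path shown here is illustrative only), the full-path form of the command would be:
/msc/patran200x/p3manager_files/bin/SUNS/AdmMgr /msc/patran200x -org default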
The arguments are defined as follows:
$P3_HOME
The path where the Analysis Manager is installed. This path is used to locate the p3manager_files directory. For example, if /msc/patran200x is specified, the p3am_admin (AdmMgr) program will look for the /msc/patran200x/p3manager_files directory. Typically, the install directory is /msc/patran200x and is defined in an environment variable called $P3_HOME.
-org <org>
This is the organizational group to be used. See Organization Environment Variables for a description of how organizations are used. It is the name of the directory under the p3manager_files directory that contains the configuration files.
Both of the arguments listed above are optional. If they are not specified, the p3am_admin (AdmMgr) program will check for the following two environment variables:
P3_HOME
The path where the Analysis Manager is installed.
P3_ORG
The organization to be used. This is the <org> directory.
If the command line arguments are not specified, then at least the P3_HOME environment variable must be set. The P3_ORG variable is not required. If the P3_ORG variable is not set and the -org option is not provided, an organization named default is used. In that case, p3am_admin (AdmMgr) will check for configuration files in the following location:
$P3_HOME/p3manager_files/default/conf
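For example (a sketch assuming a C shell on Unix; the installation path and organization name shown are illustrative), the environment variables can be set once and p3am_admin then run without arguments:
setenv P3_HOME /msc/patran200x
setenv P3_ORG engineering
$P3_HOME/bin/p3am_admin
Users of Bourne-type shells would use export (for example, export P3_HOME=/msc/patran200x) instead of setenv.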
When running the p3am_admin (AdmMgr) program, it is recommended that this be done on the master node and as the root user. The p3am_admin (AdmMgr) program can be run by normal users, but some of the testing options will not be available. In addition, a normal user may not have the necessary privileges to save changes to the configuration files or to start up a Queue Manager daemon.
When p3am_admin (AdmMgr) starts up, it takes the arguments provided (or environment variables) and checks to see if configuration files already exist. The configuration files should exist as follows; the last two are only necessary if LSF or NQS queueing is used.
$P3_HOME/p3manager_files/<org>/conf/host.cfg
$P3_HOME/p3manager_files/<org>/conf/disk.cfg
$P3_HOME/p3manager_files/<org>/conf/lsf.cfg
$P3_HOME/p3manager_files/<org>/conf/nqs.cfg
If these files exist, they will be read in for use within the p3am_admin (AdmMgr) program. If these files are not found, p3am_admin (AdmMgr) will start up in an initial state. In this state there are no hosts, filesystems, or queues defined and they must all be added using the p3am_admin (AdmMgr) functionality.
Therefore, upon initial installation and/or configuration of the Analysis Manager, the p3am_admin (AdmMgr) program will come up in an initial state and the user can build up configuration files to save.
Action Options
The initial form for p3am_admin (AdmMgr) has the following Actions/Options:
1. Modify Config Files
2. Test Configuration
3. Reconfigure Que Mgr
On Windows the Administration tree tab is the equivalent:
Modify Configuration Files
Modify Config(uration) Files has the following Objects:
1. Applications
2. Physical Hosts
3. Hosts
4. Filesystems
5. Queues
Selecting the Queue Type
The Analysis Manager requires a Queue Type: LSF, NQS, or the Analysis Manager's own queueing capability. This typically should be the first thing set when setting up a new configuration.
To select or change the Queue Type, click on the Queue Type: menu on the right side of the Modify Config Files form and choose LSF, NQS, or MSC (or AM) Queue. Only one queue type may be selected.
To save the configuration, the Apply button must be pressed and the newly added queue type information will be saved in the host.cfg file.
Note:  
Apply saves all configuration files: host, disk, and, if applicable, lsf or nqs.
Queue Managers set up on Windows only have the choice of the default MSC Queue type.
LSF and NQS are not supported for Queue Managers running on Windows.
Administrator User
You must also set the Admin user. This should not be root on Unix or the administrator account on Windows, but should be a normal user name.
Configuration Version
There are three configuration versions. The functionality available for setup depends on which version you select. Version 1 is the original.
Version 2 adds the ability to limit the maximum number of tasks for any given application allowed to run at any one time. If this number is exceeded, additional submittals are queued until the number of running tasks for that application drops below the limit. This is typically used when only a limited number of application licenses is available, so that a job is not submitted unless a license is available; otherwise the application might fail because no license is available.
Version 3 includes all capabilities of versions 1 and 2, and also adds the ability to set up a host group, made up of several hosts, that is monitored for the least loaded machine. Once a machine in that group satisfies the loading criteria, the job is submitted to that machine.
Applications
Since the Analysis Manager can execute different applications, it needs to know which applications to execute and how to access them. This configuration information is stored in the host.cfg file located in the $P3_HOME/p3manager_files/default/conf directory. This portion of the host.cfg file contains the following fields:
type
An integer number used to identify the application. The user never has to worry about this number because it is automatically assigned by the program.
program name
Program names can be either:
NasMgr
for executing MSC Nastran
MarMgr
for executing MSC.Marc
AbaMgr
for executing ABAQUS
GenMgr
for executing other analysis modules.
Patran name
The name of the Patran Preference that the Analysis Manager keys off of when it is invoked. These can be MSC Nastran, MSC.Marc, ABAQUS, ANSYS, etc. Check the exact Patran Preference spelling and remove any spaces. If the Preference does not exist, then the first configured application is used when the Analysis Manager is invoked from Patran, after which the user can change it to the desired application.
optional args
Used for generic program execution only. These specify the arguments to be added to the invoking line when running a generic application.
MaxAppTask
By default this is not set. If the configuration file version is set to 2 or 3, then you may specify the maximum number of tasks that the given application can run at any one time (on all machines). This is convenient when you want to prevent jobs from being submitted and then failing because too many jobs are already running and no licenses are available.
The p3am_admin (AdmMgr) program can be used to add and delete applications or change any field above as shown in the forms below.
The exception to this is the Maximum Number of Tasks. This value must be changed manually by editing the configuration file and then restarting the Queue Manager service on Windows. On UNIX, this can be controlled through the Administration GUI.
Adding an Application
To add an application, select the Add Application button. (On Windows, right mouse click the Applications tree tab.) An application list form appears from which an application can be selected. If GENERAL is selected the Application Name and Optional Args data boxes appear on the main form.
For GENERAL, enter the name of the application as it is known by the Patran Preference, without any spaces. For example, if ANSYS 5 is a Preference, then enter ANSYS5.
Enter the optional arguments that are needed to run the specified analysis code. For example, if an executable for the MARC analysis code needs the arguments -j jobname, you can specify -j $JOBNAME as the optional args. Arguments can be specified explicitly, such as -j, or as variables, such as $JOBNAME. The following variables are available (an illustrative example follows the list):
 
$JOBFILE
Actual filename selected (without full path)
$JOBNAME
The jobname ($JOBFILE without extension)
$P3AMHOST
The client hostname from where the job was submitted
$P3AMDIR
Directory on client host where $JOBFILE resides
$APPNAME
Application name (Patran Preference name)
$PROJ
Project Name selected
$DISK
Total Disk space requested (mb)
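For example (an illustrative entry only; the flags shown are hypothetical and depend entirely on what the generic application accepts), a GENERAL application whose command line needs the job name, the submitting host, and the client directory could use optional args such as:
-j $JOBNAME -host $P3AMHOST -dir $P3AMDIR
Each variable is replaced with its current value when the application is invoked.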
Up to 10 GENERAL applications can be added. To save the configuration, the Apply button must be pressed and the newly added application information will be saved in the host.cfg file. On Windows this is Save Config Settings under the Queue pull down menu.
 
Note:  
Apply saves all configuration files: host, disk, and if applicable, lsf or nqs.
Deleting an Application
To remove an application, select the Delete Application button. A list of defined applications appears.
Select the application to be deleted by clicking on its name in the list. Then, select OK. The application will be removed and the list of applications will disappear.
On Windows, simply select the application you want to delete from the Applications tree tab and press the Delete button (or right-mouse click the application and select Delete).
To save the configuration, the Apply button must be pressed and the newly deleted application information will be saved in the host.cfg file. On Windows this is Save Config Settings under the Queue pull down menu.
 
Note:  
Apply saves all configuration files: host, disk, and, if applicable, lsf or nqs.
Physical Hosts
Since the Analysis Manager can execute jobs on different hosts, it needs to know about each analysis host. Host configuration for the Analysis Manager is done via the host.cfg file located in the $P3_HOME/p3manager_files/default/conf directory.
This portion of the host.cfg file contains the following fields:
physical host
Name of the host machine for use by the Analysis Manager
class
System & O/S type:
HP700 - Hewlett Packard HP-UX
RS6K - IBM RS/6000 AIX
SGI5 - Silicon Graphics IRIX
SUNS - Sun SPARC Solaris
LX86 - Linux (MSC or Red Hat)
WINNT - Windows 2000 or XP
maximum tasks
Maximum allowable concurrent job processes for this machine.
The p3am_admin (AdmMgr) program can be used to add and delete hosts or change any field above as shown in the forms below.
Adding a Physical Host
To add a host for use by the Analysis Manager, press the Add Physical Host button. (On Windows, right mouse click the Physical Hosts tree tab.) A new host description will be created and displayed in the left scrolled window, with Host Name: Unknown and Host Type: UNKNOWN.
Enter the name of the host in the Host Name box, and select the system/OS in the Host Type menu. Additional hosts can be added by repeating this process.
When all hosts have been added, select Apply and the newly added host information will be saved in the host.cfg file. On Windows this is Save Config Settings under the Queue pull down menu.
 
Note:  
Apply saves all configuration files: host, disk, and if applicable, lsf or nqs.
Deleting a Host
To remove a host from use by the Analysis Manager, select the Delete Physical Host button on the bottom of the p3am_admin (AdmMgr) form. A list of possible hosts will appear.
Select the host to be deleted by clicking on the hostname in the list. Then, select OK. The host will be removed from the list of hosts and the list will go away.
On Windows, simply select the Host you want to delete from the Physical Hosts tree tab and press the Delete button (or right-mouse click the host and select Delete).
When all host configurations are ready, select Apply and the revised host.cfg file will be saved, excluding the deleted hosts. On Windows this is Save Config Settings under the Queue pull down menu.
Analysis Manager Host Configurations
In addition to specifying physical hosts, it is necessary to specify specific names by which the Analysis Manager can recognize the actions it should take on various hosts. For example, ABAQUS and MSC Nastran may be configured to run on the same physical host, or two versions of MSC Nastran may be installed on the same physical host. To account for this, each combination of application and physical host has its own name, or AM host name, assigned to it. Host configuration for the Analysis Manager is done via the host.cfg file located in the $P3_HOME/p3manager_files/default/conf directory.
This portion of the host.cfg file contains the following fields:
AM hostname
Unique name for the combination of the analysis application and physical host. It can be called anything but must be unique, for example, nas68_venus.
physical host
The physical host name where the analysis application will run.
type
The unique integer ID assigned to this type of analysis. This is automatically assigned by the program and the user should not have to worry about this.
path
How this machine can find the analysis application. For MSC Nastran, this is the runtime script (typically the nast200x file); for MSC.Marc, ABAQUS, and GENERAL applications, this is the executable location.
rcpath
How this machine can find the analysis application runtime configuration file - the MSC Nastran nast200xrc file or the ABAQUS site.env file. This is not applicable to MSC.Marc or GENERAL applications and should be filled with the keyword NONE.
The p3am_admin (AdmMgr) program can be used to add and delete AM hosts and change any field above as shown by the forms below.
Adding an AM Host
An AM host is a unique name which the user will specify when submitting a job. Information contained in the AM host is a combination of the physical host and application type along with the physical location of that application. To add a specific AM host press the Add AM Host button. A new host description will be created and displayed in the left scrolled window, with AM Host Name: Unknown, Physical Host: UNKNOWN, and Application Type: Unknown.
Enter the unique name of the host in the AM Host Name box, and select the Physical Host that this application will run on. The application is selected from the Application Type menu. Then, specify the Configuration Location and Runtime Location paths in the corresponding boxes. The unique name should reflect the name of the application to be run and where it will run. For example, if V68 of MSC Nastran is to be run on host venus, then specify NasV68_venus as the AM host name.
The Runtime Location is the actual path to the executable or script to be run, such as /msc/bin/nas68 for MSC Nastran. The Config Location is the actual path to the MSC Nastran rc (nast68rc) file or the ABAQUS site.env file.
Additional AM hosts can be added by repeating this process.
For each AM host, at least one filesystem must be specified. Use the Add Filesystem capability in Modify Config Files/Filesystems to specify a filesystem for each added host.
When all hosts have been added, select Apply and the newly added host information will be saved in the host.cfg file. On Windows this is Save Config Settings under the Queue pull down menu. Note that Apply saves all configuration files: host, disk, and if applicable, lsf or nqs.
For Group, see Groups (of hosts).
Deleting an AM Host
To remove a host from use by the Analysis Manager, select the Delete AM Host button. A list of possible hosts will appear.
Select the host to be deleted by clicking on the hostname in the list. Then, select OK. The host will be removed from the list of hosts and the list of hosts will go away.
On Windows, simply select the AM Host you want to delete from the AM Hosts tree tab and press the Delete button (or right-mouse click the host and select Delete).
When all host configurations are ready, select Apply and the revised host.cfg file will be saved, excluding the deleted hosts. On Windows this is Save Config Settings under the Queue pull down menu.
Disk Configuration
In order to define the filesystems to which scratch and database files will be written, the Analysis Manager needs, in the disk.cfg file, a list of each filesystem for each host that is to be used when running analyses. This file contains a list of each host, a list of each filesystem for that host, and the filesystem type. There are two different Analysis Manager filesystem types: NFS and local.
Adding a Filesystem
Use the Modify Config Files/Filesystems form to specify or add a filesystem for use by the Analysis Manager.
Press the Add Filesystem button. Then, select a host from the list provided.
There are two types of filesystems: NFS and local. Select the appropriate type for the newly added filesystem.
Additional filesystems can be added by repeating this process. Multiple filesystems can be added for each host. When all filesystems have been added, select Apply and the newly added filesystem information will be saved in the disk.cfg file.
Each host must contain at least one filesystem.
After adding a host or filesystem, test the configuration information using the Test Configuration form. See Test Configuration.
Note:  
When using the Analysis Manager with LSF or NQS, you must run the administration program and start a Queue Manager on the same machine where the LSF or NQS executables are located.
On Windows the form appears as below:
When an AM Host is created, one filesystem is created by default (c:\temp). You can add more filesystems to an AM Host by selecting it under the Disk Space tree tab and pressing the Add button. You can change the directory path by clicking on the Directory itself and editing it in the normal Windows manner. The Type is changed with the pulldown menu next to the Directory name. If the filesystem is a Unix filesystem, make sure you remove the c:, e.g., /tmp.
Deleting a Filesystem
At the bottom of the Modify Config Files/Filesystems form, select the Delete Filesystem button to delete a filesystem from use by the Analysis Manager.
Then, select a host from the list provided, and click OK.
After selecting a host, a list of filesystems defined for the chosen host will appear. Choose the filesystem to delete from this list and click OK.
On Windows, select the AM Host under the Disk Space tree tab and press the Delete button. The last filesystem created is deleted.
Additional filesystems can be deleted by repeating this process. When all appropriate filesystems have been deleted, select Apply and the updated filesystem information will be saved in the disk.cfg file. On Windows this is Save Config Settings under the Queue pull down menu.
Queue Configuration
If the LSF or NQS scheduling system is being used at this site, the Analysis Manager can interact with it using the queue configuration file (i.e., lsf.cfg or nqs.cfg). Ensure that LSF or NQS Queue is set for the Queue Type field in the Modify Config Files form. See Analysis Manager Host Configurations. This sets a line in the host.cfg file to QUE_TYPE: LSF or NQS. The Queue Manager configuration file lists each queue name and all hosts allowed to run MSC Nastran, MSC.Marc, ABAQUS, or other GENERAL applications for that queue. In addition, a queue install path is required so that the Analysis Manager can execute queue commands with the proper path.
 
Note:  
NQS and LSF are only supported by Unix platform Queue Managers. Although you can submit to an LSF or NQS queue from Windows to a Unix platform, the Windows Queue Manager does not support LSF or NQS submittals at this time.
Adding a Queue
To add a queue for use by the Analysis Manager, press the Add Queue button on the bottom of the p3am_admin (AdmMgr) form. A new queue description will be created and displayed on the left panel, with MSC Queue Name: Unknown and LSF (or NQS) Queue Name: Unknown.
Enter the names of the queue in the MSC Queue Name and LSF (or NQS) Queue Name boxes provided. These names can be the same or different. In addition, the administrator must choose one or more hosts from the listbox on the right side of the specified queue name. The hosts in the listbox only appear after selecting an application from the Application pulldown menu; only those hosts configured to run that application will appear in the listbox. These are the hosts which will be allowed to run the analysis application when jobs are submitted to that queue.
Additional queues can be added by repeating this process. When all queues have been added, press Apply and the newly added queue information will be saved in the lsf.cfg (or nqs.cfg) file.
Various pieces of information need to be supplied for the Analysis Manager to communicate properly with the queueing software. The most important is the Executable Path. Enter the full path where the NQS or LSF executables can be found. In addition, you may specify additional (optional) parameters for the NQS or LSF executables to use if necessary. Keywords can also be used; a description of how these keywords work can be found in General. Two keywords are available, MEM and DISK, which evaluate to the Minimum MEMory and DISK space that have been specified. For example, if an NQS command has these additional parameters: -nr -lm $MEM -lf $DISK
then submission will be
qsub -nr -lm <current MEM value> -lf <current DISK value>
where the current MEMory value is the larger of the MEMory specified here or the general memory requirement specified by the user. Current DISK space operates similarly. See Memory and/or Disk Space. The MEM and DISK specified here are only used if additional parameters using the keywords are supplied.
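The same mechanism would apply with LSF (a hypothetical illustration; -M is the bsub memory-limit option, and the options appropriate for your site depend on your LSF configuration). Additional parameters of:
-M $MEM
would result in a submission of:
bsub -M <current MEM value> ...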
Deleting a Queue
To remove a queue from use by the Analysis Manager, press the Delete Queue button on the bottom of the p3am_admin (AdmMgr) form. A list of possible queues will appear.
Select the queue to be deleted by clicking on the queue name in the list. Then, select OK. The queue will be removed from the list of queues and the list of queues will go away.
When the queue configuration is ready, select Apply and the revised lsf.cfg (or nqs.cfg) file will be saved, excluding the deleted queues.
Groups (of hosts)
This is a nice feature that allows you to define a group. This group attribute can then be assigned to an AM Host. All AM Hosts with this attribute are grouped together, and when a job is submitted, the host in the group that best matches the least-loaded criteria is the one selected for job submission. This is a semi-automatic host selection mechanism based on criteria that are explained below.
This version of the Analysis Manager supports the concept of groups of hosts. In the host.cfg file, if you specify VERSION: 3 as the first non-commented line and also add the group/queue name at the end of each am_host line in the AM_HOSTS section, this feature is enabled. Here is an example:
VERSION: 3
...
AM_HOSTS:
#am_host host type bin path rc path group
#------------------------------------------------------------------------------
N2004_hst1 host1 1 /msc/bin/n2004 /msc/conf/nast2004rc grp_nas2004
N2004_hst2 host2 1 /msc/bin/n2004 /msc/conf/nast2004rc grp_nas2004
N2004_hst3 host3 1 /msc/bin/n2004 /msc/conf/nast2004rc grp_nas2004
N2001_hst1 host1 1 /msc/bin/n2001 /msc/conf/nast2001rc grp_nas2001
N2001_hst2 host2 1 /msc/bin/n2001 /msc/conf/nast2001rc grp_nas2001
N2001_hst3 host3 1 /msc/bin/n2001 /msc/conf/nast2001rc grp_nas2001
M2001_hst1 host1 3 /m2001/marc NONE grp_mar2001
M2001_hst2 host2 3 /m2001/marc NONE grp_mar2001
M2001_hst3 host3 3 /m2001/marc NONE grp_mar2001
...
In this configuration, when you submit a job, you will also have the choice of the group name, shown with the label 'least-loaded-grp:<group name>' to distinguish it from regular host names. When you select this group instead of a regular host, the Analysis Manager decides which host from the group is best suited to run the job and starts it there when possible. Here, best suited means the next available host based on several factors, including:
Free tasks on each host (maximum tasks minus currently running jobs)
Cpu utilization of host
Available memory of host
Free disk space of host
Time since most recent job was started on host
If, in the above example, you submitted an MSC Nastran job to grp_nas2004, there are three machines the Analysis Manager could select to run the job: host1, host2, or host3. The Analysis Manager will query each host for its current cpu utilization, available memory, and free disk space (as configured by the Analysis Manager), as well as the free tasks and the time since an Analysis Manager job was last started, and determine which machine, if any, can run the job. If more than one machine can run the job based on the criteria above, the Analysis Manager selects the best-suited host by sorting the acceptable hosts in a user-selectable sort order. If no machine meets the criteria, the job remains queued, and the Analysis Manager will try again to find a suitable host at periodic intervals. The user-selectable sort order is specified in an optional configuration file called msc.cfg. If this file does not exist, the sort order and criteria are as follows:
free_tasks
cpu_util
avail_mem
free_disk
last_job_time
Where the defaults for cpu util, available mem and disk are:
Cpu util: 98
Available mem: 5 mb
Available disk: 10 mb
Thus, any host with cpu util < 98, available mem > 5 MB, available disk > 10 MB, and at least one free task (so it can start another Analysis Manager job) is eligible to run a job, and the best-suited host is the one that comes first after all eligible hosts are sorted. You can change the sort order and the defaults for cpu util, available mem, and disk in the msc.cfg file. The msc.cfg file resides in the same location as host.cfg and disk.cfg, and its format is explained in Group/Queue Feature.
Test Configuration
The p3am_admin (AdmMgr) program has various tests that facilitate verification of the configuration.
Application Test
Changes to the host.cfg file dealing with defined applications can be tested by selecting the Test Configuration/Applications option. The Applications Test form will appear when the Application Test button is pressed. On Windows, press the Test Configuration button under Administration.
This test checks to make sure that:
1. Applications are defined only once.
2. At least one AM host has been assigned on which the application can be executed.
Physical Hosts Test
Changes to portions of the host.cfg file dealing with physical hosts can be tested by selecting the Test Configuration/Physical Hosts option.
At the bottom left of the form are two buttons:
1. Basic Host Test.
2. Network Host Test.
Basic Host Test
The Basic Host Test will validate the host configuration information in the host.cfg file. There are no requirements for running the Basic Host Test. A message box provides status information as each of the following Basic Host Tests are run:
1. Validates that at least one host is present in the host.cfg file.
2. Ensures that each host specified is a valid host with the nameserver (i.e., makes sure the machine on which the p3am_admin (AdmMgr) program is running recognizes each of the host names provided).
3. Ensures that a valid Host Type has been provided for each of the hosts (i.e., when a new host is added, the Host Type is set to Unknown; this check makes sure the user changed it to something valid).
4. Checks that a Master host has been selected.
5. Makes sure two hosts with the same address were not specified.
If a problem is detected, close the form and return to the Modify Config File form to correct the configuration.
Network Host Test
The Network Host Test will validate all of the physical host configuration information in the host.cfg file, and validate communication paths between hosts.
Requirements to run the Network Host Test include:
1. Must be root.
2. Must be on the Master node (the host running the Queue Manager).
3. Must provide a username (each user must be tested separately).
A message box provides status information as each of the following network host tests is run:
1. Checks user remote command (rcmd) access between the Master node and other specified hosts.
2. Validates that each host has the correct architecture setting.
3. Makes sure that each host sees the installation directory in the same way.
4. Checks the Analysis Manager directories (i.e., makes sure the user can read all configuration files and can create a directory under the proj directory and other locations).
If a problem is detected, close the form and return to the Modify Config Files form to correct the configuration or exit to the system to correct the problem. It is highly recommended that you run the Network Host Test for each user who wants to use the Analysis Manager.
AM Hosts Test
Changes to portions of the host.cfg file dealing with the AM hosts can be tested by selecting the Test Configuration/AM Hosts option.
At the bottom left of the form are two buttons:
1. Basic AM Host Test.
2. Network AM Host Test.
Basic AM Host Test
The Basic AM Host Test will validate the AM host configuration information in the host.cfg file. There are no requirements for running the Basic AM Host Test. A message box provides status information as each of the following Basic Host Tests are run:
1. Validates that at least one AM host is present in the host.cfg file.
2. Ensures that each AM host specified is a valid name.
3. Ensures that each AM host has a physical host assigned.
4. Checks that an application has been assigned to each AM host.
5. Checks that the configuration file exists for each AM host. This is applicable to MSC Nastran and ABAQUS only.
6. Checks that the actual executable is accessible and has the proper privileges.
7. Makes sure two AM hosts with the same names are not specified.
If a problem is detected, close the form and return to the Modify Config File form to correct the configuration.
Network AM Host Test
The Network AM Host Test will validate all of the AM host configuration information in the host.cfg file, and validate communication paths between hosts.
Requirements to run the Network Host Test include:
1. Must be root.
2. Must be on Master node.
3. Must provide a username.
A message box provides status information as each of the following network AM host tests is run:
1. Checks the location and privileges of the configuration file for each application.
2. Checks the runtime location (executable) for each application and privileges.
If a problem is detected, close the form and return to the Modify Config Files form to correct the configuration or exit to the system to correct the problem. It is highly recommended that you run the Network Host Test for each user who wants to use the Analysis Manager.
Disk (Filesystem) Test
Changes to the disk.cfg file can be tested by selecting the Test Configuration/Disk Configuration option. The test disk configuration form will appear.
At the bottom left of the form are two buttons:
1. Basic Disk Test.
2. Network Disk Test.
Basic Disk Test
The Basic Disk Test will validate the disk configuration information. There are no requirements for running the Basic Disk Test. A message box provides status information as each of the following basic disk tests are run:
1. Makes sure at least one filesystem is defined for each host.
2. Makes sure there is a value for each filesystem (i.e., no empty entries), and that the entries are absolute paths which start with a “/”.
3. Checks the length of the filesystem definitions. If a definition is longer than 25 characters, a warning is provided, since this may cause problems.
If a problem is detected, close the form and return to the Modify Config Files form to correct the disk configuration.
Network Disk Test
The Network Disk Test will validate all of the disk configuration information. A message box provides status information as each test is run.
Requirements to run the Network Disk Test include:
1. Must be root.
2. Must be on Master node.
3. Must provide a username.
A message box provides status information as each of the following network disk tests are run:
1. Checks that each filesystem exists for each host, and that it can be written to by the provided user.
If a problem is detected, close the form and return to the Modify Config Files form to correct the disk configuration or exit to the system to correct the problem. It is highly recommended that you run the Network Disk Test for each user who wants to use the Analysis Manager.
Queue Test
Changes to the lsf.cfg or nqs.cfg queue configuration file can be tested by selecting the Test Configuration/Queue Configuration option. The test queue configuration form will appear.
At the bottom left of the form are two buttons:
1. Basic Queue Test.
2. Advanced Queue Test.
Basic Queue Test
The Basic Queue Test will validate the queue configuration information in the lsf.cfg or nqs.cfg file. A queueing system (i.e., LSF) must be defined to run the Basic Queue Test. A message box provides status information as each of the following basic queue tests are run:
1. Makes sure at least one queue has been specified.
2. Makes sure that at least one application has been specified per queue.
3. Makes sure a unique MSC queue name has been specified.
4. Makes sure that the LSF or NQS queue is unique and exists.
5. Makes sure that for each queue at least one physical host is specified as a member of the queue.
6. Makes sure the LSF or NQS executables path has been specified and that it is an absolute path.
If a problem is detected, close the form and return to the Modify Config Files form to correct the configuration.
Advanced Queue Test
The Advanced Queue Test will validate all of the queue configuration information (i.e., the lsf.cfg or nqs.cfg file).
Requirements to run the Advanced Queue Test include:
1. Must be using LSF or NQS for queue management.
2. Must be on Master node.
3. Must provide a username.
4. Must be root.
A message box provides status information as each of the following network queue tests is run:
1. Makes sure the LSF executables (bsub, bkill, bjobs) or the NQS executables (qsub, qdel, qstart) are in the specified location, and are executable by the provided user.
If a problem is detected, close the form and return to the Modify Config Files form to correct the queue configuration or exit to the system to correct the problem. It is highly recommended that you run the Advanced Queue Test for each user who wants to use the Analysis Manager.
Queue Manager
This simply allows any changes in the configuration files that may have been implemented during a p3am_admin (AdmMgr) session to be applied. If the configuration files are owned by root then you must have root access to change them. Once they have been changed, in order for the QueMgr to recognize them, it must be reconfigured. Simply press the Apply button with the Restart QueMgr toggle selected. This forces the Queue Manager to reread the configuration files. Once the Queue Manager has been reconfigured, new jobs submitted will use the updated configuration.
If a reconfiguration is issued while jobs are currently running, then those jobs are allowed to finish before the reconfiguration occurs. During this period, the Queue Manager is said to be in drain mode, not accepting any new jobs until all old jobs are complete and the Queue Manager has reconfigured itself. The Queue Manager can also be halted immediately (which kills any job running) or can be halted after it is drained.
When the Queue Manager is halted, the three toggles on the right side change to one toggle to allow the Queue Manager to be started. All configurations that are being used are shown on the left. When the Queue Manager is halted, you may change some of the configurations on the left side, such as Port, Log File, and Log File User before starting the daemon again. For more information on the Queue Manager see Starting the Queue/Remote Managers.
On Windows you can Start and Stop the Queue Manager from the Queue pulldown menu when you are in the Administration tree tab.
Or you can right mouse click the Administration tree tab and the choices to Read or Save configuration file or Start and Stop the Queue Manager are also available.