Organization Environment Variables
P3_HOME & P3_PLATFORM
These two environment variables are set automatically for the user when Patran and/or the Analysis Manager is invoked, if the proper installation has been performed. The P3_HOME variable references the actual installation directory and P3_PLATFORM references the machine architecture. This architecture can be one of the following:
HP700 - Hewlett Packard HP-UX
RS6K - IBM RS/6000 AIX
SGI5 - Silicon Graphics IRIX
SUNS - Sun SPARC Solaris
LX86 - Linux (MSC or Red Hat)
WINNT - Windows 2000 or XP
If necessary, these variables can be set in the following manner with the C shell:
setenv P3_HOME /msc/patran200x
setenv P3_PLATFORM HP700
or for Bourne shell or Korn shell users:
P3_HOME=/msc/patran200x
P3_PLATFORM=HP700
export P3_HOME
export P3_PLATFORM
or on Windows:
set P3_HOME=c:/msc/patran200x
set P3_PLATFORM=WINNT
In most instances, users will never have to concern themselves with these environment variables, but they are included here for completeness. In a typical Patran installation, a file called .wrapper exists in the $P3_HOME/bin directory which automatically determines these environment variables. The invoking scripts, p3analysis_mgr and p3am_admin, exist as pointers to .wrapper in this bin directory; when executed, .wrapper determines the variable values and then executes the actual scripts. For this to work conveniently, the user should have $P3_HOME/bin in his/her path; otherwise, the full path name must be used when invoking the programs.
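For example, assuming the installation resides in /msc/patran200x (the same example location used above), the bin directory can be appended to the search path with the C shell:
set path = ($path $P3_HOME/bin)
or for Bourne shell or Korn shell users:
PATH=$PATH:$P3_HOME/bin
export PATH
or on Windows:
set PATH=%PATH%;c:\msc\patran200x\bin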
P3_ORG
It may be desirable to have multiple Queue Managers running (groups of systems for the Analysis Manager to use) each with a separate organizational directory for Analysis Manager configuration files. An optional environment variable, P3_ORG, may be set for each user to specify a separate named organizational directory. If defined, the Analysis Manager will use it for accessing its required configuration files and thus connect to the Queue Manager specified by P3_ORG.
If not defined, the default organizational directory, named default, is used (i.e., $P3_HOME/p3manager_files/default).
For example, suppose a site has many computers with MSC Nastran installed, yet access to each is to be limited to certain engineering groups. Each group should only be able to submit to its own computers and should not see the other groups’ machines as submission choices. To solve this problem, set up two or more different Analysis Manager organizational groups and start a separate Queue Manager for each. The configuration files (host.cfg and disk.cfg) for the first set of machines are located in the $P3_HOME/p3manager_files/default/conf directory. To handle another group’s machines, create another directory structure under $P3_HOME/p3manager_files called groupb. A directory $P3_HOME/p3manager_files/groupb should be created with subdirectories identical to the $P3_HOME/p3manager_files/default tree. The easiest way to create the new organization is to copy the default organization tree:
cp -r $P3_HOME/p3manager_files/default $P3_HOME/p3manager_files/groupb 
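Assuming only the configuration files named above, the resulting directory tree would look roughly like this (any other files in the default tree are copied as well):
$P3_HOME/p3manager_files/
    default/
        conf/
            host.cfg
            disk.cfg
    groupb/
        conf/
            host.cfg
            disk.cfg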
Now, the configuration files in the $P3_HOME/p3manager_files/groupb/conf directory can be edited to set up the new group of machines. When more than one organization is defined, there will be one Queue Manager (QueMgr) running for each organizational group. When starting the Queue Manager, the -org argument must be used for organizations other than default.
In this example, the Queue Manager for the groupb organization would be started as follows:
QueMgr $P3_HOME -org groupb
where $P3_HOME is the installation directory, for example /msc/patran200x.
Once the organizations are created, the configuration files edited, and the Queue Managers started, users can use the P3_ORG environment variable to specify which Queue Manager to communicate with. In this example, users who should submit to the groupb machines set their P3_ORG environment variable to groupb, and users who should submit to the default group’s machines leave the variable unset, for example:
setenv P3_ORG groupb
or for Bourne shell or Korn shell users:
P3_ORG=groupb
export P3_ORG
or on Windows:
set P3_ORG=groupb
It is also possible to change organizational groups dynamically without setting this environment variable. If different organizational groups need to be created, but access should be given to all users, a configuration file called org.cfg can be created and placed in the $P3_HOME/p3manager_files directory. This allows users to change organizational group directories from the Analysis Manager user interface. The form of this configuration file is described in Examples of Configuration Files. If this configuration file is used to allow users to change groups on the fly from the user interface, then a QueMgr must be started for each organizational group, each with a unique port ID assigned using the -port parameter, e.g.,
QueMgr $P3_HOME -port 1501 -org groupb
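For instance, assuming the default organization’s Queue Manager uses port 1500 (an arbitrary example value), the two Queue Managers might be started as:
QueMgr $P3_HOME -port 1500
QueMgr $P3_HOME -port 1501 -org groupb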
P3_PORT and P3_MASTER
The Analysis Manager is very flexible in the manner in which it can be installed and accessed. This flexibility accounts for the variation in networks from company to company, or even from department to department. These two environment variables exist to allow flexibility in more restrictive networking environments.
Say, for example, that there are 50 machines, all with their own Patran installations, yet none of them are NFS-mounted to each other or to the master node where the QueMgr daemon is running. A difficult way to solve this problem is to make sure the same configuration files exist on all 50 machines in the same directory tree structure, including the master node. If a change has to be made to the configuration files, then all 50 machines and the master node have to be updated. Since the QueMgr is the only program that needs to read the configuration files, an easier solution is for only the master node to keep an up-to-date set of configuration files and have the users of the other 50 machines set the P3_PORT and P3_MASTER environment variables to reference the port and machine of the master node. For example,
setenv P3_PORT 1501
setenv P3_MASTER bangkok
or for Bourne shell or Korn shell users:
P3_PORT=1501
P3_MASTER=bangkok
export P3_PORT
export P3_MASTER
or on Windows:
set P3_PORT=1501
set P3_MASTER=bangkok
where 1501 is the port number that QueMgr is using and bangkok is the master node’s hostname.
The Analysis Manager works in the following manner:
1. First a check is made to see if P3_PORT and P3_MASTER have been set. If they have, this information is used to communicate with the master host, and only one organizational group will appear: the one specified through P3_ORG, or the default organizational group if P3_ORG is not set.
2. If P3_PORT and P3_MASTER have not been set, then the org.cfg configuration file is read which contains the master nodes and port numbers for all organizational groups that have been created. If P3_ORG is also specified, then that organizational group will appear as the default; however, all other organizational groups will still be accessible.
3. If neither P3_PORT nor P3_MASTER is set and the org.cfg file does not exist, then the defaults are used. The default host is the machine on which the Analysis Manager was started. The default port is 1900, and the default configuration files (organization) are in the default directory. Multiple organizational groups will not be accessible; however, the P3_ORG variable can be set to change organizational groups each time the Analysis Manager is invoked. With this last method, the user or system administrator who starts the QueMgr never needs to worry about assigning unique port numbers, but it is also the most restrictive installation and method of access.
MSC_RMTMGR_ARGS and MSC_QUEMGR_ARGS
The RmtMgr and QueMgr will also read the environment variables MSC_RMTMGR_ARGS and MSC_QUEMGR_ARGS, respectively, for all of their arguments. Each variable is one big string, as in this C shell setting:
setenv MSC_RMTMGR_ARGS "-port 1850 -path /msc/patran200x"
setenv MSC_QUEMGR_ARGS "-port 1950 -path /msc/patran200x"
or for Bourne shell or Korn shell users:
MSC_RMTMGR_ARGS="-port 1850 -path /msc/patran200x"
MSC_QUEMGR_ARGS="-port 1950 -path /msc/patran200x"
export MSC_RMTMGR_ARGS
export MSC_QUEMGR_ARGS
or on Windows:
set MSC_RMTMGR_ARGS="-port 1850 -path /msc/patran200x"
set MSC_QUEMGR_ARGS="-port 1950 -path /msc/patran200x"
When RmtMgr and/or QueMgr start, they check these variables and take their arguments from these strings. If both are set, real command line arguments override the environment variable settings.
This method is needed on Windows because there is currently no way to save the startup arguments for a service, so on reboot the RmtMgr would not know its startup arguments; it has to read a file or an environment string to get them. The only limitation right now is that if two RmtMgrs are running on the same machine, there is no way to specify different arguments for each.
 
Note:  
On Windows these variables should be set under the System Control Panel so that, on reboot, the RmtMgr and QueMgr start up with these arguments. You can check the Event Viewer under the Administrative Tools Control Panel to verify proper startup.
AM_CMD_STATUS and AM_JOB_STATUS
In addition, at the end of a job these two environment variables get set.
AM_CMD_STATUS gets set on the executing host after the job has completed there, with the exit status of the command. This can be used by a post script on the execute host to do different things based on the exit status of the job. One must know the exit status values of the application being run to know what is good, what is bad, and whether there are any other possible codes and meanings.
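For example, a minimal Bourne shell post script on the executing host might look like the following sketch (the meaning of any nonzero code depends entirely on the application being run):
#!/bin/sh
# Hypothetical post script run on the executing host after the job completes.
# AM_CMD_STATUS holds the exit status of the command that was executed.
if [ "$AM_CMD_STATUS" -eq 0 ]; then
    echo "Application exited cleanly"
else
    echo "Application exited with status $AM_CMD_STATUS" >&2
fi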
AM_JOB_STATUS gets set on the submit host at the end of the job, after all the files have been transferred, and can be used by a post program on the submit host for the same reasons. The values for this environment variable are 0, 1 and 2, where 0 means successful, 1 means abort and 2 means failure of any kind.
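As a sketch, a post program on the submit host could branch on these documented values; the echo commands are only placeholders for site-specific actions:
#!/bin/sh
# Hypothetical post program run on the submit host at the end of the job.
case "$AM_JOB_STATUS" in
    0) echo "Job completed successfully" ;;
    1) echo "Job was aborted" ;;
    2) echo "Job failed" ;;
    *) echo "Unexpected AM_JOB_STATUS value: $AM_JOB_STATUS" ;;
esac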