Domain Decomposition
DDM Interface
Each widget of this form is discussed in the table below.
 
DDM Parameter
Description
Decomposition Method
Set this to Automatic (available only in Marc Version 2005 and higher) if you wish Marc to automatically create the domains at analysis run time. Set it to Semi-Automatic if you wish to have Patran automatically break the model into domains, which can be visualized before submittal. Set it to Manual to have full control over the domains; this requires that groups be created before they can be selected in this form and associated with a domain.
Number of Domains
This determines how many domains are created. When you change this number and press the Enter or Return key, the spreadsheet updates to this number of rows. The default is 2. This should correspond to the number of CPUs on which the job is to run. For the Automatic method, this is the only required input and the spreadsheet is not visible.
Metis Method
Domain Island Removal
Coarse Graph
These parameters are used when the Decomposition Method is set to Automatic. The decomposer uses the Metis algorithm, which can be set to Best (the default), Node Based, or Element Based. The two toggles, Domain Island Removal and Coarse Graph, can be set ON or OFF and also affect the decomposer. For more detail, see the MSC.Marc documentation. When any of these widgets is set to something other than its default, the PROCESSOR parameter is written to the input deck.
Single POST File
In Marc 2005 and beyond, a single input file can be used for Domain Decomposition runs. A single results output (POST - t16/t19) file can also be requested by setting this toggle. This puts a one (1) in the 5th field of the 2nd data block of the POST option (see the illustrative sketch following this table).
Create
To create more or fewer domains, change the Number of Domains value accordingly and press this button (or the Return or Enter key), which updates the Domain Information spreadsheet.
Visualize
Pressing this button unposts all currently posted groups and posts, color coded, the groups from the selected rows of the spreadsheet. The plot is wireframe; it can be turned into a shaded or hidden-line plot with the standard tools. Only domains from the selected spreadsheet rows are plotted; if a row is not selected, that domain is not plotted.
Validate
Pressing this button validates all domains, checking that there are no duplicate or overlapping elements. A message is written to the Patran command line window.
Reset Graphics
This returns the graphics screen to the way it was before you pressed Visualize. If you exit the tool, the graphics are also reset as if this button had been pressed.
Model / Current Group
This switch is used for Automatic and Semi-Automatic DDM only. For Automatic, either the entire Model or the Current Group is translated into the input deck. For Semi-Automatic, it dictates what part of the model the decomposition is performed on (not what is translated into the input deck). It is not applicable to Manual decomposition.
Domain Information
This spreadsheet is created when the Create button is pushed or the Number of Domains is changed. The number of rows depends on the Number of Domains specified. Any cell in any row may be selected, and multiple rows may be selected, although not all cell contents can be changed; this depends on the Decomposition Method setting. For Automatic, this spreadsheet is not visible.
Domain
This column of the spreadsheet is hard coded and simply says Domain 1, Domain 2, etc. for each domain. It cannot be changed but is selectable.
Group
This column lists the group that makes up the connectivity for the domain. If the Decomposition Method is set to Manual, these cells are initially empty. When you select a cell, the Select a Group list box becomes visible and you can select the group for that domain.
Select a Group
For Manual decomposition, you must select a group from this listbox when one of the cells in the Domain Information spreadsheet is selected. If you do not see the group you desire here, it most likely has not been created. Create groups in the Groups | Create pulldown menu from the main Patran menu bar.
Use LSF
If this toggle is ON, the Host File button is not accessible because the LSF (Load Sharing Facility) is used to submit the job. The optimum machines are found based on the LSF configuration. See Submittal to LSF Queues for more detail.
Host File
This brings up a file browser to select the hostfile, which contains information about the machines and number of CPUs as well as scratch disk and Marc executable locations (an illustrative hostfile sketch appears under DDM Submittal below). This is required when submitting a parallel job to a cluster of homogeneous machines. It is not required when submitting to a single machine with multiple processors.
Do Not Copy Files
Copy Files
When submitting to a cluster of machines, this dictates whether files are copied or not. By default, files are not copied; they are assumed to reside in a shared directory. See DDM Submittal below.
OK
Closes the form and saves all settings or changes.
Defaults
This will return the form to its factory default settings.
Cancel
Closes the form but does not save any changes you made.
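As a purely illustrative sketch of the Single POST File setting described in the table above (only the 1 in the 5th field is taken from that description; the remaining field values are placeholders, not recommendations), the POST option in the input deck might look like:
POST
0, 0, 0, 0, 1
The 1 in the 5th field of the 2nd data block requests a single POST file. Consult the MSC.Marc documentation for the actual meanings of the other fields.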
Some notes on the operation of the graphical interface:
If a Semi-Automatic operation is redone, it resets everything and overwrites the groups. To delete groups, you must do so through the Group application. Take care: it is easy to perform the decomposition multiple times, and each time new groups are created that are not automatically deleted.
You can mix and match the different methods of creating domains. For example, you can press the Create button with the Semi-Automatic method, then change the method to Manual and change the group. With the Manual setting you can also change the Number of Domains and have the spreadsheet update, such as when adding more domains, without losing any already defined information.
 
Note:  
A key criterion for running a successful DDM job is to make certain that the node and element numbering for the entire model is consecutive. For example, a model with nodes 1-100 and elements 1-250 will work fine, but a model with nodes 1-50, 52-151 and elements 1-200, 202-251 will not.
DDM Submittal
This section discusses the mechanics of a DDM analysis. In general, by default a DDM job is submitted as follows:
Single Machine:
run_marc -j jobname -nproc #
Network:
run_marc -j jobname -nproc # -host hostfile -ci NO -cr NO
Where # is the number of processors passed to -nproc. Only the network submittal needs the hostfile information. For single-file DDM submittals (Automatic DDM), -nps is used instead of -nproc.
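For example, a hypothetical four-processor job named crush (both the job name and the processor count are illustrative) would be submitted as:
Single Machine:
run_marc -j crush -nproc 4
Network:
run_marc -j crush -nproc 4 -host hostfile -ci NO -cr NO
Single input file (Automatic DDM):
run_marc -j crush -nps 4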
In either case, a DDM job may be submitted from the Marc Preference locally or to a remote machine. For remote submittal, the MarcSubmit program copies all necessary files to the machine the job is submitted to and then the Marc DDM job is submitted. After completion, the MarcSubmit program copies all files back to the machine the job was submitted from for use in post-processing.
There are four mechanisms for submitting DDM jobs depending on the Marc Version and whether a single machine with multiple processors has been selected, or a cluster of machines.
1. Single Machine - Automatic
A single input file is created and submitted to a machine using Marc 2005 (or greater) which automatically performs the decomposition and takes advantage of the multiple processor machine.
2. Single Machine - Manual or Semi-Automatic
An input file called #jobname.dat (where # is the domain number) is created for each domain, plus the master input file (jobname.dat), and submitted to a machine using any Marc version. Each #jobname.dat file is submitted to one of the processors of the multiple-processor machine.
3. Cluster of Machines - Automatic
A single input file is created and submitted to a machine using Marc 2005 (or greater) which automatically performs the decomposition and takes advantage of the cluster of machines specified in the hostfile.
4. Cluster of Machines - Manual or Semi-Automatic
An input file called #jobname.dat (where # is the domain number) is created for each domain, plus the master input file (jobname.dat), and submitted to a machine using any Marc version. Each #jobname.dat file is submitted to one of the machines in the cluster specified in the hostfile. By default, the input files and the output results files are not copied to each machine locally but are assumed to reside in a shared or NFS-mounted directory. This is done with the -ci NO and -cr NO options, respectively. If the files are to be copied, these options are not used, which necessitates that scratch directories be specified in the hostfile. The files are then copied to and from these scratch directories on each of the machines in the hostfile.
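For illustration, a hostfile for a two-machine cluster might look like the following (the host names, CPU counts, and paths are hypothetical, and the exact column layout should be checked against the Marc Parallel Version Installation and User Notes):
host1 2
host2 2
Or, when files are to be copied, with scratch directory and Marc installation locations added:
host1 2 /scratch/user1 /marc/marc2005
host2 2 /scratch/user1 /marc/marc2005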
As can be deciphered from the above, in Marc 2005 (or beyond) all you need is a single input file for submitting a DDM analysis job. For previous versions of MSC.Marc, several input files are created; the total number of files is equal to the number of subdivisions of the model plus one. A baseline file that has no model or history information is created, called jobname.dat. The rest of the files are named 1jobname.dat, 2jobname.dat, and so on up to the number of domains created. Each of these files contains coordinate and connectivity data for its domain only, and any options that reference element or node numbers are contained exclusively in that domain's file; apart from this, the information contained in the input files is identical. If you are using Marc 2005 (or beyond), submitting an input file for analysis is enhanced and simplified: only a single file is submitted for DDM; however, the old method can still be used if multiple files are supplied. The file naming is illustrated below.
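For example, a pre-2005 DDM job named crush (an illustrative job name) decomposed into four domains produces five input files:
crush.dat (baseline file; no model or history data)
1crush.dat
2crush.dat
3crush.dat
4crush.dat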
 
Note:  
There are multiple results (POST) files from a DDM run just as there are input files. There is one for each domain by the same names with the .t16 or .t19 file extension. In order to view these results, it is only necessary to attach to the master jobname.t16/t19 file.
DDM Configuration
Below are a few notes on proper configuration of DDM. Please see the Marc Parallel Version for Windows NT / UNIX Installation and User Notes for complete instructions; Marc DDM must be configured correctly for DDM to work from Patran. If you have trouble, check the following:
On Windows machines:
1. Make sure PaTENT MPI (Marc 2003 or earlier) or the Cluster Manager service (Marc 2005 and greater) is installed and running as a service.
2. Make sure you have a valid PaTENT MPI license if necessary (Marc 2003 or earlier). The license file is generally found under <install_dir>\marc200x\patentmpi\admin\license.dat. Contact MSC if this license has expired.
3. When using a cluster of Windows machines it is recommended that all input files be in a shared directory when you submit the job (in other words, submit the job from a shared directory that all machines can see).
4. The Marc installation on the master host should also be in a shared directory, unless all machines have their own installation of Marc, in which case each must be properly referenced in the hostfile.
5. If you are submitting from a Windows machine to a UNIX machine, make sure that you have a valid .rhosts file in your home directory. Place the names of the Windows machine and the remote machine you are submitting to in the .rhosts file (a minimal sketch follows this list). The names must appear exactly as they do when you run a top command on the UNIX machine from a telnet session opened on your Windows machine.
6. If you cannot do an rsh or an rlogin from your Windows machine to the UNIX machine, there is something wrong with your remote access as set up by the .rhosts file. Check with a system administrator.
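For illustration, a minimal .rhosts file for a hypothetical Windows machine winpc1 and UNIX machine unix1 might contain:
winpc1
unix1
Each host name may optionally be followed by a user name (for example, winpc1 jsmith) if the account names differ between machines.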
On UNIX:
1. You must be able to rlogin to all referenced machines in the hostfile without supplying a password. If you cannot, check that your .rhosts file contains the names of all the machines. Check with a system administrator if you need help.
2. Only homogeneous clusters of machines are truly supported; they must all run the same MPI service or daemons. For example, a cluster of 64-bit HP machines must all use HP MPI; a cluster of 32-bit HP machines can use either HP MPI or MPICH, but not a mixture. Heterogeneous clusters should work if they all use MPICH, but this is not officially supported. Mixed UNIX and Windows clusters are not supported.