Intel® Advisor lets you analyze parallel tasks running on a cluster, so you can examine your MPI application for parallelism opportunities. Use mpirun or mpiexec with the advixe-cl command to spawn MPI processes across the cluster.
MPI analysis can be performed only through the command line interface, but the result can be viewed in the standalone GUI as well as on the command line.
Use Intel Advisor to collect analysis data on the cluster. Once the analysis data is gathered, copy the project directory from the cluster to the system where the application was developed. The result data is then finalized in a new project directory using the import-dir action. Once finalization is complete, you can view the data.
Tips
Only homogeneous clusters are supported.
Application source files do not need to be present on the cluster hosts, but they must be present on the system used to view the data. Specify the source directories when using the import-dir action.
Analysis data can be saved to a shared partition or to local directories on the cluster. The processes save their data collections in unique subdirectories, named rank.#, under the project directory. Only one process's data can be imported and viewed at a time. When using a shared partition, use the mpi-rank option to specify which process's data should be imported. To import and view more than one process's data, specify a new project directory for each import-dir command.
To collect and view MPI analysis data:
1. Run MPI analysis on the cluster, saving the data to shared or local directories. The processes save their data collections in unique subdirectories, named rank.#, under the project directory.
2. If necessary, copy the project directory from the cluster to the development system where you want to view the data. The application source files should be present on this development system.
3. Import the project directory from the cluster into a new project directory. This finalizes the result. Use the mpi-rank option to specify which process's data should be imported. (See the sketch after these steps.)
4. View the report with the standalone GUI or on the advixe-cl command line.
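For example, a minimal end-to-end sketch of these steps might look like the following, where the directory names, rank number, and application name are placeholders:

$ mpirun -n 4 advixe-cl -project-dir ./advi -collect survey -search-dir src:r=./src -- ./myApplication
$ advixe-cl -project-dir ./advi_rank1 -import-dir ./advi -mpi-rank 1 -search-dir src:r=./src
$ advixe-cl -report survey -project-dir ./advi_rank1

The last command assumes a survey report is wanted on the command line; the finalized result in ./advi_rank1 can also be opened in the standalone GUI instead.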
You can use Intel Advisor to analyze the Intel® MPI Library and other MPI implementations, but be aware of the following details:
You may need to adjust the command examples in this section to work with non-Intel MPI implementations. For example, you may need to adjust the commands that specify process ranks or that limit the number of processes in the job.
The MPI implementation must be able to operate when the Intel Advisor process (advixe-cl) sits between the launcher process (mpirun/mpiexec) and the application process. This means that communication information must be passed using environment variables, as most MPI implementations do. Intel Advisor does not work with an MPI implementation that tries to pass communication information from its immediate parent process.
Use mpirun or mpiexec with the advixe-cl command to spawn processes across the cluster and collect data about the application.
Each process has a rank associated with it. This rank is used to identify the result data.
General Syntax for MPI Analysis
To collect performance or dependencies data for an MPI program with Intel Advisor on Windows* OS or Linux* OS, the general form of the mpirun command is:
$ mpirun -n <N> advixe-cl -project-dir <project_PATH> -collect <analysis_type> -search-dir src:r=<sources_PATH> -- myApplication [myApplication_options]

where:
<N> is the number of MPI processes to launch
<project_PATH> specifies the PATH/name of the project directory
<analysis_type> is survey, suitability, or dependencies
<sources_PATH> is the path to the directory where annotated sources are stored
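For example, the following sketch (with placeholder paths and a placeholder application name) collects dependencies data for a four-process job:

$ mpirun -n 4 advixe-cl -project-dir ./advi -collect dependencies -search-dir src:r=./src -- ./myApplication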
The examples in this section use the mpirun command, as opposed to mpiexec or mpiexec.hydra, although real-world commands might use a mpiexec* command. mpirun is a higher-level command that dispatches to mpiexec or mpiexec.hydra, depending on the current default and the specified options. All the listed examples work for the mpiexec* commands as well as for the mpirun command.
Output
As a result of this command, Intel Advisor creates a number of result directories under the project directory, named rank.0, rank.1, ..., rank.n, where the numeric suffix n corresponds to the MPI process rank.
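For example, a four-process run with -project-dir ./advi might produce a layout like this hypothetical one:

./advi/rank.0
./advi/rank.1
./advi/rank.2
./advi/rank.3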
Syntax for Single-node MPI Analysis
To collect data on a single node, the command takes this form:
$ mpirun -n 3 myApplication [myApplication_options] : -n 1 advixe-cl -project-dir=./advi -collect survey -search-dir src:r=./src -- myApplication [myApplication_options]

This example runs a four-process MPI job and collects survey data only for the last process (rank 3), which runs under Intel Advisor.
There are two advixe-cl options specifically for MPI analysis.
import-dir
Specify the project directory where the analysis data is stored. When data collection is complete, import this data into the directory specified by project-dir so it can be finalized.
mpi-rank
Specify the rank of the process to import. Use this only when there is more than one experiment in a project directory, such as when results are stored on a shared partition.
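For example, to import the data of two different ranks from a shared project directory, a sketch (with placeholder names) would use a separate new project directory for each import-dir command:

$ advixe-cl -project-dir ./advi_rank0 -import-dir ./advi -mpi-rank 0 -search-dir src:r=./src
$ advixe-cl -project-dir ./advi_rank1 -import-dir ./advi -mpi-rank 1 -search-dir src:r=./src

Each new project directory then holds the finalized result for a single rank.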
For more details on analyzing MPI applications, see the Intel MPI Library and online MPI documentation on the Intel® Developer Zone at http://software.intel.com/en-US/articles/intel-mpi-library-documentation/
Other Intel® Developer Zone online resources that discuss usage of the Intel® Parallel Studio XE Cluster Edition with the Intel MPI Library:
Hybrid applications: Intel MPI Library and OpenMP* on the Intel Developer Zone at http://software.intel.com/en-US/articles/hybrid-applications-intelmpi-openmp/