Issue: Ineffective peeled/remainder loop(s) present

Some or all source loop iterations are executing in peeled/remainder loops instead of the loop body. Improve performance by moving source loop iterations from peeled/remainder loops to the loop body.

Recommendation: Specify the expected loop trip count Confidence: Low

The compiler cannot statically detect the trip count. To fix: Identify the expected number of iterations using the !DIR$ LOOP COUNT directive.
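For example, if the loop is expected to run about 1000 iterations (a hypothetical count, as are the array names), place the directive immediately before the loop:

```fortran
!DIR$ LOOP COUNT (1000)
do i = 1, n
   a(i) = b(i) + c(i)
end do
```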

Recommendation: Disable unrolling Confidence: Medium

The trip count after loop unrolling is too small compared to the vector length. To fix: Prevent loop unrolling with the !DIR$ NOUNROLL directive, or decrease the unroll factor with the !DIR$ UNROLL directive.
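A sketch of both options on hypothetical loops (the unroll factor 2 is an assumption):

```fortran
!DIR$ NOUNROLL        ! prevent unrolling of this loop entirely
do i = 1, n
   a(i) = b(i) * s
end do

!DIR$ UNROLL (2)      ! or request a smaller unroll factor
do i = 1, n
   c(i) = c(i) + a(i)
end do
```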

Recommendation: Use a smaller vector length Confidence: Medium

The compiler chose a vector length, but the trip count might be smaller than that vector length. To fix: Specify a smaller vector length using the !DIR$ SIMD VECTORLENGTH directive.
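For example, to request 4-element vectors instead of a longer length the compiler chose (the value 4 and the names are hypothetical):

```fortran
!DIR$ SIMD VECTORLENGTH(4)
do i = 1, n
   a(i) = a(i) + b(i)
end do
```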

Recommendation: Align data Confidence: Medium

One of the memory accesses in the source loop does not start at an optimally aligned address boundary. To fix: Align the data, then tell the compiler the data is aligned. To align the data, use __declspec(align()). To tell the compiler the data is aligned, use __assume_aligned() before the source loop.
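For Fortran sources, the analogous Intel compiler directives are !DIR$ ATTRIBUTES ALIGN and !DIR$ ASSUME_ALIGNED. A hedged sketch; the 64-byte boundary and the array name are assumptions:

```fortran
real :: a(1000)
!DIR$ ATTRIBUTES ALIGN : 64 :: a   ! align the array on a 64-byte boundary
! ...
!DIR$ ASSUME_ALIGNED a : 64        ! assert the alignment before the loop
do i = 1, 1000
   a(i) = a(i) * 2.0
end do
```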

Recommendation: Add data padding Confidence: Medium

The trip count is not a multiple of vector length. To fix: Do one of the following:
  • Increase the size of objects and add iterations so the trip count is a multiple of vector length.
  • Increase the size of static and automatic objects, and use a compiler option to add data padding.
Windows* OS: /Qopt-assume-safe-padding
Linux* OS: -qopt-assume-safe-padding
Note: These compiler options apply only to Intel® Many Integrated Core Architecture (Intel® MIC Architecture). Option -qopt-assume-safe-padding replaces the deprecated -opt-assume-safe-padding option.

When you use one of these compiler options, the compiler does not add any padding for static and automatic objects. Instead, it assumes that code can access up to 64 bytes beyond the end of the object, wherever the object appears in your application. To satisfy this assumption, you must increase the size of static and automatic objects in your application.

Optional: If the trip count is not constant, specify it using the !DIR$ LOOP COUNT directive.
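The first padding option can be sketched as follows, assuming a hypothetical trip count of 100 and a vector length of 8:

```fortran
! Pad the arrays from 100 to 104 elements (the next multiple of the
! vector length 8) and run the extra iterations.
real :: a(104), b(104)
integer :: i
do i = 1, 104        ! padded trip count, a multiple of the vector length
   a(i) = b(i) + 1.0
end do
```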

Recommendation: Collect trip counts data Confidence: Need more data

The Survey Report lacks trip counts data that might generate more precise recommendations. To fix: Run a Trip Counts analysis.

Recommendation: Force vectorized remainder Confidence: Medium

The compiler did not vectorize the remainder loop, even though doing so could improve performance. To fix: Force vectorization using the !DIR$ SIMD VECREMAINDER or !DIR$ VECTOR VECREMAINDER directive.
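Placed immediately before the loop, either form applies; a sketch with hypothetical names:

```fortran
!DIR$ SIMD VECREMAINDER
do i = 1, n
   a(i) = a(i) + b(i)
end do
```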

Issue: Data type conversions present

There are multiple data types within the loop, forcing the compiler to insert conversions. Use hardware vectorization support more effectively by avoiding data type conversions.

Recommendation: Use the smallest data type Confidence: Low

The source loop contains data types of different widths. To fix: Use the smallest data type that gives the needed precision, so the entire vector register width is used.
Example: If only 16 bits of precision are needed, using a short rather than an int can make the difference between eight-way and four-way SIMD parallelism.
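In Fortran terms the same idea applies to kind selection; a sketch with hypothetical arrays:

```fortran
! 16-bit integers pack twice as many elements per vector register
! as the default 32-bit INTEGER kind.
integer(kind=2) :: a(1000), b(1000)
integer :: i
do i = 1, 1000
   a(i) = a(i) + b(i)
end do
```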

Issue: User function call(s) present

User-defined functions in the loop body are preventing the compiler from vectorizing the loop.

Recommendation: Enable inline expansion Confidence: Low

Inlining of user-defined functions is disabled by a compiler option. To fix: When using the Ob or inline-level compiler option to control inline expansion, replace the 0 argument with 1 to enable inlining when an inline keyword or attribute is specified, or with 2 to enable inlining of any function at the compiler's discretion.
Windows* OS: /Ob1 or /Ob2
Linux* OS: -inline-level=1 or -inline-level=2

Recommendation: Vectorize user function(s) inside loop Confidence: Low

Some user-defined function(s) are not vectorized or inlined by the compiler. To fix: Do one of the following:
  • Enforce vectorization of the source loop by means of SIMD instructions and/or create a SIMD version of the function(s) using a directive:
    Source loop: !DIR$ SIMD or !$OMP SIMD
    Inner function definition or declaration: !$OMP DECLARE SIMD
  • If using the Ob or inline-level compiler option to control inline expansion with the 1 argument, use an inline keyword to enable inlining, or replace the 1 argument with 2 to enable inlining of any function at the compiler's discretion.
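A minimal sketch of the directive pairing in the first bullet, using a hypothetical function scale:

```fortran
function scale(x) result(y)
!$OMP DECLARE SIMD(scale)
   real, intent(in) :: x
   real :: y
   y = 2.0 * x
end function scale
! ...
!$OMP SIMD
do i = 1, n
   a(i) = scale(b(i))
end do
```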

Recommendation: Convert to Fortran SIMD-enabled functions Confidence: Low

Passing an array/array section to an ELEMENTAL function/subroutine creates a dependency that prevents vectorization. To fix:
  • Enforce vectorization of the source loop using SIMD instructions and/or create a SIMD version of the function(s) using a directive:
    Source loop: !DIR$ SIMD or !$OMP SIMD
    Inner function definition or declaration: !$OMP DECLARE SIMD
  • Call the function from a DO loop.
Example:
Original code:
elemental subroutine callee(t, q, r)
   real, intent(in) :: t, q
   real, intent(out) :: r
   r = t + q
end subroutine callee
...
do k = 1, nlev
   call callee(a(:,k), b(:,k), c(:,k))
end do
...
Revised code:
subroutine callee(t, q, r)
!$OMP DECLARE SIMD(callee)
   real, intent(in) :: t, q
   real, intent(out) :: r
   r = t + q
end subroutine callee
...
do k = 1, nlev
!$OMP SIMD
   do i = 1, n
      call callee(a(i,k), b(i,k), c(i,k))
   end do
end do
...

Issue: Serialized user function call(s) present

User-defined functions in the loop body are not vectorized.

Recommendation: Enable inline expansion Confidence: Low

Inlining of user-defined functions is disabled by a compiler option. To fix: When using the Ob or inline-level compiler option to control inline expansion, replace the 0 argument with 1 to enable inlining when an inline keyword or attribute is specified, or with 2 to enable inlining of any function at the compiler's discretion.
Windows* OS: /Ob1 or /Ob2
Linux* OS: -inline-level=1 or -inline-level=2

Recommendation: Vectorize serialized function(s) inside loop Confidence: Medium

Some user-defined function(s) are not vectorized or inlined by the compiler. To fix: Do one of the following:
  • Enforce vectorization of the source loop by means of SIMD instructions and/or create a SIMD version of the function(s) using a directive:
    Source loop: !DIR$ SIMD or !$OMP SIMD
    Inner function definition or declaration: !$OMP DECLARE SIMD
  • If using the Ob or inline-level compiler option to control inline expansion with the 1 argument, use an inline keyword to enable inlining, or replace the 1 argument with 2 to enable inlining of any function at the compiler's discretion.

Issue: Scalar math function call(s) present

Math functions in the loop body are preventing the compiler from effectively vectorizing the loop. Improve performance by enabling vectorized math call(s).

Recommendation: Enable inline expansion Confidence: Low

Inlining is disabled by a compiler option. To fix: When using the Ob or inline-level compiler option to control inline expansion, replace the 0 argument with 1 to enable inlining when an inline keyword or attribute is specified, or with 2 to enable inlining of any function at the compiler's discretion.
Windows* OS: /Ob1 or /Ob2
Linux* OS: -inline-level=1 or -inline-level=2

Recommendation: Use the Intel short vector math library for vector intrinsics Confidence: High

Your application calls scalar instead of vectorized versions of math functions. To fix: Do all of the following:
  • Use the -mveclibabi=svml compiler option to specify the Intel short vector math library ABI type for vector intrinsics.
  • Use the -ftree-vectorize and -funsafe-math-optimizations compiler options to enable vector math functions.
  • Use the -L/path/to/intel/lib and -lsvml compiler options to specify an SVML ABI-compatible library at link time.
Example:
gfortran PROGRAM.FOR -O2 -ftree-vectorize -funsafe-math-optimizations -mveclibabi=svml -L/opt/intel/lib/intel64 -lm -lsvml -Wl,-rpath=/opt/intel/lib/intel64
program main
   parameter (N=100000000)
   real*8 angles(N), results(N)
   integer i
   call srand(86456)
   do i = 1, N
      angles(i) = rand()
   enddo
   ! the loop will be auto-vectorized
   do i = 1, N
      results(i) = cos(angles(i))
   enddo
end

Recommendation: Use a Glibc library with vectorized SVML functions Confidence: Low

Your application calls scalar instead of vectorized versions of math functions. To fix: Do all of the following:
  • Upgrade the Glibc library to version 2.22 or higher. It supports SIMD directives in OpenMP* 4.0 or higher.
  • Upgrade the GNU* gfortran compiler to version 4.9 or higher. It supports vectorized math function options.
  • Use the -fopenmp and -ffast-math compiler options to enable vector math functions.
  • Use appropriate OpenMP SIMD directives to enable vectorization.
Note: Also use the -I/path/to/glibc/install/include and -L/path/to/glibc/install/lib compiler options if you have multiple Glibc libraries installed on the host.
Example:
gfortran PROGRAM.FOR -O2 -fopenmp -ffast-math -lrt -lm -mavx2
program main
   parameter (N=100000000)
   real*8 angles(N), results(N)
   integer i
   call srand(86456)
   do i = 1, N
      angles(i) = rand()
   enddo
!$OMP SIMD
   do i = 1, N
      results(i) = cos(angles(i))
   enddo
end

Recommendation: Vectorize math function calls inside loops Confidence: Medium

Your application calls serialized versions of math functions when you use the precise floating point model. To fix: Do one of the following:
  • Add the fast-transcendentals compiler option to replace calls to transcendental functions with faster calls.
    Windows* OS: /Qfast-transcendentals
    Linux* OS: -fast-transcendentals
    CAUTION: This may reduce floating point accuracy.
  • Enforce vectorization of the source loop using the !DIR$ SIMD or !$OMP SIMD directive.

Recommendation: Change the floating point model Confidence: Medium

Your application calls serialized versions of math functions when you use the strict floating point model. To fix: Do one of the following:
  • Use the fast floating point model to enable more aggressive optimizations, or the precise floating point model to disable optimizations that are not value-safe on fast transcendental functions.
    Windows* OS: /fp:fast, or /fp:precise /Qfast-transcendentals
    Linux* OS: -fp-model fast, or -fp-model precise -fast-transcendentals
    CAUTION: This may reduce floating point accuracy.
  • Use the precise floating point model and enforce vectorization of the source loop using the !DIR$ SIMD or !$OMP SIMD directive.

Issue: System function call(s) present

System function call(s) in the loop body are preventing the compiler from vectorizing the loop.

Recommendation: Remove system function call(s) inside loop Confidence: Low

Typically, system function or subroutine calls cannot be vectorized; even a print statement is enough to prevent vectorization. To fix: Avoid using system function calls in loops.
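A sketch of the fix, with hypothetical names; note the revised version prints a summary once instead of on every iteration:

```fortran
! Before: the print statement prevents vectorization
do i = 1, n
   s = s + a(i)
   print *, i, s
end do

! After: the loop can vectorize; report once afterward
do i = 1, n
   s = s + a(i)
end do
print *, n, s
```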

Issue: OpenMP function call(s) present

OpenMP* function call(s) in the loop body are preventing the compiler from effectively vectorizing the loop.

Recommendation: Move OpenMP call(s) outside the loop body Confidence: Low

OpenMP calls prevent automatic vectorization when the compiler cannot move the calls outside the loop body, such as when OpenMP calls are not invariant. To fix:
  1. Split the OpenMP parallel loop section into two using directives:
    Outer section: !$OMP PARALLEL SECTIONS
    Inner section: !$OMP DO NOWAIT
  2. Move the OpenMP calls outside the loop when possible.
Example:
Original code:
!$omp parallel do private(tid, nthreads)
do k = 1, N
   tid = omp_get_thread_num()        ! this call inside the loop prevents vectorization
   nthreads = omp_get_num_threads()  ! this call inside the loop prevents vectorization
   ...
enddo
Revised code:
!$omp parallel private(tid, nthreads)
! Move the OpenMP calls here
tid = omp_get_thread_num()
nthreads = omp_get_num_threads()
!$omp do nowait
do k = 1, N
   ...
enddo
!$omp end parallel

Recommendation: Remove OpenMP lock functions Confidence: Low

Locking objects slows loop execution. To fix: Rewrite the code without OpenMP lock functions. For example, allocating separate arrays for each thread and then merging them after a parallel section may improve speed (but consume more memory).

Issue: Indirect function call(s) present

Indirect function call(s) in the loop body are preventing the compiler from vectorizing the loop. Indirect calls, sometimes called indirect jumps, get the callee address from a register or memory; direct calls get the callee address from an argument. Even if you force loop vectorization, indirect calls remain serialized.

Recommendation: Remove indirect call(s) inside loop Confidence: Low

Indirect function or subroutine calls cannot be vectorized. To fix: Avoid using indirect calls in loops.

Issue: Assumed dependency present

The compiler assumed there is an anti-dependency (Write after read - WAR) or a true dependency (Read after write - RAW) in the loop. Improve performance by investigating the assumption and handling accordingly.

Recommendation: Confirm dependency is real Confidence: Need More Data

There is no confirmation that a real dependency is present in the loop. To confirm: Run a Dependencies analysis.

Recommendation: Remove dependency Confidence: Low

The Dependencies analysis shows there is a real dependency in the loop. To fix: Do one of the following:
  • Rewrite the code to remove the dependency.
  • If there is an anti-dependency, enable vectorization using the !DIR$ SIMD VECTORLENGTH(k) directive, where k is smaller than the distance between dependent iterations in the anti-dependency.
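For example, if iteration i reads an element that iteration i+4 writes (a hypothetical anti-dependency distance of 4), any vector length smaller than 4 is safe:

```fortran
!DIR$ SIMD VECTORLENGTH(2)   ! 2 is smaller than the dependency distance of 4
do i = 1, n - 4
   a(i) = a(i + 4) + 1.0     ! anti-dependency (WAR) with distance 4
end do
```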

Recommendation: Enable vectorization Confidence: Low

The Dependencies analysis shows there is no real dependency in the loop for the given workload. Tell the compiler it is safe to vectorize using the restrict keyword or a directive:
!DIR$ SIMD or !$OMP SIMD: ignores all dependencies in the loop
!DIR$ IVDEP: ignores only vector dependencies (the safest option)
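A sketch of the IVDEP form, assuming a hypothetical offset k that is known to be non-negative at runtime:

```fortran
!DIR$ IVDEP
do i = 1, n
   a(i + k) = a(i) + b(i)   ! compiler assumes a dependency on a; IVDEP overrides it
end do
```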

Issue: Vector register spilling possible

Possible register spilling was detected and all vector registers are in use. This may negatively impact performance, because the spilled variable must be stored to and reloaded from memory. Improve performance by decreasing vector register pressure.

Recommendation: Decrease unroll factor Confidence: Low

The current directive unroll factor increases vector register pressure. To fix: Prevent unrolling with the !DIR$ NOUNROLL directive, or decrease the unroll factor with the !DIR$ UNROLL directive.

Recommendation: Split loop into smaller loops Confidence: Low

Possible register spilling along with high vector register pressure is preventing effective vectorization. To fix: Use the !DIR$ DISTRIBUTE POINT directive or rewrite your code to distribute the source loop. This can decrease register pressure as well as enable software pipelining and improve both instruction and data cache use.
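A sketch of the in-loop placement of the directive, with hypothetical arrays:

```fortran
do i = 1, n
   a(i) = b(i) + c(i)
!DIR$ DISTRIBUTE POINT
   d(i) = e(i) * f(i)   ! compiler may split the loop here into two smaller loops
end do
```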

Issue: Possible inefficient memory access patterns present

Inefficient memory access patterns may result in significant vector code execution slowdown or block automatic vectorization by the compiler. Improve performance by investigating.

Recommendation: Confirm inefficient memory access patterns Confidence: Need More Data

There is no confirmation inefficient memory access patterns are present. To confirm: Run a Memory Access Patterns analysis.

Issue: Inefficient memory access patterns present

There is a high percentage of memory instructions with irregular (variable or random) stride accesses. Improve performance by investigating and handling accordingly.

Recommendation: Use SoA instead of AoS Confidence: Low

An array is the most common type of data structure containing a contiguous collection of data items that can be accessed by an ordinal index. You can organize this data as an array of structures (AoS) or as a structure of arrays (SoA). While AoS organization is excellent for encapsulation, it can hinder effective vector processing. To fix: Rewrite code to organize data using SoA instead of AoS.
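A hedged Fortran sketch of the two layouts, with hypothetical type and array names:

```fortran
! AoS: accessing only x walks memory with the stride of the whole type
type point
   real :: x, y, z
end type point
type(point) :: p(1000)

! SoA: each coordinate is a contiguous, unit-stride array
type points
   real :: x(1000), y(1000), z(1000)
end type points
type(points) :: ps
```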

Recommendation: Reorder loops Confidence: Low

This loop may have less efficient memory access patterns than a nearby outer loop. To fix: Run a Memory Access Patterns analysis on the outer loop. If the memory access patterns are more efficient for the outer loop, reorder the loops if possible.
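Because Fortran arrays are column-major, making the first index the innermost loop yields unit-stride access; a sketch with hypothetical bounds:

```fortran
! Strided access: the inner loop varies the second (column) index
do i = 1, n
   do j = 1, m
      a(i, j) = a(i, j) + 1.0
   end do
end do

! Unit-stride access after interchanging the loops
do j = 1, m
   do i = 1, n
      a(i, j) = a(i, j) + 1.0
   end do
end do
```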

Recommendation: Use the Fortran 2008 CONTIGUOUS attribute Confidence: Low

The loop is multi-versioned for unit and non-unit strides in assumed-shape arrays or pointers, but the marked versions of the loop have unit-stride access only. The CONTIGUOUS attribute specifies that the target of a pointer or an assumed-shape array is contiguous, which can enable optimizations that rely on the object occupying a contiguous block of memory. Note: If this assertion is wrong and the data is not contiguous at runtime, the results are indeterminate and could include wrong answers and segmentation faults.
Example:
real, pointer, contiguous :: ptr(:)
real, contiguous :: arrayarg(:, :)

Issue: Potential underutilization of FMA instructions

Your current hardware supports the AVX2 instruction set architecture (ISA), which enables the use of fused multiply-add (FMA) instructions. Improve performance by utilizing FMA instructions.

Recommendation: Target the AVX2 ISA Confidence: Low

Although static analysis presumes the loop may benefit from FMA instructions available with the AVX2 ISA, no AVX2-specific code executed for this loop. To fix: Use the xCORE-AVX2 compiler option to generate AVX2-specific code, or the axCORE-AVX2 compiler option to enable multiple, feature-specific, auto-dispatch code generation, including AVX2.
Windows* OS: /QxCORE-AVX2 or /QaxCORE-AVX2
Linux* OS: -xCORE-AVX2 or -axCORE-AVX2

Recommendation: Target a specific ISA instead of using the xHost option Confidence: Low

Although static analysis presumes the loop may benefit from FMA instructions available with the AVX2 ISA, no AVX2-specific code executed for this loop. To fix: Instead of using the xHost compiler option, which limits optimization opportunities to the host ISA, use the axCORE-AVX2 compiler option to compile for machines with and without AVX2 support, or the xCORE-AVX2 compiler option to compile for machines with AVX2 support only.
Windows* OS: /QxCORE-AVX2 or /QaxCORE-AVX2
Linux* OS: -xCORE-AVX2 or -axCORE-AVX2

Recommendation: Explicitly enable FMA generation when using the strict floating-point model Confidence: Low

Static analysis presumes the loop may benefit from FMA instructions available with the AVX2 ISA, but the strict floating-point model disables FMA instruction generation by default. To fix: Override this behavior using the fma compiler option.
Windows* OS: /Qfma
Linux* OS: -fma

Intel, and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others.
© 2016 Intel Corporation