----------------------------------------------------------------------
                        KNOWN ISSUES
----------------------------------------------------------------------

### Solaris compilers

 * Compilation on Solaris will freeze when MPICH is configured with
   "--enable-strict". This is due to a Solaris compiler bug related
   to a specific ordering of assignments to long long and short
   variables. See the following ticket for more information:

     https://trac.mpich.org/projects/mpich/ticket/2105
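
   As a purely illustrative sketch (NOT the reproducer from the
   ticket), the kind of mixed-width assignment ordering described
   above looks roughly like this:

      /* Hypothetical illustration only: interleaved assignments
       * between a long long and a short variable.  The actual code
       * that hangs the Solaris compiler is in the ticket above. */
      #include <stdio.h>

      int main(void)
      {
          long long ll = 0x12345678abcdLL;
          short s;

          s  = (short) ll;   /* narrow to short first ...            */
          ll = s;            /* ... then widen back to long long     */

          printf("%lld\n", ll);
          return 0;
      }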

### PathScale compilers

 * Due to bugs in the PathScale compiler suite, some configurations of MPICH
   do not build correctly:

    - v5.0.1: When the --disable-shared configure option is passed to
      MPICH, applications will segfault.

    - v5.0.5: Unless the --enable-fast=O0 configure flag is passed to
      MPICH, applications will hang.

   See the following ticket for more information:

    https://trac.mpich.org/projects/mpich/ticket/2104

### Fine-grained thread safety

 * ch3:sock does not (and will not) support fine-grained threading.

 * MPI-IO APIs are not currently thread-safe when using fine-grained
   threading (--enable-thread-cs=per-object); one possible
   application-level workaround is sketched after this list.

 * ch3:nemesis:tcp fine-grained threading is still experimental and may
   have correctness or performance issues.  Known correctness issues
   include dynamic process support and generalized request support.
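
   As a minimal sketch of the application-level workaround mentioned
   above (the mutex, file name, and I/O pattern are assumptions for
   illustration, not part of MPICH):

      /* Sketch: serialize MPI-IO calls in the application when MPICH
       * is built with --enable-thread-cs=per-object. */
      #include <mpi.h>
      #include <pthread.h>

      static pthread_mutex_t io_mutex = PTHREAD_MUTEX_INITIALIZER;

      static void write_block(MPI_File fh, MPI_Offset offset,
                              const int *buf, int count)
      {
          /* Allow only one thread at a time into the MPI-IO path. */
          pthread_mutex_lock(&io_mutex);
          MPI_File_write_at(fh, offset, buf, count, MPI_INT,
                            MPI_STATUS_IGNORE);
          pthread_mutex_unlock(&io_mutex);
      }

      int main(int argc, char **argv)
      {
          int provided, rank, data = 42;
          MPI_File fh;

          MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          MPI_File_open(MPI_COMM_WORLD, "out.dat",
                        MPI_MODE_CREATE | MPI_MODE_WRONLY,
                        MPI_INFO_NULL, &fh);
          write_block(fh, (MPI_Offset) rank * sizeof(int), &data, 1);
          MPI_File_close(&fh);

          MPI_Finalize();
          return 0;
      }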


### Lacking channel-specific features

 * ch3 does not presently support communication across heterogeneous
   platforms (e.g., a big-endian machine communicating with a
   little-endian machine).

 * Support for the "external32" data representation is incomplete. This
   affects the MPI_Pack_external and MPI_Unpack_external routines, as
   well as the external data representation capabilities of ROMIO.  In
   particular: noncontiguous user buffers can consume excessive
   amounts of memory inside the MPI library, and any type whose width
   differs between the native representation and the external32
   representation will likely cause data corruption.  A minimal sketch
   of the affected pack/unpack calls appears after this list.  The
   following ticket contains some additional information:

     http://trac.mpich.org/projects/mpich/ticket/1754

 * ch3 has known problems in some cases when threading and dynamic
   processes are used together on communicators of size greater than
   one.
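
   For reference, the sketch below shows the MPI_Pack_external /
   MPI_Unpack_external calls affected by the external32 limitation
   described above (buffer handling and values are illustrative only):

      /* Sketch of the external32 pack/unpack path; values and buffer
       * handling are illustrative only. */
      #include <mpi.h>
      #include <stdio.h>
      #include <stdlib.h>

      int main(int argc, char **argv)
      {
          double   in[4] = { 1.0, 2.0, 3.0, 4.0 }, out[4];
          MPI_Aint packed_size, position = 0;
          void    *packed;

          MPI_Init(&argc, &argv);

          /* Ask how much space the external32 representation needs,
           * then pack into it and unpack again. */
          MPI_Pack_external_size("external32", 4, MPI_DOUBLE,
                                 &packed_size);
          packed = malloc(packed_size);

          MPI_Pack_external("external32", in, 4, MPI_DOUBLE,
                            packed, packed_size, &position);

          position = 0;
          MPI_Unpack_external("external32", packed, packed_size,
                              &position, out, 4, MPI_DOUBLE);

          printf("out[0] = %f\n", out[0]);
          free(packed);
          MPI_Finalize();
          return 0;
      }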


### Performance issues

 * In select cases, SMP-aware collectives perform worse than their
   non-SMP-aware counterparts, e.g. MPI_Reduce with message sizes
   larger than 64 KiB.  SMP-aware collectives can be disabled by
   setting the environment variable MPIR_CVAR_ENABLE_SMP_COLLECTIVES
   to 0.

 * MPI_Irecv operations that are not explicitly completed before
   MPI_Finalize is called may fail to complete before MPI_Finalize
   returns, and thus never complete.  Furthermore, any matching send
   operations may erroneously fail.  By "explicitly completed" we mean
   that the request associated with the operation is completed by one
   of the MPI_Test or MPI_Wait routines, as in the sketch below.
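
   A minimal sketch of the pattern described above, explicitly
   completing the receive with MPI_Wait before MPI_Finalize (tag,
   buffer, and ranks are illustrative only):

      /* Sketch: explicitly complete every MPI_Irecv before calling
       * MPI_Finalize. */
      #include <mpi.h>

      int main(int argc, char **argv)
      {
          int rank, size, buf = 0;
          MPI_Request req;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          if (rank == 0 && size > 1) {
              MPI_Irecv(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
              /* ... other work ... */
              MPI_Wait(&req, MPI_STATUS_IGNORE);  /* explicit completion */
          } else if (rank == 1) {
              MPI_Send(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
          }

          MPI_Finalize();  /* no pending nonblocking receives remain */
          return 0;
      }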