## KNOWN ISSUES

### CH4

* Build issues
  - CH4 will not build with Solaris compilers.

* Stability with large buffers
  - CH4 may suffer from stability issues when operating on large
    communication buffers (>= 2 GB); see the sketch after this list.

* Dynamic process support
  - Requires the fi_tagged capability in the OFI netmod.
  - Not supported in the UCX netmod at this time.

* Test failures
  - CH4 does not currently pass 100% of the test suite.
  - Which tests fail depends on the experimental setup and on the
    network support built in.
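
As an illustration of the scale involved, here is a minimal sketch of a
single transfer at the 2 GB boundary (the element type, count, and
process layout are illustrative assumptions, not from this document;
run with at least 2 processes and sufficient memory per rank):

```c
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;
    /* 256 Mi doubles = 2 GiB per message: the regime in which CH4 is
     * known to be fragile. */
    int count = 256 * 1024 * 1024;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *buf = calloc(count, sizeof(double));

    if (rank == 0)
        MPI_Send(buf, count, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(buf, count, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    free(buf);
    MPI_Finalize();
    return 0;
}
```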

### OS X

* C++ bindings - Exception handling in the C++ bindings is broken
  because of a bug in libc++ (see
  https://github.com/android-ndk/ndk/issues/289). You can work around
  this issue by explicitly linking your application to libstdc++
  instead (e.g., mpicxx -o foo foo.cxx -lstdc++).

* GCC builds - When using Homebrew or MacPorts gcc (not the default
  gcc, which is just a symlink to clang), a bug in gcc
  (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=40838) causes stack
  alignment issues in some cases, leading to segmentation faults. You
  can work around this issue by building MPICH with clang, or by
  building with a lower optimization level than the default (-O2).
  Currently, this bug only seems to show up when the
  --disable-error-checking flag is passed to configure.

### PGI compilers

* ch3:nemesis will segfault during MPI_Init when built with recent
  versions of the PGI compilers.

### PathScale compilers

* Due to bugs in the PathScale compiler suite, some configurations of
  MPICH do not build correctly.

  - v5.0.1: When the --disable-shared configure option is passed to
    MPICH, applications will segfault.

  - v5.0.5: Unless you pass the --enable-fast=O0 configure flag to
    MPICH, applications will hang.

  See the following ticket for more information:

  https://trac.mpich.org/projects/mpich/ticket/2104

### Fine-grained thread safety

* ch3:sock does not (and will not) support fine-grained threading.

* MPI-IO APIs are not currently thread-safe when using fine-grained
  threading (--enable-thread-cs=per-object); see the sketch after
  this list.

* ch3:nemesis:tcp fine-grained threading is still experimental and may
  have correctness or performance issues. Known correctness issues
  include dynamic process support and generalized request support.
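
As context for the notes above, fine-grained (per-object) critical
sections are only exercised when an application requests full thread
support. A minimal sketch of requesting that mode (the diagnostic
message is illustrative, not from MPICH):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    /* Per-object locking matters only when the application requests
     * MPI_THREAD_MULTIPLE: */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE)
        fprintf(stderr, "MPI_THREAD_MULTIPLE not provided\n");

    /* ... concurrent MPI calls from multiple threads would go here;
     * under per-object threading, avoid the MPI-IO routines noted
     * above ... */

    MPI_Finalize();
    return 0;
}
```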

### Lacking channel-specific features

* ch3 does not presently support communication across heterogeneous
  platforms (e.g., a big-endian machine communicating with a
  little-endian machine).

* Support for the "external32" data representation is incomplete. This
  affects the MPI_Pack_external and MPI_Unpack_external routines, as
  well as the external data representation capabilities of ROMIO. In
  particular, noncontiguous user buffers can consume egregious amounts
  of memory inside the MPI library, and any type whose width differs
  between the native representation and the external32 representation
  will likely cause data corruption (see the sketch after this list).
  The following ticket contains some additional information:

  https://trac.mpich.org/projects/mpich/ticket/1754

* ch3 has known problems in some cases when threading and dynamic
  processes are used together on communicators of size greater than
  one.
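
A minimal sketch of the affected packing API (the buffer contents and
counts are arbitrary; a contiguous buffer of a fixed-width type, as
here, is the least risky case, while noncontiguous buffers and types
whose native width differs from their external32 width are where the
problems above appear):

```c
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int data[4] = {1, 2, 3, 4};
    MPI_Aint size, position = 0;
    void *packed;

    MPI_Init(&argc, &argv);

    /* Query the packed size for the external32 representation, then
     * pack a small contiguous buffer of a fixed-width type. */
    MPI_Pack_external_size("external32", 4, MPI_INT, &size);
    packed = malloc(size);
    MPI_Pack_external("external32", data, 4, MPI_INT,
                      packed, size, &position);

    free(packed);
    MPI_Finalize();
    return 0;
}
```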

### Process managers

* Hydra has a bug related to stdin handling:

  https://trac.mpich.org/projects/mpich/ticket/1782

### Performance issues

* In select cases, SMP-aware collectives perform worse than their
  non-SMP-aware counterparts, e.g., MPI_Reduce with message sizes
  larger than 64 KiB. They can be disabled by setting the environment
  variable MPIR_CVAR_ENABLE_SMP_COLLECTIVES to 0; the first sketch
  after this list shows a reduction in the affected regime.

* MPI_Irecv operations that are not explicitly completed before
  MPI_Finalize is called may fail to complete before MPI_Finalize
  returns, and thus never complete. Furthermore, any matching send
  operations may erroneously fail. By "explicitly completed" we mean
  that the request associated with the operation is completed by one
  of the MPI_Test or MPI_Wait routines; the second sketch after this
  list shows the safe pattern.
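
First sketch, for the collectives item: a reduction above the 64 KiB
threshold (the message size, datatype, and operation are illustrative
assumptions). Running it with and without
MPIR_CVAR_ENABLE_SMP_COLLECTIVES=0 set in the environment allows the
two algorithm families to be compared:

```c
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    /* 128 Ki doubles = 1 MiB per message, well above the ~64 KiB
     * threshold mentioned above. */
    int count = 128 * 1024;
    double *in, *out;

    MPI_Init(&argc, &argv);

    in = calloc(count, sizeof(double));
    out = calloc(count, sizeof(double));

    /* Time this call with and without
     * MPIR_CVAR_ENABLE_SMP_COLLECTIVES=0 in the environment. */
    MPI_Reduce(in, out, count, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    free(in);
    free(out);
    MPI_Finalize();
    return 0;
}
```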
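
Second sketch, for the MPI_Irecv item: the safe pattern, with the
explicit completion call the paragraph above requires (the ranks, tag,
and payload are illustrative; run with at least 2 processes):

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, val = 0;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Irecv(&val, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        /* Explicit completion: without this MPI_Wait (or an MPI_Test
         * loop), the receive may never complete and the matching send
         * may erroneously fail. */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        MPI_Send(&val, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```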