<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<HTML>
<HEAD>
<META NAME="GENERATOR" CONTENT="DOCTEXT">
<TITLE>Constants</TITLE>
</HEAD>
<BODY BGCOLOR="#FFFFFF">
<H1 id="Constants">Constants</H1>
The meanings of MPI's defined constants are described below.
<H2>Data types</H2>
Note that the Fortran types should only be used in Fortran programs,
and the C types should only be used in C programs.  For example,
it is an error to use <tt>MPI_INT</tt> for a Fortran INTEGER.
Datatypes are of type <tt>MPI_Datatype</tt> in C, of type <tt>INTEGER</tt> in Fortran,
and of type <tt>Type(MPI_Datatype)</tt> in Fortran 2008.
<P>
<H2>C datatypes</H2>
<DL>
<DT><B>MPI_CHAR <a name="MPI_CHAR"></a></B> <DD> char

<DT><B>MPI_SIGNED_CHAR <a name="MPI_SIGNED_CHAR"></a></B> <DD> signed char

<DT><B>MPI_UNSIGNED_CHAR <a name="MPI_UNSIGNED_CHAR"></a></B> <DD> unsigned char

<DT><B>MPI_BYTE <a name="MPI_BYTE"></a></B> <DD> See standard; like unsigned char

<DT><B>MPI_WCHAR <a name="MPI_WCHAR"></a></B> <DD> wide character (wchar_t)

<DT><B>MPI_SHORT <a name="MPI_SHORT"></a></B> <DD> short

<DT><B>MPI_UNSIGNED_SHORT <a name="MPI_UNSIGNED_SHORT"></a></B> <DD> unsigned short

<DT><B>MPI_INT   <a name="MPI_INT"></a></B> <DD> int

<DT><B>MPI_UNSIGNED <a name="MPI_UNSIGNED"></a></B> <DD> unsigned int

<DT><B>MPI_LONG <a name="MPI_LONG"></a></B> <DD> long

<DT><B>MPI_UNSIGNED_LONG <a name="MPI_UNSIGNED_LONG"></a></B> <DD> unsigned long

<DT><B>MPI_LONG_LONG_INT <a name="MPI_LONG_LONG_INT"></a></B> <DD> long long

<DT><B>MPI_LONG_LONG <a name="MPI_LONG_LONG"></a></B> <DD> synonym for <tt>MPI_LONG_LONG_INT</tt>
<DT><B>MPI_UNSIGNED_LONG_LONG <a name="MPI_UNSIGNED_LONG_LONG"></a></B> <DD> unsigned long long

<DT><B>MPI_FLOAT <a name="MPI_FLOAT"></a></B> <DD> float

<DT><B>MPI_DOUBLE <a name="MPI_DOUBLE"></a></B> <DD> double

<DT><B>MPI_LONG_DOUBLE  <a name="MPI_LONG_DOUBLE"></a></B> <DD> long double (some systems may not implement this)

<DT><B>MPI_INT8_T  <a name="MPI_INT8_T"></a></B> <DD> int8_t

<DT><B>MPI_INT16_T <a name="MPI_INT16_T"></a></B> <DD> int16_t

<DT><B>MPI_INT32_T <a name="MPI_INT32_T"></a></B> <DD> int32_t

<DT><B>MPI_INT64_T <a name="MPI_INT64_T"></a></B> <DD> int64_t

<DT><B>MPI_UINT8_T  <a name="MPI_UINT8_T"></a></B> <DD> uint8_t

<DT><B>MPI_UINT16_T <a name="MPI_UINT16_T"></a></B> <DD> uint16_t

<DT><B>MPI_UINT32_T <a name="MPI_UINT32_T"></a></B> <DD> uint32_t

<DT><B>MPI_UINT64_T <a name="MPI_UINT64_T"></a></B> <DD> uint64_t

<DT><B>MPI_C_BOOL <a name="MPI_C_BOOL"></a></B> <DD> _Bool

<DT><B>MPI_C_FLOAT_COMPLEX <a name="MPI_C_FLOAT_COMPLEX"></a></B> <DD> float _Complex

<DT><B>MPI_C_COMPLEX <a name="MPI_C_COMPLEX"></a></B> <DD> float _Complex

<DT><B>MPI_C_DOUBLE_COMPLEX <a name="MPI_C_DOUBLE_COMPLEX"></a></B> <DD> double _Complex

<DT><B>MPI_C_LONG_DOUBLE_COMPLEX <a name="MPI_C_LONG_DOUBLE_COMPLEX"></a></B> <DD> long double _Complex
</DL>
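<P>
As a minimal sketch of how these handles are used, the datatype argument of a
communication call describes the C type of the buffer; the two-process layout
and tag value below are purely illustrative.
<PRE>
#include &lt;mpi.h&gt;
#include &lt;stdio.h&gt;

/* Run with at least two processes: rank 0 sends one int to rank 1. */
int main(int argc, char **argv)
{
    int rank, value = 42;

    MPI_Init(&amp;argc, &amp;argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;rank);
    if (rank == 0)
        MPI_Send(&amp;value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1) {
        MPI_Recv(&amp;value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received %d\n", value);
    }
    MPI_Finalize();
    return 0;
}
</PRE>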
<P>
<P>
The following datatypes are for use with the <tt>MPI_MAXLOC</tt> and
<tt>MPI_MINLOC</tt> reduction operations.
<DL>
<DT><B>MPI_FLOAT_INT <a name="MPI_FLOAT_INT"></a></B> <DD> <tt>struct { float, int }
</tt>
<DT><B>MPI_LONG_INT  <a name="MPI_LONG_INT"></a></B> <DD> <tt>struct { long, int }
</tt>
<DT><B>MPI_DOUBLE_INT <a name="MPI_DOUBLE_INT"></a></B> <DD> <tt>struct { double, int }
</tt>
<DT><B>MPI_SHORT_INT  <a name="MPI_SHORT_INT"></a></B> <DD> <tt>struct { short, int }
</tt>
<DT><B>MPI_2INT       <a name="MPI_2INT"></a></B> <DD> <tt>struct { int, int }
</tt>
<DT><B>MPI_LONG_DOUBLE_INT <a name="MPI_LONG_DOUBLE_INT"></a></B> <DD> <tt>struct { long double, int }</tt>; this
is an <em>optional</em> type, and may be set to <tt>MPI_DATATYPE_NULL
</tt>
</DL>
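<P>
A minimal sketch of using a pair type with <tt>MPI_MAXLOC</tt>; the C struct
mirrors the layout of <tt>MPI_DOUBLE_INT</tt>, and the per-process value is
illustrative.
<PRE>
#include &lt;mpi.h&gt;
#include &lt;stdio.h&gt;

/* Find the largest per-process value and the rank that holds it. */
int main(int argc, char **argv)
{
    struct { double value; int rank; } in, out;   /* matches MPI_DOUBLE_INT */
    int rank;

    MPI_Init(&amp;argc, &amp;argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;rank);
    in.value = (double)(rank * rank);             /* illustrative local value */
    in.rank  = rank;
    MPI_Reduce(&amp;in, &amp;out, 1, MPI_DOUBLE_INT, MPI_MAXLOC, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("max %.1f on rank %d\n", out.value, out.rank);
    MPI_Finalize();
    return 0;
}
</PRE>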
<P>
<P>
Special datatypes for C and Fortran
<DL>
<DT><B>MPI_PACKED <a name="MPI_PACKED"></a></B> <DD> For <tt>MPI_Pack</tt> and <tt>MPI_Unpack
</tt>
<DT><B>MPI_UB <a name="MPI_UB"></a></B> <DD> For <tt>MPI_Type_struct</tt>; an upper-bound indicator.  Removed in MPI-3

<DT><B>MPI_LB <a name="MPI_LB"></a></B> <DD> For <tt>MPI_Type_struct</tt>; a lower-bound indicator. Removed in MPI-3
</DL>
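<P>
A minimal sketch of <tt>MPI_PACKED</tt>; the buffer size and destination rank
are illustrative, and the receiver would call <tt>MPI_Unpack</tt> in the same
order.
<PRE>
#include &lt;mpi.h&gt;

/* Sketch: pack an int and a double into one message and send it as
   MPI_PACKED.  Assumes MPI has been initialized and a rank 1 exists. */
void send_packed(void)
{
    int    count = 5;
    double x     = 3.14;
    char   buf[64];
    int    pos   = 0;

    MPI_Pack(&amp;count, 1, MPI_INT,    buf, (int)sizeof(buf), &amp;pos, MPI_COMM_WORLD);
    MPI_Pack(&amp;x,     1, MPI_DOUBLE, buf, (int)sizeof(buf), &amp;pos, MPI_COMM_WORLD);
    MPI_Send(buf, pos, MPI_PACKED, 1, 0, MPI_COMM_WORLD);
}
</PRE>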
<P>
<H2>Fortran datatypes</H2>
<DL>
<DT><B>MPI_REAL <a name="MPI_REAL"></a></B> <DD> <tt>REAL
</tt>
<DT><B>MPI_INTEGER <a name="MPI_INTEGER"></a></B> <DD> <tt>INTEGER
</tt>
<DT><B>MPI_LOGICAL <a name="MPI_LOGICAL"></a></B> <DD> <tt>LOGICAL
</tt>
<DT><B>MPI_DOUBLE_PRECISION <a name="MPI_DOUBLE_PRECISION"></a></B> <DD> <tt>DOUBLE PRECISION
</tt>
<DT><B>MPI_COMPLEX <a name="MPI_COMPLEX"></a></B> <DD> <tt>COMPLEX
</tt>
<DT><B>MPI_DOUBLE_COMPLEX <a name="MPI_DOUBLE_COMPLEX"></a></B> <DD> <tt>complex*16</tt> (or <tt>complex*32</tt>) where supported.
</DL>
<P>
The following datatypes are optional:
<DL>
<DT><B>MPI_INTEGER1  <a name="MPI_INTEGER1"></a></B> <DD> <tt>integer*1</tt> if supported

<DT><B>MPI_INTEGER2  <a name="MPI_INTEGER2"></a></B> <DD> <tt>integer*2</tt> if supported

<DT><B>MPI_INTEGER4  <a name="MPI_INTEGER4"></a></B> <DD> <tt>integer*4</tt> if supported

<DT><B>MPI_INTEGER8  <a name="MPI_INTEGER8"></a></B> <DD> <tt>integer*8</tt> if supported

<DT><B>MPI_INTEGER16 <a name="MPI_INTEGER16"></a></B> <DD> <tt>integer*16</tt> if supported

<DT><B>MPI_REAL4     <a name="MPI_REAL4"></a></B> <DD> <tt>real*4</tt> if supported

<DT><B>MPI_REAL8     <a name="MPI_REAL8"></a></B> <DD> <tt>real*8</tt> if supported
<DT><B>MPI_REAL16    <a name="MPI_REAL16"></a></B> <DD> <tt>real*16</tt> if supported

<DT><B>MPI_COMPLEX8  <a name="MPI_COMPLEX8"></a></B> <DD> <tt>complex*8</tt> if supported

<DT><B>MPI_COMPLEX16 <a name="MPI_COMPLEX16"></a></B> <DD> <tt>complex*16</tt> if supported

<DT><B>MPI_COMPLEX32 <a name="MPI_COMPLEX32"></a></B> <DD> <tt>complex*32</tt> if supported
</DL>
<P>
The following datatypes are for use with the <tt>MPI_MAXLOC</tt> and
<tt>MPI_MINLOC</tt> reduction operations.  In Fortran, these datatypes always
consist of two elements of the same Fortran type.
<DL>
<DT><B>MPI_2INTEGER <a name="MPI_2INTEGER"></a></B> <DD> <tt>INTEGER,INTEGER
</tt>
<DT><B>MPI_2REAL    <a name="MPI_2REAL"></a></B> <DD> <tt>REAL, REAL
</tt>
<DT><B>MPI_2DOUBLE_PRECISION <a name="MPI_2DOUBLE_PRECISION"></a></B> <DD> <tt>DOUBLE PRECISION, DOUBLE PRECISION
</tt>
</DL>
<P>
Datatypes corresponding to MPI's own types
<DL>
<DT><B>MPI_AINT <a name="MPI_AINT"></a></B> <DD> Datatype for an <tt>MPI_Aint
</tt>
<DT><B>MPI_OFFSET <a name="MPI_OFFSET"></a></B> <DD> Datatype for an <tt>MPI_Offset
</tt>
<DT><B>MPI_COUNT <a name="MPI_COUNT"></a></B> <DD> Datatype for an <tt>MPI_Count
</tt>
</DL>
<P>
<H2>MPI Datatype Combiner Names</H2>
<DL>
<DT><B>MPI_COMBINER_NAMED            <a name="MPI_COMBINER_NAMED"></a></B> <DD> a named predefined datatype

<DT><B>MPI_COMBINER_DUP              <a name="MPI_COMBINER_DUP"></a></B> <DD> MPI_TYPE_DUP

<DT><B>MPI_COMBINER_CONTIGUOUS       <a name="MPI_COMBINER_CONTIGUOUS"></a></B> <DD> MPI_TYPE_CONTIGUOUS

<DT><B>MPI_COMBINER_VECTOR           <a name="MPI_COMBINER_VECTOR"></a></B> <DD> MPI_TYPE_VECTOR

<DT><B>MPI_COMBINER_HVECTOR_INTEGER  <a name="MPI_COMBINER_HVECTOR_INTEGER"></a></B> <DD> Removed in MPI-3

<DT><B>MPI_COMBINER_HVECTOR          <a name="MPI_COMBINER_HVECTOR"></a></B> <DD> MPI_TYPE_CREATE_HVECTOR

<DT><B>MPI_COMBINER_INDEXED          <a name="MPI_COMBINER_INDEXED"></a></B> <DD> MPI_TYPE_INDEXED

<DT><B>MPI_COMBINER_HINDEXED_INTEGER <a name="MPI_COMBINER_HINDEXED_INTEGER"></a></B> <DD> Removed in MPI-3

<DT><B>MPI_COMBINER_HINDEXED         <a name="MPI_COMBINER_HINDEXED"></a></B> <DD> MPI_TYPE_CREATE_HINDEXED

<DT><B>MPI_COMBINER_INDEXED_BLOCK    <a name="MPI_COMBINER_INDEXED_BLOCK"></a></B> <DD> MPI_TYPE_CREATE_INDEXED_BLOCK

<DT><B>MPI_COMBINER_STRUCT_INTEGER   <a name="MPI_COMBINER_STRUCT_INTEGER"></a></B> <DD> Removed in MPI-3

<DT><B>MPI_COMBINER_STRUCT           <a name="MPI_COMBINER_STRUCT"></a></B> <DD> MPI_TYPE_CREATE_STRUCT

<DT><B>MPI_COMBINER_SUBARRAY         <a name="MPI_COMBINER_SUBARRAY"></a></B> <DD> MPI_TYPE_CREATE_SUBARRAY

<DT><B>MPI_COMBINER_DARRAY           <a name="MPI_COMBINER_DARRAY"></a></B> <DD> MPI_TYPE_CREATE_DARRAY

<DT><B>MPI_COMBINER_F90_REAL         <a name="MPI_COMBINER_F90_REAL"></a></B> <DD> MPI_TYPE_CREATE_F90_REAL

<DT><B>MPI_COMBINER_F90_COMPLEX      <a name="MPI_COMBINER_F90_COMPLEX"></a></B> <DD> MPI_TYPE_CREATE_F90_COMPLEX

<DT><B>MPI_COMBINER_F90_INTEGER      <a name="MPI_COMBINER_F90_INTEGER"></a></B> <DD> MPI_TYPE_CREATE_F90_INTEGER

<DT><B>MPI_COMBINER_RESIZED          <a name="MPI_COMBINER_RESIZED"></a></B> <DD> MPI_TYPE_CREATE_RESIZED

<DT><B>MPI_COMBINER_HINDEXED_BLOCK   <a name="MPI_COMBINER_HINDEXED_BLOCK"></a></B> <DD> MPI_TYPE_CREATE_HINDEXED_BLOCK
</DL>
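<P>
These values are returned by <tt>MPI_TYPE_GET_ENVELOPE</tt>; a minimal sketch
follows (the vector parameters are illustrative).
<PRE>
#include &lt;mpi.h&gt;
#include &lt;stdio.h&gt;

/* Sketch: query how a derived datatype was constructed.
   Assumes MPI has been initialized. */
void show_combiner(void)
{
    MPI_Datatype vtype;
    int ni, na, nd, combiner;

    MPI_Type_vector(4, 1, 8, MPI_DOUBLE, &amp;vtype);
    MPI_Type_get_envelope(vtype, &amp;ni, &amp;na, &amp;nd, &amp;combiner);
    if (combiner == MPI_COMBINER_VECTOR)
        printf("built with MPI_Type_vector\n");
    MPI_Type_free(&amp;vtype);
}
</PRE>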
<P>
<H2>MPI Datatype Type Classes</H2>
MPI type classes, used with routines such as <tt>MPI_TYPE_MATCH_SIZE</tt> to
return datatypes with a defined precision and range.
<DL>
<DT><B>MPI_TYPECLASS_REAL    <a name="MPI_TYPECLASS_REAL"></a></B> <DD> <tt>REAL
</tt>
<DT><B>MPI_TYPECLASS_INTEGER <a name="MPI_TYPECLASS_INTEGER"></a></B> <DD> <tt>INTEGER
</tt>
<DT><B>MPI_TYPECLASS_COMPLEX <a name="MPI_TYPECLASS_COMPLEX"></a></B> <DD> <tt>COMPLEX
</tt>
</DL>
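<P>
A minimal sketch using <tt>MPI_TYPE_MATCH_SIZE</tt>; the request for an
8-byte REAL is illustrative.
<PRE>
#include &lt;mpi.h&gt;

/* Sketch: find the predefined datatype matching an 8-byte REAL.
   Assumes MPI has been initialized; the returned handle is a named
   predefined type and must not be freed. */
void match_real8(MPI_Datatype *t)
{
    MPI_Type_match_size(MPI_TYPECLASS_REAL, 8, t);
}
</PRE>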
<P>
<H2>MPI Darray and Subarray Values</H2>
These values are used to create a datatype with the <tt>DARRAY</tt> and <tt>SUBARRAY
</tt>constructors.
<DL>
<DT><B>MPI_ORDER_C              <a name="MPI_ORDER_C"></a></B> <DD> Row-major order (as used by C)

<DT><B>MPI_ORDER_FORTRAN        <a name="MPI_ORDER_FORTRAN"></a></B> <DD> Column-major order (as used by Fortran)

<DT><B>MPI_DISTRIBUTE_BLOCK     <a name="MPI_DISTRIBUTE_BLOCK"></a></B> <DD> Block distribution

<DT><B>MPI_DISTRIBUTE_CYCLIC    <a name="MPI_DISTRIBUTE_CYCLIC"></a></B> <DD> Cyclic distribution

<DT><B>MPI_DISTRIBUTE_NONE      <a name="MPI_DISTRIBUTE_NONE"></a></B> <DD> This dimension is not distributed

<DT><B>MPI_DISTRIBUTE_DFLT_DARG <a name="MPI_DISTRIBUTE_DFLT_DARG"></a></B> <DD> Use the default distribution
</DL>
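<P>
A minimal sketch of <tt>MPI_TYPE_CREATE_SUBARRAY</tt> with <tt>MPI_ORDER_C</tt>;
the array and subarray extents are illustrative.
<PRE>
#include &lt;mpi.h&gt;

/* Sketch: describe a 4x4 block starting at (2,2) of a 10x10 C-order array.
   Assumes MPI has been initialized. */
void make_subarray(MPI_Datatype *newtype)
{
    int sizes[2]    = { 10, 10 };
    int subsizes[2] = {  4,  4 };
    int starts[2]   = {  2,  2 };

    MPI_Type_create_subarray(2, sizes, subsizes, starts,
                             MPI_ORDER_C, MPI_DOUBLE, newtype);
    MPI_Type_commit(newtype);
}
</PRE>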
<P>
<H2>Communicators</H2>
Communicators are of type <tt>MPI_Comm</tt> in C, <tt>INTEGER</tt> in Fortran, and
<tt>Type(MPI_Comm)</tt> in Fortran 2008.
<DL>
<DT><B>MPI_COMM_WORLD <a name="MPI_COMM_WORLD"></a></B> <DD> Contains all of the processes

<DT><B>MPI_COMM_SELF <a name="MPI_COMM_SELF"></a></B> <DD> Contains only the calling process
</DL>
<P>
<H2>Kind of communicator for 'MPI_COMM_SPLIT_TYPE'</H2>
<DL>
<DT><B>MPI_COMM_TYPE_SHARED <a name="MPI_COMM_TYPE_SHARED"></a></B> <DD> All processes that can share memory are grouped into
the same communicator.
</DL>
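<P>
A minimal sketch; the key value of 0 and the use of <tt>MPI_COMM_WORLD</tt>
are illustrative choices.
<PRE>
#include &lt;mpi.h&gt;

/* Sketch: build a communicator containing the processes that can share
   memory with the caller.  Assumes MPI has been initialized. */
void split_shared(MPI_Comm *nodecomm)
{
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED,
                        0, MPI_INFO_NULL, nodecomm);
}
</PRE>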
<P>
<H2>Groups</H2>
Groups are of type <tt>MPI_Group</tt> in C, <tt>INTEGER</tt> in Fortran,
and <tt>Type(MPI_Group)</tt> in Fortran 2008.
<P>
<DL>
<DT><B>MPI_GROUP_EMPTY <a name="MPI_GROUP_EMPTY"></a></B> <DD> A group containing no members.
</DL>
<P>
<H2>Results of the compare operations on groups and communicators</H2>
<DL>
<DT><B>MPI_IDENT <a name="MPI_IDENT"></a></B> <DD> Identical

<DT><B>MPI_CONGRUENT  <a name="MPI_CONGRUENT"></a></B> <DD> (only for <tt>MPI_COMM_COMPARE</tt>) The underlying groups are
identical (same members in the same order), but the communicators differ by context

<DT><B>MPI_SIMILAR <a name="MPI_SIMILAR"></a></B> <DD> Same members, but in a different order

<DT><B>MPI_UNEQUAL <a name="MPI_UNEQUAL"></a></B> <DD> Different
</DL>
<P>
<P>
<H2>Collective operations</H2>
The collective combination operations (e.g., <tt>MPI_REDUCE</tt>, <tt>MPI_ALLREDUCE</tt>,
<tt>MPI_REDUCE_SCATTER</tt>, and <tt>MPI_SCAN</tt>) take a combination operation.
This operation is of type <tt>MPI_Op</tt> in C, of type <tt>INTEGER</tt> in Fortran,
and of type <tt>Type(MPI_Op)</tt> in Fortran 2008.
The predefined operations are
<P>
<DL>
<DT><B>MPI_MAX <a name="MPI_MAX"></a></B> <DD> return the maximum

<DT><B>MPI_MIN <a name="MPI_MIN"></a></B> <DD> return the minimum

<DT><B>MPI_SUM <a name="MPI_SUM"></a></B> <DD> return the sum

<DT><B>MPI_PROD <a name="MPI_PROD"></a></B> <DD> return the product

<DT><B>MPI_LAND <a name="MPI_LAND"></a></B> <DD> return the logical and

<DT><B>MPI_BAND <a name="MPI_BAND"></a></B> <DD> return the bitwise and

<DT><B>MPI_LOR <a name="MPI_LOR"></a></B> <DD> return the logical or

<DT><B>MPI_BOR <a name="MPI_BOR"></a></B> <DD> return the bitwise or

<DT><B>MPI_LXOR <a name="MPI_LXOR"></a></B> <DD> return the logical exclusive or

<DT><B>MPI_BXOR <a name="MPI_BXOR"></a></B> <DD> return the bitwise exclusive or

<DT><B>MPI_MINLOC <a name="MPI_MINLOC"></a></B> <DD> return the minimum and its location (more precisely, the value
of the second element of the pair whose first element holds the
minimum)

<DT><B>MPI_MAXLOC <a name="MPI_MAXLOC"></a></B> <DD> return the maximum and the location

<DT><B>MPI_REPLACE <a name="MPI_REPLACE"></a></B> <DD> replace the value at the target with the value from the origin
(for use only in RMA accumulate operations such as <tt>MPI_ACCUMULATE</tt>)

<DT><B>MPI_NO_OP <a name="MPI_NO_OP"></a></B> <DD> perform no operation (for use in RMA operations such as
<tt>MPI_GET_ACCUMULATE</tt> to read the target value without modifying it)
</DL>
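<P>
A minimal sketch of a predefined operation in use; the single <tt>int</tt>
being summed is illustrative.
<PRE>
#include &lt;mpi.h&gt;

/* Sketch: sum one int per process and return the result on every process.
   Assumes MPI has been initialized. */
int global_sum(int local)
{
    int total;

    MPI_Allreduce(&amp;local, &amp;total, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    return total;
}
</PRE>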
<P>
<H2>Notes on collective operations</H2>
<P>
The reduction functions (<tt>MPI_Op</tt>) do not return an error value.  As a result,
if the functions detect an error, all they can do is either call <tt>MPI_Abort
</tt>or silently skip the problem.  Thus, if you change the error handler from
<tt>MPI_ERRORS_ARE_FATAL</tt> to something else, for example, <tt>MPI_ERRORS_RETURN</tt>,
then no error may be indicated.
<P>
The reason for this is the performance cost of ensuring that
all collective routines return the same error value.
<P>
Note that not all datatypes are valid for these functions.  For example,
<tt>MPI_COMPLEX</tt> is not valid for <tt>MPI_MAX</tt> and <tt>MPI_MIN</tt>.  In addition, the MPI
1.1 standard did not include the C types <tt>MPI_CHAR</tt> and <tt>MPI_UNSIGNED_CHAR
</tt>among the lists of arithmetic types for operations like <tt>MPI_SUM</tt>.  However,
since the C type <tt>char</tt> is an integer type (like <tt>short</tt>), it should have been
included.  The MPI Forum will probably include <tt>char</tt> and <tt>unsigned char
</tt>as a clarification to MPI 1.1; until then, users are advised that MPI
implementations may not accept <tt>MPI_CHAR</tt> and <tt>MPI_UNSIGNED_CHAR</tt> as valid
datatypes for <tt>MPI_SUM</tt>, <tt>MPI_PROD</tt>, etc.  MPICH does allow these datatypes.
<P>
<H2>Permanent key values</H2>
These are the same in C and Fortran
<P>
<DL>
<DT><B>MPI_TAG_UB <a name="MPI_TAG_UB"></a></B> <DD> Largest tag value

<DT><B>MPI_HOST <a name="MPI_HOST"></a></B> <DD> Rank of process that is host, if any

<DT><B>MPI_IO <a name="MPI_IO"></a></B> <DD> Rank of process that can do I/O

<DT><B>MPI_WTIME_IS_GLOBAL <a name="MPI_WTIME_IS_GLOBAL"></a></B> <DD> Has value 1 if <tt>MPI_WTIME</tt> is globally synchronized.

<DT><B>MPI_UNIVERSE_SIZE <a name="MPI_UNIVERSE_SIZE"></a></B> <DD> Number of available processes.  See the standard for
a description of limitations on this value

<DT><B>MPI_LASTUSEDCODE <a name="MPI_LASTUSEDCODE"></a></B> <DD> Largest MPI error class or code currently in use (this grows as
user-defined error classes and codes are added)

<DT><B>MPI_APPNUM <a name="MPI_APPNUM"></a></B> <DD> Application number, starting from 0.  See the standard for
<tt>MPI_COMM_SPAWN_MULTIPLE</tt> and <tt>mpiexec</tt> for details
</DL>
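<P>
A minimal sketch of querying a permanent key value from C; note that
predefined attributes are returned as a pointer to the value.
<PRE>
#include &lt;mpi.h&gt;
#include &lt;stdio.h&gt;

/* Sketch: query the largest usable tag value.
   Assumes MPI has been initialized. */
void show_tag_ub(void)
{
    int *tag_ub, flag;

    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_TAG_UB, &amp;tag_ub, &amp;flag);
    if (flag)
        printf("largest tag value: %d\n", *tag_ub);
}
</PRE>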
<P>
<H2>Null objects</H2>
<DL>
<DT><B>MPI_COMM_NULL          <a name="MPI_COMM_NULL"></a></B> <DD> Null communicator

<DT><B>MPI_OP_NULL            <a name="MPI_OP_NULL"></a></B> <DD> Null operation

<DT><B>MPI_GROUP_NULL         <a name="MPI_GROUP_NULL"></a></B> <DD> Null group

<DT><B>MPI_DATATYPE_NULL      <a name="MPI_DATATYPE_NULL"></a></B> <DD> Null datatype

<DT><B>MPI_REQUEST_NULL       <a name="MPI_REQUEST_NULL"></a></B> <DD> Null request

<DT><B>MPI_ERRHANDLER_NULL    <a name="MPI_ERRHANDLER_NULL"></a></B> <DD> Null error handler

<DT><B>MPI_WIN_NULL           <a name="MPI_WIN_NULL"></a></B> <DD> Null window handle

<DT><B>MPI_FILE_NULL          <a name="MPI_FILE_NULL"></a></B> <DD> Null file handle

<DT><B>MPI_INFO_NULL          <a name="MPI_INFO_NULL"></a></B> <DD> Null info handle

<DT><B>MPI_MESSAGE_NULL       <a name="MPI_MESSAGE_NULL"></a></B> <DD> Null message handle

<DT><B>MPI_ARGV_NULL          <a name="MPI_ARGV_NULL"></a></B> <DD> Empty ARGV value for spawn commands

<DT><B>MPI_ARGVS_NULL         <a name="MPI_ARGVS_NULL"></a></B> <DD> Empty ARGV array for spawn-multiple command

<DT><B>MPI_T_ENUM_NULL        <a name="MPI_T_ENUM_NULL"></a></B> <DD> Null MPI_T enum

<DT><B>MPI_T_CVAR_HANDLE_NULL <a name="MPI_T_CVAR_HANDLE_NULL"></a></B> <DD> Null MPI_T control variable handle

<DT><B>MPI_T_PVAR_HANDLE_NULL <a name="MPI_T_PVAR_HANDLE_NULL"></a></B> <DD> Null MPI_T performance variable handle

<DT><B>MPI_T_PVAR_SESSION_NULL<a name="MPI_T_PVAR_SESSION_NULL"></a></B> <DD> Null MPI_T performance variable session handle
</DL>
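<P>
A minimal sketch of testing a handle against its null value; the use of
<tt>MPI_UNDEFINED</tt> as the split color is illustrative.
<PRE>
#include &lt;mpi.h&gt;

/* Sketch: processes that pass MPI_UNDEFINED as the color get MPI_COMM_NULL
   instead of a new communicator.  Assumes MPI has been initialized. */
void split_even_ranks(void)
{
    int rank;
    MPI_Comm evencomm;

    MPI_Comm_rank(MPI_COMM_WORLD, &amp;rank);
    MPI_Comm_split(MPI_COMM_WORLD,
                   (rank % 2 == 0) ? 0 : MPI_UNDEFINED, rank, &amp;evencomm);
    if (evencomm != MPI_COMM_NULL) {
        /* ... use evencomm ... */
        MPI_Comm_free(&amp;evencomm);
    }
}
</PRE>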
<P>
<H2>Predefined Constants</H2>
<DL>
<DT><B>MPI_MAX_PROCESSOR_NAME         <a name="MPI_MAX_PROCESSOR_NAME"></a></B> <DD> Maximum length of name returned by
<tt>MPI_GET_PROCESSOR_NAME
</tt>
<DT><B>MPI_MAX_ERROR_STRING           <a name="MPI_MAX_ERROR_STRING"></a></B> <DD> Maximum length of string returned by
<tt>MPI_ERROR_STRING
</tt>
<DT><B>MPI_MAX_LIBRARY_VERSION_STRING <a name="MPI_MAX_LIBRARY_VERSION_STRING"></a></B> <DD> Maximum length of string returned by
<tt>MPI_GET_LIBRARY_VERSION</tt>

<DT><B>MPI_MAX_PORT_NAME              <a name="MPI_MAX_PORT_NAME"></a></B> <DD> Maximum length of a port

<DT><B>MPI_MAX_OBJECT_NAME            <a name="MPI_MAX_OBJECT_NAME"></a></B> <DD> Maximum length of an object name (as set by, e.g., <tt>MPI_COMM_SET_NAME</tt>)

<DT><B>MPI_MAX_INFO_KEY               <a name="MPI_MAX_INFO_KEY"></a></B> <DD> Maximum length of an info key

<DT><B>MPI_MAX_INFO_VAL               <a name="MPI_MAX_INFO_VAL"></a></B> <DD> Maximum length of an info value

<DT><B>MPI_UNDEFINED                  <a name="MPI_UNDEFINED"></a></B> <DD> Used by many routines to indicate an
undefined or unknown integer value

<DT><B>MPI_UNDEFINED_RANK             <a name="MPI_UNDEFINED_RANK"></a></B> <DD> Unknown rank

<DT><B>MPI_KEYVAL_INVALID             <a name="MPI_KEYVAL_INVALID"></a></B> <DD> Special keyval that may be used to detect
uninitialized keyvals.

<DT><B>MPI_BSEND_OVERHEAD             <a name="MPI_BSEND_OVERHEAD"></a></B> <DD> Add this to the size of an <tt>MPI_BSEND
</tt>buffer for each outstanding message

<DT><B>MPI_PROC_NULL                  <a name="MPI_PROC_NULL"></a></B> <DD> This rank may be used to send to, or receive from, no one.

<DT><B>MPI_ANY_SOURCE                 <a name="MPI_ANY_SOURCE"></a></B> <DD> In a receive, accept a message from anyone.

<DT><B>MPI_ANY_TAG                    <a name="MPI_ANY_TAG"></a></B> <DD> In a receive, accept a message with any tag value.

<DT><B>MPI_BOTTOM                     <a name="MPI_BOTTOM"></a></B> <DD> May be used to indicate the bottom of the address space

<DT><B>MPI_IN_PLACE                   <a name="MPI_IN_PLACE"></a></B> <DD> Special location for buffer in some
collective communication routines

<DT><B>MPI_VERSION                    <a name="MPI_VERSION"></a></B> <DD> Numeric value of MPI version (e.g., 3)

<DT><B>MPI_SUBVERSION                 <a name="MPI_SUBVERSION"></a></B> <DD> Numeric value of MPI subversion (e.g., 1)
</DL>
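<P>
A minimal sketch of <tt>MPI_IN_PLACE</tt>, which lets a collective operation
use one buffer as both input and output.
<PRE>
#include &lt;mpi.h&gt;

/* Sketch: sum an array in place across all processes; every process
   passes MPI_IN_PLACE.  Assumes MPI has been initialized. */
void sum_in_place(double v[], int n)
{
    MPI_Allreduce(MPI_IN_PLACE, v, n, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
}
</PRE>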
<P>
<H2>Topology types</H2>
<DL>
<DT><B>MPI_CART       <a name="MPI_CART"></a></B> <DD> Cartesian grid

<DT><B>MPI_GRAPH      <a name="MPI_GRAPH"></a></B> <DD> General graph

<DT><B>MPI_DIST_GRAPH <a name="MPI_DIST_GRAPH"></a></B> <DD> General distributed graph
</DL>
<P>
<H2>Special values for distributed graph</H2>
<DL>
<DT><B>MPI_UNWEIGHTED    <a name="MPI_UNWEIGHTED"></a></B> <DD> Indicates that the edges are unweighted

<DT><B>MPI_WEIGHTS_EMPTY <a name="MPI_WEIGHTS_EMPTY"></a></B> <DD> Special address that indicates no array of weights
information
</DL>
<P>
<H2>File Modes</H2>
<DL>
<DT><B>MPI_MODE_RDONLY          <a name="MPI_MODE_RDONLY"></a></B> <DD> Read only

<DT><B>MPI_MODE_RDWR            <a name="MPI_MODE_RDWR"></a></B> <DD> Read and write

<DT><B>MPI_MODE_WRONLY          <a name="MPI_MODE_WRONLY"></a></B> <DD> Write only

<DT><B>MPI_MODE_CREATE          <a name="MPI_MODE_CREATE"></a></B> <DD> Create the file if it does not exist

<DT><B>MPI_MODE_EXCL            <a name="MPI_MODE_EXCL"></a></B> <DD> It is an error if the file already exists when it is
created

<DT><B>MPI_MODE_DELETE_ON_CLOSE <a name="MPI_MODE_DELETE_ON_CLOSE"></a></B> <DD> Delete the file on close

<DT><B>MPI_MODE_UNIQUE_OPEN     <a name="MPI_MODE_UNIQUE_OPEN"></a></B> <DD> The file will not be concurrently opened elsewhere

<DT><B>MPI_MODE_APPEND          <a name="MPI_MODE_APPEND"></a></B> <DD> The initial position of all file pointers is at
the end of the file

<DT><B>MPI_MODE_SEQUENTIAL      <a name="MPI_MODE_SEQUENTIAL"></a></B> <DD> File will only be accessed sequentially
</DL>
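<P>
The mode flags are bit flags and are combined with a bitwise OR.  A minimal
sketch follows (the file name is illustrative).
<PRE>
#include &lt;mpi.h&gt;

/* Sketch: create (if necessary) and open a file for writing.
   Assumes MPI has been initialized; "out.dat" is an illustrative name. */
void open_for_write(MPI_File *fh)
{
    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, fh);
}
</PRE>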
<P>
<H2>File Displacement</H2>
<DL>
<DT><B>MPI_DISPLACEMENT_CURRENT <a name="MPI_DISPLACEMENT_CURRENT"></a></B> <DD> Use with files opened with mode
<tt>MPI_MODE_SEQUENTIAL</tt> in calls to <tt>MPI_FILE_SET_VIEW
</tt>
</DL>
<P>
<H2>File Positioning</H2>
<DL>
<DT><B>MPI_SEEK_SET             <a name="MPI_SEEK_SET"></a></B> <DD> Set the pointer to <tt>offset
</tt>
<DT><B>MPI_SEEK_CUR             <a name="MPI_SEEK_CUR"></a></B> <DD> Set the pointer to the current position plus <tt>offset
</tt>
<DT><B>MPI_SEEK_END             <a name="MPI_SEEK_END"></a></B> <DD> Set the pointer to the end of the file plus <tt>offset
</tt>
</DL>
<P>
<H2>Window attributes</H2>
<DL>
<DT><B>MPI_WIN_BASE <a name="MPI_WIN_BASE"></a></B> <DD> window base address.

<DT><B>MPI_WIN_SIZE <a name="MPI_WIN_SIZE"></a></B> <DD> window size, in bytes

<DT><B>MPI_WIN_DISP_UNIT <a name="MPI_WIN_DISP_UNIT"></a></B> <DD> displacement unit associated with the window

<DT><B>MPI_WIN_CREATE_FLAVOR <a name="MPI_WIN_CREATE_FLAVOR"></a></B> <DD> how the window was created

<DT><B>MPI_WIN_MODEL <a name="MPI_WIN_MODEL"></a></B> <DD> memory model for window
</DL>
<P>
<H2>Window flavors</H2>
<DL>
<DT><B>MPI_WIN_FLAVOR_CREATE   <a name="MPI_WIN_FLAVOR_CREATE"></a></B> <DD> Window was created with MPI_WIN_CREATE.

<DT><B>MPI_WIN_FLAVOR_ALLOCATE <a name="MPI_WIN_FLAVOR_ALLOCATE"></a></B> <DD> Window was created with MPI_WIN_ALLOCATE.

<DT><B>MPI_WIN_FLAVOR_DYNAMIC  <a name="MPI_WIN_FLAVOR_DYNAMIC"></a></B> <DD> Window was created with MPI_WIN_CREATE_DYNAMIC.

<DT><B>MPI_WIN_FLAVOR_SHARED   <a name="MPI_WIN_FLAVOR_SHARED"></a></B> <DD> Window was created with MPI_WIN_ALLOCATE_SHARED.
</DL>
<P>
<H2>Window Memory Model</H2>
<DL>
<DT><B>MPI_WIN_SEPARATE <a name="MPI_WIN_SEPARATE"></a></B> <DD> Separate public and private copies of window memory

<DT><B>MPI_WIN_UNIFIED <a name="MPI_WIN_UNIFIED"></a></B> <DD> The public and private copies are identical (by which
we mean that updates are eventually observed without additional RMA operations)
</DL>
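<P>
A minimal sketch of querying the memory model of a window; the window size
and displacement unit are illustrative.
<PRE>
#include &lt;mpi.h&gt;
#include &lt;stdio.h&gt;

/* Sketch: allocate a window and check its memory model.
   Assumes MPI has been initialized. */
void check_model(void)
{
    double  *base;
    MPI_Win  win;
    int     *model, flag;

    MPI_Win_allocate((MPI_Aint)(1024 * sizeof(double)), (int)sizeof(double),
                     MPI_INFO_NULL, MPI_COMM_WORLD, &amp;base, &amp;win);
    MPI_Win_get_attr(win, MPI_WIN_MODEL, &amp;model, &amp;flag);
    if (flag &amp;&amp; *model == MPI_WIN_UNIFIED)
        printf("unified memory model\n");
    MPI_Win_free(&amp;win);
}
</PRE>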
<P>
<H2>Window Lock Types</H2>
<DL>
<DT><B>MPI_LOCK_EXCLUSIVE <a name="MPI_LOCK_EXCLUSIVE"></a></B> <DD> Only one process at a time will execute accesses
within the lock

<DT><B>MPI_LOCK_SHARED <a name="MPI_LOCK_SHARED"></a></B> <DD> Not exclusive; multiple processes may execute accesses
within the lock
</DL>
<P>
<H2>Window Assertions</H2>
See section 11.5 in MPI 3.1 for a detailed description of each of these
assertion values.
<DL>
<DT><B>MPI_MODE_NOCHECK      <a name="MPI_MODE_NOCHECK"></a></B> <DD> The matching calls to MPI_WIN_POST or MPI_WIN_START
have already completed, or no process holds, or will attempt to acquire, a
conflicting lock.

<DT><B>MPI_MODE_NOSTORE      <a name="MPI_MODE_NOSTORE"></a></B> <DD> The local window has not been updated by stores
since the last synchronization

<DT><B>MPI_MODE_NOPUT        <a name="MPI_MODE_NOPUT"></a></B> <DD> The local window will not be updated by put or
accumulate until the next synchronization

<DT><B>MPI_MODE_NOPRECEDE    <a name="MPI_MODE_NOPRECEDE"></a></B> <DD> The fence does not complete any locally issued RMA
calls

<DT><B>MPI_MODE_NOSUCCEED    <a name="MPI_MODE_NOSUCCEED"></a></B> <DD> The fence does not start any locally issued RMA calls
</DL>
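<P>
A minimal sketch of passing assertions to <tt>MPI_WIN_FENCE</tt>; an assert
value of 0 is always correct, and the assertions below are optional
optimizations.
<PRE>
#include &lt;mpi.h&gt;

/* Sketch: a fence epoch whose only RMA calls lie between the two fences.
   Assumes MPI has been initialized, all processes in the window's group
   call this together, and src/target are valid. */
void put_epoch(MPI_Win win, double *src, int target)
{
    MPI_Win_fence(MPI_MODE_NOPRECEDE, win);
    MPI_Put(src, 1, MPI_DOUBLE, target, 0, 1, MPI_DOUBLE, win);
    MPI_Win_fence(MPI_MODE_NOSUCCEED, win);
}
</PRE>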
<P>
<H2>Predefined Info Object</H2>
<DL>
<DT><B>MPI_INFO_ENV <a name="MPI_INFO_ENV"></a></B> <DD> Contains the execution environment
</DL>
<P>
<H2>MPI Status</H2>
The <tt>MPI_Status</tt> datatype is a structure in C.  The three elements for use
by programmers are
<DL>
<DT><B>MPI_SOURCE <a name="MPI_SOURCE"></a></B> <DD> Who sent the message

<DT><B>MPI_TAG <a name="MPI_TAG"></a></B> <DD> What tag the message was sent with

<DT><B>MPI_ERROR <a name="MPI_ERROR"></a></B> <DD> Any error return (only when the error returned by the routine
has error class <tt>MPI_ERR_IN_STATUS</tt>)
</DL>
<P>
<DL>
<DT><B>MPI_STATUS_IGNORE   <a name="MPI_STATUS_IGNORE"></a></B> <DD> Ignore a single <tt>MPI_Status</tt> argument

<DT><B>MPI_STATUSES_IGNORE <a name="MPI_STATUSES_IGNORE"></a></B> <DD> Ignore an array of <tt>MPI_Status
</tt>
</DL>
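<P>
A minimal sketch of examining the status fields after a wildcard receive; it
assumes some process sends a matching single <tt>int</tt>.
<PRE>
#include &lt;mpi.h&gt;
#include &lt;stdio.h&gt;

/* Sketch: receive from any source with any tag, then inspect the status.
   Assumes MPI has been initialized. */
void recv_any(void)
{
    int        value;
    MPI_Status status;

    MPI_Recv(&amp;value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
             MPI_COMM_WORLD, &amp;status);
    printf("got %d from rank %d with tag %d\n",
           value, status.MPI_SOURCE, status.MPI_TAG);
}
</PRE>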
<P>
<H2>Special value for error codes array</H2>
<DL>
<DT><B>MPI_ERRCODES_IGNORE <a name="MPI_ERRCODES_IGNORE"></a></B> <DD> Ignore an array of error codes
</DL>
<P>
<H2>MPI_T Constants</H2>
<DL>
<DT><B>MPI_T_VERBOSITY_USER_BASIC     <a name="MPI_T_VERBOSITY_USER_BASIC"></a></B> <DD> Basic information of interest to users

<DT><B>MPI_T_VERBOSITY_USER_DETAIL    <a name="MPI_T_VERBOSITY_USER_DETAIL"></a></B> <DD> Detailed information of interest to users

<DT><B>MPI_T_VERBOSITY_USER_ALL       <a name="MPI_T_VERBOSITY_USER_ALL"></a></B> <DD> All remaining information of interest to users

<DT><B>MPI_T_VERBOSITY_TUNER_BASIC    <a name="MPI_T_VERBOSITY_TUNER_BASIC"></a></B> <DD> Basic information required for tuning

<DT><B>MPI_T_VERBOSITY_TUNER_DETAIL   <a name="MPI_T_VERBOSITY_TUNER_DETAIL"></a></B> <DD> Detailed information required for tuning

<DT><B>MPI_T_VERBOSITY_TUNER_ALL      <a name="MPI_T_VERBOSITY_TUNER_ALL"></a></B> <DD> All remaining information required for tuning

<DT><B>MPI_T_VERBOSITY_MPIDEV_BASIC   <a name="MPI_T_VERBOSITY_MPIDEV_BASIC"></a></B> <DD> Basic information for MPI implementors

<DT><B>MPI_T_VERBOSITY_MPIDEV_DETAIL  <a name="MPI_T_VERBOSITY_MPIDEV_DETAIL"></a></B> <DD> Detailed information for MPI implementors

<DT><B>MPI_T_VERBOSITY_MPIDEV_ALL     <a name="MPI_T_VERBOSITY_MPIDEV_ALL"></a></B> <DD> All remaining information for MPI implementors

<DT><B>MPI_T_BIND_NO_OBJECT           <a name="MPI_T_BIND_NO_OBJECT"></a></B> <DD> Applies globally to entire MPI process

<DT><B>MPI_T_BIND_MPI_COMM            <a name="MPI_T_BIND_MPI_COMM"></a></B> <DD> MPI communicators

<DT><B>MPI_T_BIND_MPI_DATATYPE        <a name="MPI_T_BIND_MPI_DATATYPE"></a></B> <DD> MPI datatypes

<DT><B>MPI_T_BIND_MPI_ERRHANDLER      <a name="MPI_T_BIND_MPI_ERRHANDLER"></a></B> <DD> MPI error handlers

<DT><B>MPI_T_BIND_MPI_FILE            <a name="MPI_T_BIND_MPI_FILE"></a></B> <DD> MPI file handles

<DT><B>MPI_T_BIND_MPI_GROUP           <a name="MPI_T_BIND_MPI_GROUP"></a></B> <DD> MPI groups

<DT><B>MPI_T_BIND_MPI_OP              <a name="MPI_T_BIND_MPI_OP"></a></B> <DD> MPI reduction operators

<DT><B>MPI_T_BIND_MPI_REQUEST         <a name="MPI_T_BIND_MPI_REQUEST"></a></B> <DD> MPI requests

<DT><B>MPI_T_BIND_MPI_WIN             <a name="MPI_T_BIND_MPI_WIN"></a></B> <DD> MPI windows for one-sided communication

<DT><B>MPI_T_BIND_MPI_MESSAGE         <a name="MPI_T_BIND_MPI_MESSAGE"></a></B> <DD> MPI message object

<DT><B>MPI_T_BIND_MPI_INFO            <a name="MPI_T_BIND_MPI_INFO"></a></B> <DD> MPI info object

<DT><B>MPI_T_SCOPE_CONSTANT           <a name="MPI_T_SCOPE_CONSTANT"></a></B> <DD> read-only; the value is constant

<DT><B>MPI_T_SCOPE_READONLY           <a name="MPI_T_SCOPE_READONLY"></a></B> <DD> read-only; cannot be written, but may
change

<DT><B>MPI_T_SCOPE_LOCAL              <a name="MPI_T_SCOPE_LOCAL"></a></B> <DD> may be writable; writing is a local
operation

<DT><B>MPI_T_SCOPE_GROUP              <a name="MPI_T_SCOPE_GROUP"></a></B> <DD> may be writable; writing must be done on a
group of processes, and all processes in the group must be set to consistent values

<DT><B>MPI_T_SCOPE_GROUP_EQ           <a name="MPI_T_SCOPE_GROUP_EQ"></a></B> <DD> may be writable; writing must be done on a
group of processes, and all processes in the group must be set to the same value

<DT><B>MPI_T_SCOPE_ALL                <a name="MPI_T_SCOPE_ALL"></a></B> <DD> may be writable; writing must be done on all
connected processes, and all must be set to consistent values

<DT><B>MPI_T_SCOPE_ALL_EQ             <a name="MPI_T_SCOPE_ALL_EQ"></a></B> <DD> may be writable; writing must be done on all
connected processes, and all must be set to the same value

<DT><B>MPI_T_PVAR_CLASS_STATE         <a name="MPI_T_PVAR_CLASS_STATE"></a></B> <DD> set of discrete states (MPI_INT)

<DT><B>MPI_T_PVAR_CLASS_LEVEL         <a name="MPI_T_PVAR_CLASS_LEVEL"></a></B> <DD> utilization level of a resource

<DT><B>MPI_T_PVAR_CLASS_SIZE          <a name="MPI_T_PVAR_CLASS_SIZE"></a></B> <DD> size of a resource

<DT><B>MPI_T_PVAR_CLASS_PERCENTAGE    <a name="MPI_T_PVAR_CLASS_PERCENTAGE"></a></B> <DD> percentage utilization of a resource

<DT><B>MPI_T_PVAR_CLASS_HIGHWATERMARK <a name="MPI_T_PVAR_CLASS_HIGHWATERMARK"></a></B> <DD> high watermark of a resource

<DT><B>MPI_T_PVAR_CLASS_LOWWATERMARK  <a name="MPI_T_PVAR_CLASS_LOWWATERMARK"></a></B> <DD> low watermark of a resource

<DT><B>MPI_T_PVAR_CLASS_COUNTER       <a name="MPI_T_PVAR_CLASS_COUNTER"></a></B> <DD> number of occurrences of an event

<DT><B>MPI_T_PVAR_CLASS_AGGREGATE     <a name="MPI_T_PVAR_CLASS_AGGREGATE"></a></B> <DD> aggregate value over an event (e.g.,
sum of all memory allocations)

<DT><B>MPI_T_PVAR_CLASS_TIMER         <a name="MPI_T_PVAR_CLASS_TIMER"></a></B> <DD> aggregate time spent executing the event

<DT><B>MPI_T_PVAR_CLASS_GENERIC       <a name="MPI_T_PVAR_CLASS_GENERIC"></a></B> <DD> used for any other type of performance
variable
</DL>
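<P>
A minimal sketch of the control-variable side of the MPI_T interface, in
which these verbosity, binding, and scope values are reported; the buffer
sizes are illustrative.
<PRE>
#include &lt;mpi.h&gt;
#include &lt;stdio.h&gt;

/* Sketch: list the names of the available MPI_T control variables. */
void list_cvars(void)
{
    int i, num, provided;

    MPI_T_init_thread(MPI_THREAD_SINGLE, &amp;provided);
    MPI_T_cvar_get_num(&amp;num);
    for (i = 0; i &lt; num; i++) {
        char name[256], desc[256];
        int  namelen = 256, desclen = 256;
        int  verbosity, binding, scope;
        MPI_Datatype dtype;
        MPI_T_enum   enumtype;

        MPI_T_cvar_get_info(i, name, &amp;namelen, &amp;verbosity, &amp;dtype,
                            &amp;enumtype, desc, &amp;desclen, &amp;binding, &amp;scope);
        printf("cvar %d: %s\n", i, name);
    }
    MPI_T_finalize();
}
</PRE>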
<P>
<H2>Thread levels</H2>
<DL>
<DT><B>MPI_THREAD_SINGLE     <a name="MPI_THREAD_SINGLE"></a></B> <DD> Only one thread executes

<DT><B>MPI_THREAD_FUNNELED   <a name="MPI_THREAD_FUNNELED"></a></B> <DD> Only the main thread makes MPI calls

<DT><B>MPI_THREAD_SERIALIZED <a name="MPI_THREAD_SERIALIZED"></a></B> <DD> Only one thread at a time makes MPI calls

<DT><B>MPI_THREAD_MULTIPLE   <a name="MPI_THREAD_MULTIPLE"></a></B> <DD> Multiple threads may make MPI calls
</DL>
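<P>
A minimal sketch of requesting a thread level at initialization; the
implementation may grant a lower level, so the provided value should be
checked.
<PRE>
#include &lt;mpi.h&gt;
#include &lt;stdio.h&gt;

int main(int argc, char **argv)
{
    int provided;

    /* Ask for full multithreading; check what was actually granted. */
    MPI_Init_thread(&amp;argc, &amp;argv, MPI_THREAD_MULTIPLE, &amp;provided);
    if (provided &lt; MPI_THREAD_MULTIPLE)
        printf("MPI_THREAD_MULTIPLE not available (got %d)\n", provided);
    MPI_Finalize();
    return 0;
}
</PRE>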
<P>
<H2>Special MPI types and functions</H2>
<P>
<DL>
<DT><B>MPI_Aint   <a name="MPI_Aint"></a></B> <DD> C type that holds any valid address.

<DT><B>MPI_Count  <a name="MPI_Count"></a></B> <DD> C type that holds any valid count.

<DT><B>MPI_Offset <a name="MPI_Offset"></a></B> <DD> C type that holds any valid file offset.

<DT><B>MPI_Handler_function <a name="MPI_Handler_function"></a></B> <DD> C function for handling errors (see
<tt>MPI_Errhandler_create</tt>).

<DT><B>MPI_User_function <a name="MPI_User_function"></a></B> <DD> C function to combine values (see collective operations
and <tt>MPI_Op_create</tt>)

<DT><B>MPI_Copy_function <a name="MPI_Copy_function"></a></B> <DD> Function to copy attributes (see <tt>MPI_Keyval_create</tt>)

<DT><B>MPI_Delete_function <a name="MPI_Delete_function"></a></B> <DD> Function to delete attributes (see <tt>MPI_Keyval_create</tt>)

<DT><B>MPI_ERRORS_ARE_FATAL <a name="MPI_ERRORS_ARE_FATAL"></a></B> <DD> Error handler that forces exit on error

<DT><B>MPI_ERRORS_RETURN <a name="MPI_ERRORS_RETURN"></a></B> <DD> Error handler that returns error codes (as value of
MPI routine in C and through last argument in Fortran)
</DL>
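<P>
A minimal sketch of switching <tt>MPI_COMM_WORLD</tt> to
<tt>MPI_ERRORS_RETURN</tt> and checking a return code.
<PRE>
#include &lt;mpi.h&gt;
#include &lt;stdio.h&gt;

/* Sketch: return error codes instead of aborting on error.
   Assumes MPI has been initialized. */
void enable_error_returns(void)
{
    int err;

    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
    err = MPI_Barrier(MPI_COMM_WORLD);
    if (err != MPI_SUCCESS)
        printf("MPI_Barrier failed with code %d\n", err);
}
</PRE>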
<P>
<H2>MPI Attribute Default Functions</H2>
<DL>
<DT><B>MPI_COMM_NULL_COPY_FN <a name="MPI_COMM_NULL_COPY_FN"></a></B> <DD> Predefined attribute copy function for communicators

<DT><B>MPI_COMM_NULL_DELETE_FN <a name="MPI_COMM_NULL_DELETE_FN"></a></B> <DD> Predefined attribute delete function for communicators

<DT><B>MPI_COMM_DUP_FN  <a name="MPI_COMM_DUP_FN"></a></B> <DD> Predefined attribute duplicate function for communicators

<DT><B>MPI_WIN_NULL_COPY_FN <a name="MPI_WIN_NULL_COPY_FN"></a></B> <DD> Predefined attribute copy function for windows

<DT><B>MPI_WIN_NULL_DELETE_FN <a name="MPI_WIN_NULL_DELETE_FN"></a></B> <DD> Predefined attribute delete function for windows

<DT><B>MPI_WIN_DUP_FN   <a name="MPI_WIN_DUP_FN"></a></B> <DD> Predefined attribute duplicate function for windows

<DT><B>MPI_TYPE_NULL_COPY_FN <a name="MPI_TYPE_NULL_COPY_FN"></a></B> <DD> Predefined attribute copy function for datatypes

<DT><B>MPI_TYPE_NULL_DELETE_FN <a name="MPI_TYPE_NULL_DELETE_FN"></a></B> <DD> Predefined attribute delete function for datatypes

<DT><B>MPI_TYPE_DUP_FN <a name="MPI_TYPE_DUP_FN"></a></B> <DD> Predefined attribute duplicate function for datatypes
</DL>
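<P>
A minimal sketch using the predefined null copy and delete functions when
creating a communicator keyval; the stored attribute value is illustrative.
<PRE>
#include &lt;mpi.h&gt;
#include &lt;stddef.h&gt;

/* Sketch: create a keyval whose attribute is neither copied when the
   communicator is duplicated nor specially cleaned up when it is deleted.
   Assumes MPI has been initialized. */
void make_keyval(int *keyval)
{
    static int payload = 42;   /* illustrative attribute value */

    MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, MPI_COMM_NULL_DELETE_FN,
                           keyval, NULL);
    MPI_Comm_set_attr(MPI_COMM_WORLD, *keyval, &amp;payload);
}
</PRE>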
<P>
<H2>MPI-1 Attribute Default Functions</H2>
<DL>
<DT><B>MPI_NULL_COPY_FN <a name="MPI_NULL_COPY_FN"></a></B> <DD> Predefined copy function

<DT><B>MPI_NULL_DELETE_FN <a name="MPI_NULL_DELETE_FN"></a></B> <DD> Predefined delete function

<DT><B>MPI_DUP_FN <a name="MPI_DUP_FN"></a></B> <DD> Predefined duplication function
</DL>
<P>
<H2>MPI Error classes</H2>
<DL>
<DT><B>MPI_SUCCESS               <a name="MPI_SUCCESS"></a></B> <DD> Successful return code

<DT><B>MPI_ERR_BUFFER            <a name="MPI_ERR_BUFFER"></a></B> <DD> Invalid buffer pointer

<DT><B>MPI_ERR_COUNT             <a name="MPI_ERR_COUNT"></a></B> <DD> Invalid count argument

<DT><B>MPI_ERR_TYPE              <a name="MPI_ERR_TYPE"></a></B> <DD> Invalid datatype argument

<DT><B>MPI_ERR_TAG               <a name="MPI_ERR_TAG"></a></B> <DD> Invalid tag argument

<DT><B>MPI_ERR_COMM              <a name="MPI_ERR_COMM"></a></B> <DD> Invalid communicator

<DT><B>MPI_ERR_RANK              <a name="MPI_ERR_RANK"></a></B> <DD> Invalid rank

<DT><B>MPI_ERR_ROOT              <a name="MPI_ERR_ROOT"></a></B> <DD> Invalid root

<DT><B>MPI_ERR_GROUP             <a name="MPI_ERR_GROUP"></a></B> <DD> Null group passed to function

<DT><B>MPI_ERR_OP                <a name="MPI_ERR_OP"></a></B> <DD> Invalid operation

<DT><B>MPI_ERR_TOPOLOGY          <a name="MPI_ERR_TOPOLOGY"></a></B> <DD> Invalid topology

<DT><B>MPI_ERR_DIMS              <a name="MPI_ERR_DIMS"></a></B> <DD> Illegal dimension argument

<DT><B>MPI_ERR_ARG               <a name="MPI_ERR_ARG"></a></B> <DD> Invalid argument

<DT><B>MPI_ERR_UNKNOWN           <a name="MPI_ERR_UNKNOWN"></a></B> <DD> Unknown error

<DT><B>MPI_ERR_TRUNCATE          <a name="MPI_ERR_TRUNCATE"></a></B> <DD> Message truncated on receive

<DT><B>MPI_ERR_OTHER             <a name="MPI_ERR_OTHER"></a></B> <DD> Other error; use <tt>MPI_ERROR_STRING</tt> to get more information

<DT><B>MPI_ERR_INTERN            <a name="MPI_ERR_INTERN"></a></B> <DD> Internal error code

<DT><B>MPI_ERR_IN_STATUS         <a name="MPI_ERR_IN_STATUS"></a></B> <DD> Look in status for error value

<DT><B>MPI_ERR_PENDING           <a name="MPI_ERR_PENDING"></a></B> <DD> Pending request

<DT><B>MPI_ERR_REQUEST           <a name="MPI_ERR_REQUEST"></a></B> <DD> Invalid <tt>MPI_Request</tt> handle

<DT><B>MPI_ERR_ACCESS            <a name="MPI_ERR_ACCESS"></a></B> <DD> Permission denied

<DT><B>MPI_ERR_AMODE             <a name="MPI_ERR_AMODE"></a></B> <DD> Error related to the amode passed to
<tt>MPI_FILE_OPEN
</tt>
<DT><B>MPI_ERR_BAD_FILE          <a name="MPI_ERR_BAD_FILE"></a></B> <DD> Invalid file name (e.g., path name too long)

<DT><B>MPI_ERR_CONVERSION        <a name="MPI_ERR_CONVERSION"></a></B> <DD> An error occurred in a user supplied data
conversion function

<DT><B>MPI_ERR_DUP_DATAREP       <a name="MPI_ERR_DUP_DATAREP"></a></B> <DD> Conversion functions could not be registered
because a data representation identifier that was already defined was passed
to <tt>MPI_REGISTER_DATAREP
</tt>
<DT><B>MPI_ERR_FILE_EXISTS       <a name="MPI_ERR_FILE_EXISTS"></a></B> <DD> File exists

<DT><B>MPI_ERR_FILE_IN_USE       <a name="MPI_ERR_FILE_IN_USE"></a></B> <DD> File operation could not be completed, as
the file is currently open by some process

<DT><B>MPI_ERR_FILE              <a name="MPI_ERR_FILE"></a></B> <DD> Invalid file handle

<DT><B>MPI_ERR_IO                <a name="MPI_ERR_IO"></a></B> <DD> Other I/O error

<DT><B>MPI_ERR_NO_SPACE          <a name="MPI_ERR_NO_SPACE"></a></B> <DD> Not enough space

<DT><B>MPI_ERR_NO_SUCH_FILE      <a name="MPI_ERR_NO_SUCH_FILE"></a></B> <DD> File does not exist

<DT><B>MPI_ERR_READ_ONLY         <a name="MPI_ERR_READ_ONLY"></a></B> <DD> Read-only file or file system

<DT><B>MPI_ERR_UNSUPPORTED_DATAREP <a name="MPI_ERR_UNSUPPORTED_DATAREP"></a></B> <DD> Unsupported datarep passed to
<tt>MPI_FILE_SET_VIEW
</tt>
<DT><B>MPI_ERR_INFO              <a name="MPI_ERR_INFO"></a></B> <DD> Invalid info argument

<DT><B>MPI_ERR_INFO_KEY          <a name="MPI_ERR_INFO_KEY"></a></B> <DD> Key longer than MPI_MAX_INFO_KEY

<DT><B>MPI_ERR_INFO_VALUE        <a name="MPI_ERR_INFO_VALUE"></a></B> <DD> Value longer than MPI_MAX_INFO_VAL

<DT><B>MPI_ERR_INFO_NOKEY        <a name="MPI_ERR_INFO_NOKEY"></a></B> <DD> Invalid key passed to MPI_INFO_DELETE

<DT><B>MPI_ERR_NAME              <a name="MPI_ERR_NAME"></a></B> <DD> Invalid service name passed to MPI_LOOKUP_NAME

<DT><B>MPI_ERR_NO_MEM            <a name="MPI_ERR_NO_MEM"></a></B> <DD> Alloc_mem could not allocate memory

<DT><B>MPI_ERR_NOT_SAME          <a name="MPI_ERR_NOT_SAME"></a></B> <DD> Collective argument not identical on all
processes, or collective routines called in a different order by different
processes

<DT><B>MPI_ERR_PORT              <a name="MPI_ERR_PORT"></a></B> <DD> Invalid port name passed to MPI_COMM_CONNECT

<DT><B>MPI_ERR_QUOTA             <a name="MPI_ERR_QUOTA"></a></B> <DD> Quota exceeded

<DT><B>MPI_ERR_SERVICE           <a name="MPI_ERR_SERVICE"></a></B> <DD> Invalid service name passed to MPI_UNPUBLISH_NAME

<DT><B>MPI_ERR_SPAWN             <a name="MPI_ERR_SPAWN"></a></B> <DD> Error in spawning processes

<DT><B>MPI_ERR_UNSUPPORTED_OPERATION <a name="MPI_ERR_UNSUPPORTED_OPERATION"></a></B> <DD> Unsupported operation, such as seeking on
a file which supports sequential access only

<DT><B>MPI_ERR_WIN               <a name="MPI_ERR_WIN"></a></B> <DD> Invalid win argument

<DT><B>MPI_ERR_BASE              <a name="MPI_ERR_BASE"></a></B> <DD> Invalid base passed to MPI_FREE_MEM

<DT><B>MPI_ERR_LOCKTYPE          <a name="MPI_ERR_LOCKTYPE"></a></B> <DD> Invalid locktype argument

<DT><B>MPI_ERR_KEYVAL            <a name="MPI_ERR_KEYVAL"></a></B> <DD> Erroneous attribute key

<DT><B>MPI_ERR_RMA_CONFLICT      <a name="MPI_ERR_RMA_CONFLICT"></a></B> <DD> Conflicting accesses to window

<DT><B>MPI_ERR_RMA_SYNC          <a name="MPI_ERR_RMA_SYNC"></a></B> <DD> Wrong synchronization of RMA calls

<DT><B>MPI_ERR_SIZE              <a name="MPI_ERR_SIZE"></a></B> <DD> Invalid size argument

<DT><B>MPI_ERR_DISP              <a name="MPI_ERR_DISP"></a></B> <DD> Invalid disp argument

<DT><B>MPI_ERR_ASSERT            <a name="MPI_ERR_ASSERT"></a></B> <DD> Invalid assert argument

<DT><B>MPI_ERR_RMA_RANGE         <a name="MPI_ERR_RMA_RANGE"></a></B> <DD> Target memory is not part of the window (in
the case of a window created with MPI_WIN_CREATE_DYNAMIC, target memory is
not attached)

<DT><B>MPI_ERR_RMA_ATTACH        <a name="MPI_ERR_RMA_ATTACH"></a></B> <DD> Memory cannot be attached (e.g., because of
resource exhaustion)

<DT><B>MPI_ERR_RMA_SHARED        <a name="MPI_ERR_RMA_SHARED"></a></B> <DD> Memory cannot be shared (e.g., some process in
the group of the specified communicator cannot expose shared memory)

<DT><B>MPI_ERR_RMA_FLAVOR        <a name="MPI_ERR_RMA_FLAVOR"></a></B> <DD> Passed window has the wrong flavor for the
called function

<DT><B>MPI_ERR_LASTCODE          <a name="MPI_ERR_LASTCODE"></a></B> <DD> Last error code -- always at end
</DL>
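<P>
A minimal sketch of converting an error code into its class and message text
(useful when <tt>MPI_ERRORS_RETURN</tt> is in effect).
<PRE>
#include &lt;mpi.h&gt;
#include &lt;stdio.h&gt;

/* Sketch: report an error code returned by an MPI call.
   Assumes MPI has been initialized. */
void report_error(int err)
{
    char msg[MPI_MAX_ERROR_STRING];
    int  msglen, errclass;

    MPI_Error_class(err, &amp;errclass);
    MPI_Error_string(err, msg, &amp;msglen);
    printf("error class %d: %s\n", errclass, msg);
}
</PRE>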
<P>
<H2>Error codes for MPI_T</H2>
<P>
<DL>
<DT><B>MPI_T_ERR_MEMORY            <a name="MPI_T_ERR_MEMORY"></a></B> <DD> Out of memory

<DT><B>MPI_T_ERR_NOT_INITIALIZED   <a name="MPI_T_ERR_NOT_INITIALIZED"></a></B> <DD> Interface not initialized

<DT><B>MPI_T_ERR_CANNOT_INIT       <a name="MPI_T_ERR_CANNOT_INIT"></a></B> <DD> Interface not in the state to be initialized

<DT><B>MPI_T_ERR_INVALID_INDEX     <a name="MPI_T_ERR_INVALID_INDEX"></a></B> <DD> The index is invalid or has been deleted

<DT><B>MPI_T_ERR_INVALID_ITEM      <a name="MPI_T_ERR_INVALID_ITEM"></a></B> <DD> Item index queried is out of range

<DT><B>MPI_T_ERR_INVALID_HANDLE    <a name="MPI_T_ERR_INVALID_HANDLE"></a></B> <DD> The handle is invalid

<DT><B>MPI_T_ERR_OUT_OF_HANDLES    <a name="MPI_T_ERR_OUT_OF_HANDLES"></a></B> <DD> No more handles available

<DT><B>MPI_T_ERR_OUT_OF_SESSIONS   <a name="MPI_T_ERR_OUT_OF_SESSIONS"></a></B> <DD> No more sessions available

<DT><B>MPI_T_ERR_INVALID_SESSION   <a name="MPI_T_ERR_INVALID_SESSION"></a></B> <DD> Session argument is not valid

<DT><B>MPI_T_ERR_CVAR_SET_NOT_NOW  <a name="MPI_T_ERR_CVAR_SET_NOT_NOW"></a></B> <DD> Cvar can't be set at this moment

<DT><B>MPI_T_ERR_CVAR_SET_NEVER    <a name="MPI_T_ERR_CVAR_SET_NEVER"></a></B> <DD> Cvar can't be set until end of execution

<DT><B>MPI_T_ERR_PVAR_NO_STARTSTOP <a name="MPI_T_ERR_PVAR_NO_STARTSTOP"></a></B> <DD> Pvar can't be started or stopped

<DT><B>MPI_T_ERR_PVAR_NO_WRITE     <a name="MPI_T_ERR_PVAR_NO_WRITE"></a></B> <DD> Pvar can't be written or reset

<DT><B>MPI_T_ERR_PVAR_NO_ATOMIC    <a name="MPI_T_ERR_PVAR_NO_ATOMIC"></a></B> <DD> Pvar can't be R/W atomically

<DT><B>MPI_T_ERR_INVALID_NAME      <a name="MPI_T_ERR_INVALID_NAME"></a></B> <DD> Name doesn't match

<DT><B>MPI_T_ERR_INVALID           <a name="MPI_T_ERR_INVALID"></a></B> <DD> Invalid use of the interface or bad parameter
value(s)
</DL>
<P>
</BODY></HTML>