
GASNet-EX 2018 Performance Examples

More recent GASNet performance results are available.


The following graphs show example performance results for GASNet-EX release 2018.9.0, measured in late 2018.

Many of these graphs are reprinted (with permission) from the following publication, which also contains further discussion of the study and results:
Bonachea D, Hargrove P. GASNet-EX: A High-Performance, Portable Communication Library for Exascale, Proceedings of Languages and Compilers for Parallel Computing (LCPC'18). Oct 2018. doi:10.25344/S4QP4W.

Test Methodology:

All tests use two physical nodes, with one core injecting communication operations to the remote node and all other cores idle. Hardware configuration details are provided in each section.

Jump to:

aries-conduit: Cray XC40 with Haswell CPUs, Cray XC40 with Xeon Phi CPUs
ibv-conduit: EDR InfiniBand with POWER9 CPUs, EDR InfiniBand with Haswell CPUs
gemini-conduit: Cray XK7
pami-conduit: IBM Blue Gene/Q

aries-conduit vs Cray MPI: on 'Cori (Phase-I)' at NERSC

Cori-I: Cray XC40, Cray Aries Interconnect, Node config: 2 x 16-core 2.3 GHz Intel "Haswell", PE 6.0.4, Intel C 18.0.1.163, Cray MPICH 7.7.0

[Figures: bandwidth and latency comparison plots]

aries-conduit vs Cray MPI: on 'Cori (Phase-II)' at NERSC

Cori-II: Cray XC40, Cray Aries Interconnect, Node config: 68-core 1.4 GHz Intel Phi "Knights Landing", PE 6.0.4, Intel C 18.0.1.163, Cray MPICH 7.7.0

[Figures: bandwidth and latency comparison plots]

ibv-conduit vs IBM Spectrum MPI: on 'Summit' at OLCF

Summit: Mellanox EDR InfiniBand, Node config: 2 x IBM POWER9, Red Hat Linux 7.5, GNU C 6.4.0, IBM Spectrum MPI 10.2.0.7-20180830
These are results for a single InfiniBand HCA.

[Figures: bandwidth and latency comparison plots]

We gratefully acknowledge the assistance of Geoffroy Vallee of ORNL, who collected the results on Summit.

ibv-conduit vs MVAPICH2: on 'Gomez' at the Joint Laboratory for System Evaluation (JLSE) at Argonne National Laboratory

Gomez: Mellanox EDR InfiniBand, Node config: 2 x Intel Xeon E7-8867v3 "Haswell-EX", Red Hat Linux 7.4, GNU C 4.8.5, MVAPICH2 2.3

[Figures: bandwidth and latency comparison plots]

gemini-conduit vs Cray MPI: on 'Titan' at Oak Ridge National Laboratory (OLCF)

Titan: Cray XK7, Cray Gemini Interconnect, Node config: 16-core 2.2 GHz AMD Opteron 6274 (GPUs not used), PE 5.2.82, PGI C 18.4, Cray MPICH 7.6.3

[Figures: bandwidth and latency comparison plots]

pami-conduit vs IBM MPI: on 'Cetus' at Argonne Leadership Computing Facility (ALCF)

Cetus: IBM Blue Gene/Q, 5D Torus Proprietary Interconnect, Node config: 16-core 1.6 GHz PowerPC64 A2, BG/Q driver V1R2M4, GCC 4.4.7, IBM MPI (V1R2M4, MPICH2 1.5 based)

[Figures: bandwidth and latency comparison plots]

MPI-3 RMA is not supported in the IBM MPI implementation for this system.

This research was funded in part by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration.

This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.

This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357.

This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
