SuperMIC

SuperMIC (pronounced "Super Mick") is an LSU supercomputer funded by a National Science Foundation (NSF) Major Research Instrumentation (MRI) award to the Center for Computation & Technology. Forty percent of its computational resources are reserved for participants in the Extreme Science and Engineering Discovery Environment (XSEDE) program, a national system of leadership-class HPC machines through which scientists share computing resources, data, and expertise.

SuperMIC has a theoretical peak performance of over 925 TF. It achieved 557 TF during acceptance testing, placing it at number 65 on the June 2014 Top500 list.

SuperMIC became operational on October 1, 2014. It contains a total of 382 nodes, each with two 10-core 2.8 GHz Intel Ivy Bridge-EP processors. The 380 compute nodes each have 64 GB of memory and 500 GB of local HDD storage; 360 of them carry two Intel Xeon Phi 7120P coprocessors, and the remaining 20 carry one Intel Xeon Phi 7120P coprocessor and one NVIDIA Tesla K20X GPU. The system is available to LSU and XSEDE users through an allocation process. XSEDE users may submit allocation and account requests through the XSEDE User Portal. LSU users need their LSU HPC credentials to access SuperMIC (see: LSU HPC account request) and an LSU HPC allocation (see: LSU HPC allocation request) to run production jobs on the system.
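Most of SuperMIC's floating-point capability comes from the Xeon Phi coprocessors, which programs commonly drive through compiler-directed offload regions. As one possible illustration (not taken from the user guide), here is a minimal C sketch using the Intel compiler's Language Extensions for Offload (LEO). The array size and the reduction are made up for the example, and it assumes an Intel compiler with OpenMP support is available on the node:

    #include <stdio.h>

    #define N 1024  /* hypothetical problem size, for illustration only */

    int main(void) {
        double a[N];
        double sum = 0.0;

        for (int i = 0; i < N; i++)
            a[i] = (double)i;

        /* Offload the reduction to the first Xeon Phi coprocessor (mic:0).
           in(a) copies the array to the card; inout(sum) copies the result back. */
        #pragma offload target(mic:0) in(a) inout(sum)
        {
            #pragma omp parallel for reduction(+:sum)
            for (int i = 0; i < N; i++)
                sum += a[i];
        }

        printf("sum = %.1f\n", sum);  /* expected: 523776.0 */
        return 0;
    }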

[Gallery image: SuperMIC Racks]

See the User Guide for detailed usage information. The system includes:

  • 1 Interactive Node
    • Two 2.8GHz 10-Core Ivy Bridge-EP E5-2680 Xeon 64-bit Processors
    • One Intel Xeon Phi 7120P Coprocessor
    • 128GB DDR3 1866MHz RAM
    • 1TB HDD
    • 56 Gigabit/sec InfiniBand network interface
    • 10 Gigabit Ethernet network interface
    • Red Hat Enterprise Linux 6
  • 1 Interactive Node
    • Two 2.8GHz 10-Core Ivy Bridge-EP E5-2680 Xeon 64-bit Processors
    • One NVIDIA Tesla K20X 6GB GPU
    • 128GB DDR3 1866MHz RAM
    • 1TB HDD
    • 56 Gigabit/sec InfiniBand network interface
    • 10 Gigabit Ethernet network interface
    • Red Hat Enterprise Linux 6
  • 360 Compute Nodes
    • Two 2.8GHz 10-Core Ivy Bridge-EP E5-2680 Xeon 64-bit Processors
    • Two Intel Xeon Phi 7120P Coprocessors
    • 64GB DDR3 1866MHz RAM
    • 500GB HDD
    • 56 Gigabit/sec InfiniBand network interface
    • 1 Gigabit Ethernet network interface
    • Red Hat Enterprise Linux 6
  • 20 Hybrid Compute Nodes
    • Two 2.8GHz 10-Core Ivy Bridge-EP E5-2680 Xeon 64-bit Processors
    • One Intel Xeon Phi 7120P Coprocessor
    • One NVIDIA Tesla K20X 6GB GPU with GPUDirect support (see the device-query sketch after this list)
    • 64GB DDR3 1866MHz RAM
    • 500GB HDD
    • 56 Gigabit/sec InfiniBand network interface
    • 1 Gigabit Ethernet network interface
    • Red Hat Enterprise Linux 6
  • Cluster Storage
    • 840TB of high-performance Lustre disk storage
    • 5TB of NFS-mounted /home disk storage
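
As a quick sanity check on a hybrid node, the Tesla K20X can be enumerated through the CUDA runtime API. The sketch below is illustrative only (not from the user guide); it assumes a CUDA toolkit is available and should build with something like nvcc query.c -o query:

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void) {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        if (err != cudaSuccess) {
            fprintf(stderr, "cudaGetDeviceCount failed: %s\n",
                    cudaGetErrorString(err));
            return 1;
        }
        /* On a hybrid node this should report one device: the Tesla K20X. */
        for (int i = 0; i < count; i++) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("Device %d: %s, %.1f GB global memory, compute capability %d.%d\n",
                   i, prop.name, prop.totalGlobalMem / 1073741824.0,
                   prop.major, prop.minor);
        }
        return 0;
    }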
