Getting started on Big Red II
Big Red II is Indiana University's main system for high-performance parallel computing. With a theoretical peak performance (Rpeak) of one thousand trillion floating-point operations per second (1 petaFLOPS) and a maximum achieved performance (Rmax) of 596.4 teraFLOPS, Big Red II is among the world's fastest research supercomputers. Owned and operated solely by IU, Big Red II is designed to accelerate discovery in a wide variety of fields, including medicine, physics, fine arts, and global climate research, and to enable effective analysis of large, complex data sets (i.e., big data).
Big Red II is a Cray XE6/XK7 supercomputer with a hybrid architecture providing a total of 1,020 compute nodes:
- 344 CPU-only compute nodes, each containing two AMD Opteron 16-core Abu Dhabi x86_64 CPUs and 64 GB of RAM
- 676 CPU/GPU compute nodes, each containing one AMD Opteron 16-core Interlagos x86_64 CPU, one NVIDIA Tesla K20 GPU accelerator with a single Kepler GK110 GPU, and 32 GB of RAM
Big Red II runs a proprietary variant of Linux called Cray Linux Environment (CLE). In CLE, compute elements run a lightweight kernel called Compute Node Linux (CNL), and the service nodes run SUSE Linux Enterprise Server (SLES). All compute nodes are connected through the Cray Gemini interconnect.
On this page:
- System overview
- Accounts, access, and user policies
- Programming environment
- Running jobs
- X forwarding and interactive jobs
- Application-specific help
- Getting help
Accounts, access, and user policies
- System access
- What are my responsibilities as a computer user at IU?
- Policies regarding UITS research systems
- Working with data containing PHI
Programming environment
- Using Modules to manage your software environment
- Available software
- Cray native mode (Extreme Scalability Mode, ESM) and Cluster Compatibility Mode (CCM)
- Compiling C, C++, and Fortran programs on Big Red II at IU (see the sketch after this list)
- Compiling Java programs on Big Red II at IU
- Using CUDA on Big Red II
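For orientation, the sketch below is a minimal MPI "hello world" in C. It is only an illustration: the module name and compiler wrapper mentioned in the comments (PrgEnv-gnu, the cc wrapper) follow common Cray conventions and are assumptions here, so follow the compiling documentation linked above for the modules and options that actually apply on Big Red II. In native (ESM) mode, the resulting executable is typically launched on the compute nodes with aprun rather than run directly on a login node.

```c
/*
 * hello_mpi.c -- minimal MPI example (illustrative sketch, not the
 * authoritative Big Red II procedure).
 *
 * On a Cray system, MPI programs are normally built with the compiler
 * wrappers (cc, CC, ftn), which link the Cray MPI libraries
 * automatically. For example, after loading a programming environment
 * module such as PrgEnv-gnu (an assumed module name):
 *
 *     cc -o hello_mpi hello_mpi.c
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of ranks */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```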
Running jobs
- Queue information
- Running batch jobs on Big Red II
- Running OpenMP or hybrid OpenMP/MPI jobs (see the sketch after this list)
- Using PCP to bundle multiple serial jobs to run them in parallel
- Monitoring memory and CPU usage
- Using the IU Cyberinfrastructure Gateway to monitor batch jobs on Big Red II
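As a sketch of the hybrid model mentioned above, the following C program combines MPI ranks with OpenMP threads. The compile flag and the aprun options shown in the comments (-n for the number of ranks, -d for threads per rank) follow general Cray usage and the values are illustrative assumptions; the batch-job and OpenMP/MPI pages linked above give the queue names and resource requests to use on Big Red II.

```c
/*
 * hybrid_hello.c -- minimal hybrid OpenMP/MPI sketch (illustrative).
 *
 * Example build with the Cray wrapper in a GNU programming environment
 * (flag is an assumption; other compilers use different options):
 *     cc -fopenmp -o hybrid_hello hybrid_hello.c
 *
 * Example launch line inside a batch script (values are examples only):
 *     export OMP_NUM_THREADS=8
 *     aprun -n 4 -d 8 ./hybrid_hello     # 4 MPI ranks, 8 threads each
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each MPI rank spawns its own OpenMP thread team. */
    #pragma omp parallel
    {
        printf("rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```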
X forwarding and interactive jobs
Application-specific help
- ARPACK - numerical library
- CPMD - molecular dynamics software
- GAUSS - matrix programming language
- GROMACS - molecular dynamics software
- MATLAB - numerical computing environment
- NAMD - molecular dynamics software
- SAS - statistics software
- Stata - statistics software
- Subversion - version-control software
- WRF - weather research and forecasting model
Getting help
- If you have a system-specific question about Big Red II, Karst, Mason, or the Research Database Complex (RDC), contact the High Performance Systems (HPS) team.
- If you have questions about the Scholarly Data Archive (SDA), contact the Research Storage team.
- If you have questions about shared scratch or project space on the Data Capacitor II or Data Capacitor Wide Area Network (DC-WAN) file system, contact the High Performance File Systems (HPFS) team.
- If you have questions about the development tools, compilers, scientific or numerical libraries, or debuggers available on the research computing systems, contact the Scientific Applications and Performance Tuning (SciAPT) team.
- If you have questions about the statistical and mathematical applications available on the research computing systems, contact the Research Analytics group.
- If you have questions about the bioinformatics and genome analysis packages available on the research computing systems, email the National Center for Genome Analysis Support (NCGAS).
For general inquiries about UITS Research Technologies systems and services, complete and submit the Research Technologies request for help form.