National Computational Infrastructure
The National Computational Infrastructure is a high-performance computing and data services facility, located at the Australian National University in Canberra, Australian Capital Territory. The NCI is supported by the Australian Government's National Collaborative Research Infrastructure Strategy, with operational funding provided through a formal collaboration incorporating CSIRO, the Bureau of Meteorology, the Australian National University, Geoscience Australia, the Australian Research Council, and a number of research-intensive universities and medical research institutes.
Access to computational resources is provided to funding partners as well as to researchers awarded grants under the National Computational Merit Allocation Scheme.
The current director is Andrew Rohl.
Notable staff
- Lindsay Botten – former director
- Chris Pigram – former CEO of Geoscience Australia and acting director after the retirement of Lindsay Botten
- Sean Smith – former director
- Andrew Rohl – current director
Facility
Computer systems
As of June 2020, NCI operates two main high-performance computing installations:
- Gadi, meaning 'to search for' in the local Ngunnawal language, a 9.26 PetaFLOP high-performance distributed-memory cluster (per-node figures are sketched after this list) consisting of:
- * 145,152 cores across 3024 nodes
- * 160 nodes each containing four Nvidia V100 GPUs
- * 567 Terabytes of main memory
- * 20 Petabytes of fast storage
- * 47 Petabytes of storage for large data files
- * 50 Petabytes of tape storage for archival
- * Mellanox HDR InfiniBand interconnect in a Dragonfly+ topology
- Tenjin, a 67 TeraFLOP bespoke high-performance partner cloud, consisting of:
- * 1600 Intel Xeon Sandy Bridge cores
- * 25 Terabytes of main memory
- * 160 Terabytes of solid-state disk
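For orientation, the per-node figures implied by the totals quoted above can be derived with simple arithmetic. The short sketch below is not NCI code; it uses only the numbers listed for Gadi and adds nothing beyond the division itself.

```python
# Derive approximate per-node figures for Gadi from the totals listed above.
# All inputs are taken from the list; the arithmetic is the only step added here.

total_cores = 145_152       # total CPU cores
total_nodes = 3_024         # total compute nodes
total_memory_tb = 567       # total main memory, terabytes (decimal)

cores_per_node = total_cores / total_nodes                   # 48 cores per node
memory_per_node_gb = total_memory_tb * 1_000 / total_nodes   # ~187.5 GB per node

print(f"Gadi: {cores_per_node:.0f} cores, ~{memory_per_node_gb:.0f} GB memory per node")
```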
Data services and storage
Datasets
NCI hosts multiple datasets that can be used on its computational systems, including:
- Aboriginal and Torres Strait Islander Data Archive, which provides Australian Indigenous research data
- Australian Astronomy Optical Data Repository including:
- * Anglo-Australian Telescope current and selected historical datasets
- * Southern Sky Survey, using the ANU's robotic SkyMapper telescope at Mount Stromlo Observatory
- Australian National Geophysical Collection including:
- * Airborne geophysics data
- * Gravity data set
- * Seismic survey
- High-resolution 'raw' Indian Ocean sea floor data generated as part of the search for Malaysia Airlines Flight 370
Research
- Southern Sky Survey, using the ANU's robotic SkyMapper telescope at Mount Stromlo Observatory
- The Australian Community Climate and Earth System Simulator
- Medical and materials research
History
In 2007, the Australian Partnership for Advanced Computing (APAC) began its evolution into the present NCI collaboration.
The table below provides a comprehensive history of supercomputer specifications at the NCI and its antecedents.
Vayu
The Vayu computer cluster, the predecessor of Raijin, was based on a Sun Microsystems Sun Constellation System. The name Vayu was taken from Sun's code name for the compute blade within the system; Vayu is a Hindu god whose name means "wind". The cluster was officially launched on 16 November 2009 by the Australian Government's Minister for Innovation, Industry, Science and Research, Senator Kim Carr, after provisional acceptance on 18 September 2009. Vayu first operated in September 2009 with one-eighth of its final computing power, and the full system was commissioned in March 2010. Vayu had the following performance characteristics (a rough check of the peak figure follows the list):
- Peak performance: 140 TFLOPS
- Sustained performance: 250,000 SPECfp rate
- Resources: 110 million hours per annum
- 11,936 cores in 1,492 nodes in Sun X6275 blades, each node containing:
- * two quad-core 2.93 GHz Intel Nehalem CPUs
- * 24 GB of DDR3-1333 memory
- * 24 GB of flash DIMM for swap and job scratch
- Total: 36.9 TB of RAM on compute nodes
- Dual-socket, quad-core Sun X4170, X4270 and X4275 servers for Lustre file serving
- Approximately 835 TB of global user storage
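The quoted peak is consistent with the listed core count and clock speed if each Nehalem core retires four double-precision floating-point operations per cycle (its SSE add and multiply units combined). That per-cycle figure is an assumption not stated above, so the sketch below is only a rough plausibility check, not a figure from the source.

```python
# Rough plausibility check of Vayu's quoted 140 TFLOPS peak.
# The 4 FLOPs/cycle/core value is an assumption based on Nehalem's SSE units.

nodes = 1_492
cores_per_node = 2 * 4             # two quad-core CPUs per node
clock_hz = 2.93e9                  # 2.93 GHz
flops_per_cycle_per_core = 4       # assumed double-precision FLOPs per cycle

peak_tflops = nodes * cores_per_node * clock_hz * flops_per_cycle_per_core / 1e12
print(f"Theoretical peak: {peak_tflops:.1f} TFLOPS")  # ~139.9 TFLOPS
```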
System software for the Vayu cluster included:
- CentOS 5.4 Linux distribution
- the oneSIS cluster software management system
- the Lustre cluster file system
- the National Facility's variant of the OpenPBS batch queuing system