Slurm Workload Manager
The Slurm Workload Manager, formerly known as Simple Linux Utility for Resource Management, or simply Slurm, is a free and open-source job scheduler for Linux and Unix-like kernels, used by many of the world's supercomputers and computer clusters.
It provides three key functions:
- allocating exclusive and/or non-exclusive access to resources to users for some duration of time so they can perform work,
- providing a framework for starting, executing, and monitoring work, typically a parallel job such as one using the Message Passing Interface (MPI), on a set of allocated nodes, and
- arbitrating contention for resources by managing a queue of pending jobs.
Slurm uses a best-fit algorithm based on Hilbert curve scheduling or fat tree network topology to optimize locality of task assignments on parallel computers.
History
Slurm began development as a collaborative effort primarily by Lawrence Livermore National Laboratory, SchedMD, Linux NetworX, Hewlett-Packard, and Groupe Bull as a free-software resource manager. The first release came in 2002. It was inspired by the closed-source Quadrics RMS and shares a similar syntax. The name is a reference to the soda in Futurama. Over 250 people around the world have contributed to the project. It has since evolved into a sophisticated batch scheduler capable of satisfying the requirements of many large computer centers. The TOP500 list of the most powerful computers in the world indicates that Slurm is the workload manager on more than half of the top ten systems.
Structure
Slurm's design is very modular, with about 100 optional plugins. In its simplest configuration, it can be installed and configured in a couple of minutes. More sophisticated configurations provide database integration for accounting, management of resource limits, and workload prioritization.
Features
Slurm features include:
- No single point of failure, backup daemons, fault-tolerant job options
- Highly scalable
- High performance
- Free and open-source software
- Highly configurable with about 100 plugins
- Fair-share scheduling with hierarchical bank accounts
- Preemptive and gang scheduling
- Integrated with database for accounting and configuration
- Resource allocations optimized for network topology and on-node topology
- Advanced reservation
- Idle nodes can be powered down
- Different operating systems can be booted for each job
- Scheduling for generic resources
- Real-time accounting down to the task level
- Resource limits by user or bank account
- Accounting for power consumption by job
- Support of IBM Parallel Environment
- Support for job arrays
- Job profiling
- Sophisticated multifactor job prioritization algorithms
- Support for MapReduce+
- Support for burst buffer that accelerates scientific data movement
- Support for heterogeneous generic resources
- Automatic job requeue policy based on exit value
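As a concrete illustration of the simple configuration mentioned above, the whole setup can live in a single slurm.conf file describing the control host, the compute nodes, and a default partition. The sketch below is a hypothetical minimal example; all host names and hardware figures are illustrative, not defaults:

```ini
# slurm.conf -- minimal illustrative configuration (host names and sizes are made up)
ClusterName=mycluster
SlurmctldHost=head01              # node where the slurmctld control daemon runs

# Compute nodes and the hardware slurmd reports for them
NodeName=node[01-04] CPUs=8 RealMemory=32000 State=UNKNOWN

# One default partition (job queue) spanning all nodes, 60-minute time limit
PartitionName=debug Nodes=node[01-04] Default=YES MaxTime=60 State=UP
```

A production cluster would typically add accounting, scheduling, and cgroup plugin settings on top of a skeleton like this.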
Supported platforms
Recent Slurm releases run only on Linux. Older versions had been ported to a few other POSIX-based operating systems, including the BSDs, but this is no longer feasible because Slurm now requires cgroups for core operations. Clusters running operating systems other than Linux need to use a different batch system, such as LPJS. Slurm also supports several unique computer architectures, including:
- IBM BlueGene/Q models, including the 20 petaflop IBM Sequoia
- Cray XT, XE and Cascade
- Tianhe-2, a 33.9 petaflop system with 32,000 Intel Ivy Bridge chips and 48,000 Intel Xeon Phi chips, with a total of 3.1 million cores
- IBM Parallel Environment
- Anton
License
Slurm is available under the GNU General Public License v2.
Commercial support
In 2010, the developers of Slurm founded SchedMD, which maintains the canonical source and provides development, level 3 commercial support, and training services. Commercial support is also available from Bull, Cray, and Science + Computing.
Usage
The Slurm system has three main parts:
- slurmctld, a central control daemon running on a single control node;
- many compute nodes, each with one or more slurmd daemons;
- clients that connect to the manager node, often via ssh.
For clients, the main commands are srun, sbatch, squeue, and scancel. Jobs can be run in batch mode or interactive mode. For interactive mode, a compute node starts a shell, connects the client to it, and runs the job there, so the user can observe and interact with the job while it is running. Interactive jobs are typically used for initial debugging; once debugged, the same job is usually submitted with sbatch. For a batch-mode job, its stdout and stderr outputs are typically directed to text files for later inspection.
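As an illustration of batch mode, a minimal job script might look like the following; the job name, task count, and time limit are arbitrary example values, and the script only runs on a host that is part of a Slurm cluster:

```shell
#!/bin/bash
#SBATCH --job-name=hello          # name shown in squeue output
#SBATCH --output=hello_%j.out     # file for stdout/stderr; %j expands to the job ID
#SBATCH --ntasks=4                # number of tasks (e.g. MPI ranks)
#SBATCH --time=00:10:00           # wall-clock limit, HH:MM:SS

# srun launches the tasks across the allocated resources
srun hostname
```

Such a script would be submitted with sbatch hello.sh; squeue then shows its state, and scancel with the job ID removes it. For interactive work, srun --pty bash requests an allocation and drops the user into a shell on a compute node.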