Concurrency (computer science)
In computer science, concurrency is the ability of a system to execute multiple tasks, whether by running them simultaneously or by time-sharing, while sharing resources and managing interactions among them. Concurrency improves responsiveness, throughput, and scalability in modern computing, including:
- Operating systems and embedded systems
- Distributed systems, parallel computing, and high-performance computing
- Database systems, web applications, and cloud computing
Related concepts
Concurrency is a broader concept that encompasses several related ideas, including:
- Parallelism. Parallelism executes tasks independently across multiple CPU cores, whereas concurrency allows a program to maintain multiple threads of control, which may be realized through parallelism or through time-slicing on a single core. A program may exhibit parallelism only, concurrency only, both, or neither; a sketch contrasting the two follows this list.
- Multi-threading and multi-processing
- Synchronization
- Coordination
- Concurrency control
- Inter-process communication
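The following minimal sketch contrasts concurrency and parallelism in Python; count_primes is an illustrative stand-in for any CPU-bound task, not part of any cited system. In CPython, the global interpreter lock means the thread pool interleaves the tasks (concurrency by time-slicing), while the process pool can run them on separate cores (parallelism).

```python
# Sketch: the same CPU-bound task run concurrently (threads, time-sliced)
# and in parallel (processes, separate cores). Illustrative only.
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def count_primes(limit: int) -> int:
    """Illustrative CPU-bound task: count the primes below `limit`."""
    return sum(n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
               for n in range(limit))

if __name__ == "__main__":
    tasks = [50_000] * 4
    # Concurrency via time-slicing: one interpreter, interleaved threads.
    with ThreadPoolExecutor(max_workers=4) as pool:
        print(list(pool.map(count_primes, tasks)))
    # Parallelism: separate processes, potentially separate CPU cores.
    with ProcessPoolExecutor(max_workers=4) as pool:
        print(list(pool.map(count_primes, tasks)))
```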
Issues
Because computations in a concurrent system can interact with each other while they execute, the number of possible execution paths can be extremely large, and the resulting outcome can be indeterminate. Concurrent use of shared resources is a common source of this indeterminacy and can lead to problems such as deadlock and resource starvation. Designing concurrent systems therefore often entails finding reliable techniques for coordinating execution, data exchange, memory allocation, and scheduling, so as to minimize response time and maximize throughput.
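As a concrete illustration of this indeterminacy, the following Python sketch (illustrative, not drawn from any particular system) lets two threads update a shared counter without coordination; because the read-modify-write sequence can interleave, the final value varies from run to run.

```python
# Sketch of indeterminacy from a shared resource: two threads perform
# unsynchronized read-modify-write updates on a shared counter.
import threading

counter = 0

def increment(times: int) -> None:
    global counter
    for _ in range(times):
        tmp = counter      # read
        tmp += 1           # modify
        counter = tmp      # write -- another thread may have run in between

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# 200000 under sequential execution; interleavings can lose updates.
print(counter)
```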
Theory
Concurrency theory has been an active field of research in theoretical computer science. One of the first proposals was Carl Adam Petri's seminal work on Petri nets in the early 1960s. In the years since, a wide variety of formalisms have been developed for modeling and reasoning about concurrency.
Models
A number of formalisms for modeling and understanding concurrent systems have been developed, including:
- The parallel random-access machine
- The actor model
- Computational bridging models such as the bulk synchronous parallel model
- Petri nets
- Process calculi
  - Calculus of communicating systems
  - Communicating sequential processes
  - π-calculus
- Tuple spaces, e.g., Linda
- Simple Concurrent Object-Oriented Programming
- Reo Coordination Language
- Trace monoids
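Several of these formalisms, notably the actor model and the process calculi, treat concurrency as independent agents that interact only by exchanging messages. The following Python sketch illustrates that message-passing style in miniature, using queues as channels; it is an informal illustration, not an implementation of any particular calculus.

```python
# Sketch of message-passing concurrency in the style of the actor/CSP
# family of models: independent workers communicate only through channels.
import threading
import queue

def squarer(inbox: queue.Queue, outbox: queue.Queue) -> None:
    """A 'process' that reacts to each message it receives."""
    while True:
        msg = inbox.get()
        if msg is None:          # conventional stop signal
            outbox.put(None)
            return
        outbox.put(msg * msg)

inbox: queue.Queue = queue.Queue()
outbox: queue.Queue = queue.Queue()
threading.Thread(target=squarer, args=(inbox, outbox), daemon=True).start()

for n in [1, 2, 3]:
    inbox.put(n)
inbox.put(None)

while (result := outbox.get()) is not None:
    print(result)                # 1, 4, 9
```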
The proliferation of different models of concurrency has motivated some researchers to develop ways to unify these different theoretical models. For example, Lee and Sangiovanni-Vincentelli have demonstrated that a so-called "tagged-signal" model can be used to provide a common framework for defining the denotational semantics of a variety of different models of concurrency, while Nielsen, Sassone, and Winskel have demonstrated that category theory can be used to provide a similar unified understanding of different models.
The Concurrency Representation Theorem in the actor model provides a fairly general way to represent concurrent systems that are closed in the sense that they do not receive communications from outside. The mathematical denotation of a closed system S is constructed from increasingly better approximations: starting from an initial behavior ⊥_S, a behavior-approximating function progression_S is applied repeatedly, and the denotation Denote_S is taken as the limit of these approximations:
Denote_S ≡ ⊔_{i∈ω} progression_S^i(⊥_S)
In this way, S can be mathematically characterized in terms of all its possible behaviors.
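Written out (an elaboration, assuming as in the standard construction that progression_S is monotone with respect to the approximation ordering ⊑), the successive approximations form an ascending chain whose least upper bound is the denotation:

```latex
\bot_S \;\sqsubseteq\; \mathrm{progression}_S(\bot_S)
       \;\sqsubseteq\; \mathrm{progression}_S^{2}(\bot_S)
       \;\sqsubseteq\; \cdots,
\qquad
\mathrm{Denote}_S \;\equiv\; \bigsqcup_{i \in \omega} \mathrm{progression}_S^{\,i}(\bot_S)
```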
Logics
Various types of temporal logic can be used to help reason about concurrent systems. Some of these logics, such as linear temporal logic and computation tree logic, allow assertions to be made about the sequences of states that a concurrent system can pass through. Others, such as action computation tree logic, Hennessy–Milner logic, and Lamport's temporal logic of actions, build their assertions from sequences of actions. The principal application of these logics is in writing specifications for concurrent systems.
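For example, a linear temporal logic specification for a resource arbiter might state that every request is eventually granted, written with the standard "always" (□) and "eventually" (◇) operators; request and grant are illustrative proposition names:

```latex
\Box\,(\mathit{request} \rightarrow \Diamond\,\mathit{grant})
```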
Practice
Concurrent programming encompasses the programming languages and algorithms used to implement concurrent systems. It is usually considered more general than parallel programming because it can involve arbitrary and dynamic patterns of communication and interaction, whereas parallel systems generally have a predefined, well-structured communication pattern. The base goals of concurrent programming include correctness, performance, and robustness. Concurrent systems such as operating systems and database management systems are generally designed to operate indefinitely, including automatic recovery from failure, and not to terminate unexpectedly. Some concurrent systems implement a form of transparent concurrency, in which concurrent computational entities may compete for and share a single resource, but the complexities of this competition and sharing are shielded from the programmer.
Because they use shared resources, concurrent systems generally require the inclusion of some kind of arbiter somewhere in their implementation to control access to those resources. The use of arbiters introduces the possibility of indeterminacy in concurrent computation, which has major implications for practice, including correctness and performance. For example, arbitration introduces unbounded nondeterminism, which raises issues for model checking because it causes an explosion in the state space and can even produce models with an infinite number of states.
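As a minimal sketch of an arbiter in practice (illustrative Python, not a specific system's API), a mutual-exclusion lock grants at most one thread access to a shared resource at a time; the order in which threads win the lock is left to the scheduler, which is precisely the indeterminacy described above.

```python
# Sketch of an arbiter: a lock grants exclusive access to a shared
# resource; the order in which threads win the lock is nondeterministic.
import threading

resource_lock = threading.Lock()
log: list[str] = []

def use_resource(name: str) -> None:
    with resource_lock:          # the arbiter: at most one holder at a time
        log.append(name)         # critical section on the shared resource

threads = [threading.Thread(target=use_resource, args=(f"thread-{i}",))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(log)   # all four names appear, but their order varies run to run
```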
Some concurrent programming models include coprocesses (coroutines) and deterministic concurrency. In these models, threads of control explicitly yield their timeslices, either to the system or to another process.
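A minimal sketch of this cooperative style, using Python generators as the yielding tasks and a hypothetical round-robin scheduler named run: each task runs until it explicitly yields, so the interleaving is fixed by the scheduler rather than by arbitration, and every run produces the same order.

```python
# Sketch of cooperative (deterministic) concurrency: generator-based
# tasks explicitly yield control to a round-robin scheduler.
from collections import deque

def task(name: str, steps: int):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                     # explicitly give up the timeslice

def run(tasks) -> None:
    ready = deque(tasks)
    while ready:
        t = ready.popleft()
        try:
            next(t)               # run the task until its next yield
            ready.append(t)       # re-queue it at the back
        except StopIteration:
            pass                  # task finished

run([task("A", 2), task("B", 2)])
# Always prints A: step 0, B: step 0, A: step 1, B: step 1 --
# the interleaving is deterministic.
```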