In simple terms, risk is the possibility of something bad happening. Risk involves uncertainty about the effects or implications of an activity with respect to something that humans value, often focusing on negative, undesirable consequences. Many different definitions have been proposed. The international standard definition of risk for common understanding in different applications is “effect of uncertainty on objectives”.
The understanding of risk, the common methods of analysis and assessment, the measurements of risk and even the definition of risk differ in different practice areas.


Oxford English Dictionary

The Oxford English Dictionary cites the earliest use of the word in English as of 1621, and the spelling as risk from 1655. While including several other definitions, the OED 3rd edition defines risk as:
the possibility of loss, injury, or other adverse or unwelcome circumstance; a chance or situation involving such a possibility.

The Cambridge Advanced Learner’s Dictionary gives a simple summary, defining risk as “the possibility of something bad happening”.

International Organization for Standardization

The International Organization for Standardization Guide 73 provides basic vocabulary to develop common understanding on risk management concepts and terms across different applications. ISO Guide 73:2009 defines risk as:
effect of uncertainty on objectives
Note 1: An effect is a deviation from the expected – positive or negative.
Note 2: Objectives can have different aspects and can apply at different levels.
Note 3: Risk is often characterized by reference to potential events and consequences or a combination of these.
Note 4: Risk is often expressed in terms of a combination of the consequences of an event and the associated likelihood of occurrence.
Note 5: Uncertainty is the state, even partial, of deficiency of information related to, understanding or knowledge of, an event, its consequence, or likelihood.

This definition was developed by an international committee representing over 30 countries and is based on the input of several thousand subject matter experts. It was first adopted in 2002. Its complexity reflects the difficulty of satisfying fields that use the term risk in different ways. Some restrict the term to negative impacts, while others include positive impacts.
ISO 31000:2018 “Risk management — Guidelines” uses the same definition with a simpler set of notes.


Many other definitions of risk have been influential.
Some resolve these differences by arguing that the definition of risk is subjective. For example:
No definition is advanced as the correct one, because there is no one definition that is suitable for all problems. Rather, the choice of definition is a political one, expressing someone’s views regarding the importance of different adverse effects in a particular situation.

The Society for Risk Analysis concludes that “experience has shown that to agree on one unified set of definitions is not realistic”. The solution is “to allow for different perspectives on fundamental concepts and make a distinction between overall qualitative definitions and their associated measurements.”

Practice areas

The understanding of risk, the common methods of management, the measurements of risk and even the definition of risk differ in different practice areas. This section provides links to more detailed articles on these areas.

Business risk

Business risks arise from uncertainty about the profit of a commercial business due to unwanted events such as changes in consumer tastes and preferences, strikes, increased competition, changes in government policy, obsolescence, etc.
Business risks are controlled using techniques of risk management. In many cases they may be managed by intuitive steps to prevent or mitigate risks, by following regulations or standards of good practice, or by insurance. Enterprise risk management includes the methods and processes used by organizations to manage risks and seize opportunities related to the achievement of their objectives.

Economic risk

Economics is concerned with the production, distribution and consumption of goods and services. Economic risk arises from uncertainty about economic outcomes. For example, economic risk may be the chance that macroeconomic conditions like exchange rates, government regulation, or political stability will affect an investment or a company’s prospects.
In economics, as in finance, risk is often defined as quantifiable uncertainty about gains and losses.

Environmental risk

Environmental risk arises from environmental hazards or environmental issues.
In the environmental context, risk is defined as “The chance of harmful effects to human health or to ecological systems”.
Environmental risk assessment aims to assess the effects of stressors, often chemicals, on the local environment.

Financial risk

Finance is concerned with money management and acquiring funds. Financial risk arises from uncertainty about financial returns. It includes market risk, credit risk, liquidity risk and operational risk.
In finance, risk is the possibility that the actual return on an investment will be different from its expected return. This includes not only "downside risk" but also "upside risk". In Knight’s definition, risk is often defined as quantifiable uncertainty about gains and losses. This contrasts with Knightian uncertainty, which cannot be quantified.
Financial risk modeling determines the aggregate risk in a financial portfolio. Modern portfolio theory measures risk using the variance of asset prices. More recent risk measures include value at risk.
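As a rough illustration of two such measures, the sketch below computes the variance of a series of returns and a simple historical value at risk. The function names and the toy return series are illustrative assumptions, not taken from any particular library or dataset:

```python
import statistics

def variance_risk(returns):
    """Portfolio-theory style risk: the variance of past returns."""
    return statistics.pvariance(returns)

def historical_var(returns, confidence=0.95):
    """Historical value at risk: a loss exceeded only (1 - confidence)
    of the time in the historical sample, reported as a positive number."""
    ordered = sorted(returns)                      # worst returns first
    cutoff = int((1 - confidence) * len(ordered))  # index of the tail quantile
    return -ordered[cutoff]

# Toy monthly returns for a single asset
returns = [-0.10, -0.02, 0.00, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07]
var_95 = historical_var(returns)  # 0.10: the worst 5% of months lose 10% or more
```

Real implementations interpolate between sample points and handle small samples more carefully; this version simply takes the lowest return in the tail.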
Because investors are generally risk averse, investments with greater inherent risk must promise higher expected returns.
Financial risk management uses financial instruments to manage exposure to risk. It includes the use of a hedge to offset risks by adopting a position in an opposing market or investment.
In financial audit, audit risk refers to the potential that an audit report may fail to detect material misstatement due to either error or fraud.

Health risk

Health risks arise from disease and other biological hazards.
Epidemiology is the study and analysis of the distribution, patterns and determinants of health and disease. It is a cornerstone of public health, and shapes policy decisions by identifying risk factors for disease and targets for preventive healthcare.
In the context of public health, risk assessment is the process of characterizing the nature and likelihood of a harmful effect to individuals or populations from certain human activities. Health risk assessment can be mostly qualitative or can include statistical estimates of probabilities for specific populations.
A health risk assessment is a questionnaire screening tool, used to provide individuals with an evaluation of their health risks and quality of life.

Health, safety, and environment risks

Health, safety, and environment are separate practice areas; however, they are often linked. The reason is typically to do with organizational management structures; however, there are strong links among these disciplines. One of the strongest links is that a single risk event may have impacts in all three areas, albeit over differing timescales. For example, the uncontrolled release of radiation or a toxic chemical may have immediate short-term safety consequences, more protracted health impacts, and much longer-term environmental impacts. Events such as Chernobyl, for example, caused immediate deaths, and in the longer term, deaths from cancers, and left a lasting environmental impact leading to birth defects, impacts on wildlife, etc.

Information technology risk

Information technology (IT) is the use of computers to store, retrieve, transmit, and manipulate data. IT risk arises from the potential that a threat may exploit a vulnerability to breach security and cause harm. IT risk management applies risk management methods to IT to manage IT risks. Computer security is the protection of IT systems by managing IT risks.
Information security is the practice of protecting information by mitigating information risks. While IT risk is narrowly focused on computer security, information risks extend to other forms of information.

Insurance risk

Insurance is a risk treatment option which involves risk sharing. It can be considered as a form of contingent capital and is akin to purchasing an option in which the buyer pays a small premium to be protected from a potential large loss.
Insurance risk is often taken by insurance companies, who then bear a pool of risks including market risk, credit risk, operational risk, interest rate risk, mortality risk, longevity risks, etc.
The term “risk” has a long history in insurance and has acquired several specialised definitions, including “the subject-matter of an insurance contract”, “an insured peril” as well as the more common “possibility of an event occurring which causes injury or loss”.

Occupational risk

Occupational health and safety is concerned with occupational hazards experienced in the workplace.
The Occupational Health and Safety Assessment Series standard OHSAS 18001 in 1999 defined risk as the “combination of the likelihood and consequence of a specified hazardous event occurring”. In 2018 this was replaced by ISO 45001 “Occupational health and safety management systems”, which uses the ISO Guide 73 definition.

Project risk

A project is an individual or collaborative undertaking planned to achieve a specific aim. Project risk is defined as, "an uncertain event or condition that, if it occurs, has a positive or negative effect on a project’s objectives”. Project risk management aims to increase the likelihood and impact of positive events and decrease the likelihood and impact of negative events in the project.

Safety risk

Safety is concerned with a variety of hazards that may result in accidents causing harm to people, property and the environment. In the safety field, risk is typically defined as the “likelihood and severity of hazardous events”. Safety risks are controlled using techniques of risk management.
A high reliability organisation involves complex operations in environments where catastrophic accidents could occur. Examples include aircraft carriers, air traffic control, aerospace and nuclear power stations. Some HROs manage risk in a highly quantified way. The technique is usually referred to as probabilistic risk assessment. See WASH-1400 for an example of this approach. The incidence rate can also be reduced by providing better occupational health and safety programmes.

Security risk

Security is freedom from, or resilience against, potential harm caused by others.
A security risk is "any event that could result in the compromise of organizational assets i.e. the unauthorized use, loss, damage, disclosure or modification of organizational assets for the profit, personal interest or political interests of individuals, groups or other entities."
Security risk management involves protection of assets from harm caused by deliberate acts.

Risk assessment and analysis

Since risk assessment and management is essential in security management, both are tightly related. Security assessment methodologies like CRAMM contain risk assessment modules as an important part of the first steps of the methodology. On the other hand, risk assessment methodologies like Mehari evolved to become security assessment methodologies. An ISO standard on risk management was published under code ISO 31000 on 13 November 2009.

Quantitative analysis

There are many formal methods used to "measure" risk.
Often the probability of a negative event is estimated by using the frequency of past similar events. Probabilities for rare failures may be difficult to estimate. This makes risk assessment difficult in hazardous industries such as nuclear energy, where failures are rare while the harmful consequences of failure are severe.
Statistical methods may also require the use of a cost function, which in turn may require the calculation of the cost of loss of a human life. This is a difficult problem. One approach is to ask what people are willing to pay to insure against death or radiological release, but as the answers depend very strongly on the circumstances it is not clear that this approach is effective.
Risk is often measured as the expected value of an undesirable outcome. This combines the probabilities of various possible events and some assessment of the corresponding harm into a single value. See also Expected utility. The simplest case is a binary possibility of Accident or No accident. The associated formula for calculating risk is then:
Risk = (probability of the accident occurring) × (expected loss in case of the accident)
For example, if performing activity X has a probability of 0.01 of suffering an accident of type A, with a loss of 1000, then the total risk is a loss of 10, the product of 0.01 and 1000.
Situations are sometimes more complex than the simple binary possibility case. In a situation with several possible accidents, total risk is the sum of the risks for each different accident, provided that the outcomes are comparable:
Risk = sum over all accidents i of (probability of accident i occurring) × (expected loss from accident i)
For example, if performing activity X has a probability of 0.01 of suffering an accident of A, with a loss of 1000, and a probability of 0.000001 of suffering an accident of type B, with a loss of 2,000,000, then total loss expectancy is 12, which is equal to a loss of 10 from an accident of type A and 2 from an accident of type B.
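The expected-loss arithmetic above is simple enough to sketch in a few lines of Python; the probabilities and losses are the illustrative figures from the examples, not real data:

```python
def expected_loss(scenarios):
    """Total risk as the probability-weighted sum of losses.

    scenarios: list of (probability, loss) pairs for mutually
    exclusive accident types with comparable outcomes.
    """
    return sum(p * loss for p, loss in scenarios)

# Binary case: accident of type A with probability 0.01 and loss 1000
risk_a = expected_loss([(0.01, 1000)])  # 10.0

# Adding a rare but severe accident of type B
risk_total = expected_loss([(0.01, 1000), (0.000001, 2_000_000)])  # 12.0
```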
One of the first major uses of this concept was for the planning of the Delta Works in 1953, a flood protection program in the Netherlands, with the aid of the mathematician David van Dantzig. The kind of risk analysis pioneered there has become common today in fields like nuclear power, aerospace and the chemical industry.
In statistical decision theory, the risk function is defined as the expected value of a given loss function as a function of the decision rule used to make decisions in the face of uncertainty.

Fear as intuitive risk assessment

People may rely on their fear and hesitation to keep them out of the most profoundly unknown circumstances. Fear is a response to perceived danger. Risk could be said to be the way we collectively measure and share this "true fear"—a fusion of rational doubt, irrational fear, and a set of unquantified biases from our own experience.
The field of behavioural finance focuses on human risk-aversion, asymmetric regret, and other ways that human financial behaviour varies from what analysts call "rational". Risk in that case is the degree of uncertainty associated with a return on an asset. Recognizing and respecting the irrational influences on human decision making may do much to reduce disasters caused by naive risk assessments that presume rationality but in fact merely fuse many shared biases.

Anxiety, risk and decision making

Fear, anxiety and risk

According to one set of definitions, fear is a fleeting emotion ascribed to a particular object, while anxiety is a trait of fear that lasts longer and is not attributed to a specific stimulus. Some studies show a link between anxious behaviour and risk. Joseph Forgas introduced valence-based research, in which emotions are grouped as either positive or negative. Positive emotions, such as happiness, are believed to lead to more optimistic risk assessments, while negative emotions, such as anger, lead to pessimistic risk assessments. As an emotion with a negative valence, fear, and therefore anxiety, has long been associated with negative risk perceptions. Under the more recent appraisal tendency framework of Jennifer Lerner et al., which refutes Forgas' notion of valence and promotes the idea that specific emotions have distinctive influences on judgments, fear is still related to pessimistic expectations.
Psychologists have demonstrated that increases in anxiety and increases in risk perception are related and people who are habituated to anxiety experience this awareness of risk more intensely than normal individuals. In decision-making, anxiety promotes the use of biases and quick thinking to evaluate risk. This is referred to as affect-as-information according to Clore, 1983. However, the accuracy of these risk perceptions when making choices is not known.

Consequences of anxiety

Experimental studies show that brief surges in anxiety are correlated with surges in general risk perception. Anxiety exists when the presence of threat is perceived. As risk perception increases, it stays related to the particular source impacting the mood change as opposed to spreading to unrelated risk factors. This increased awareness of a threat is significantly more emphasised in people who are conditioned to anxiety. For example, anxious individuals who are predisposed to generating reasons for negative results tend to exhibit pessimism. Also, findings suggest that the perception of a lack of control and a lower inclination to participate in risky decision-making is associated with individuals experiencing relatively high levels of trait anxiety. In the previous instance, there is supporting clinical research that links emotional evaluation, the anxiety that is felt and the option of risk avoidance.
There are various views that anxious or fearful emotions cause people to access involuntary responses and judgments when making decisions that involve risk. Joshua A. Hemmerich et al. probe deeper into anxiety and its impact on choices by exploring "risk-as-feelings": quick, automatic, and natural reactions to danger that are based on emotions. This notion is supported by an experiment that engaged physicians in a simulated perilous surgical procedure. A measurable amount of the participants' anxiety about patient outcomes was related to previous regret and worry, and ultimately led the physicians to be guided by their feelings over any information or guidelines provided during the mock surgery. Additionally, their emotional levels, which adjusted along with the simulated patient status, suggest that anxiety level and the respective decision made are correlated with the type of bad outcome experienced in the earlier part of the experiment. Similarly, another view of anxiety and decision-making is dispositional anxiety, where emotional states, or moods, are cognitive and provide information about future pitfalls and rewards. When experiencing anxiety, individuals draw on personal judgments referred to as pessimistic outcome appraisals. These emotions promote biases for risk avoidance and promote risk tolerance in decision-making.

Dread risk

It is common for people to dread some risks but not others: They tend to be very afraid of epidemic diseases, nuclear power plant failures, and plane accidents but are relatively unconcerned about some highly frequent and deadly events, such as traffic crashes, household accidents, and medical errors. One key distinction of dreadful risks seems to be their potential for catastrophic consequences, threatening to kill a large number of people within a short period of time. For example, immediately after the 11 September attacks, many Americans were afraid to fly and took their car instead, a decision that led to a significant increase in the number of fatal crashes in the time period following the 9/11 event compared with the same time period before the attacks.
Different hypotheses have been proposed to explain why people fear dread risks. First, the psychometric paradigm suggests that high lack of control, high catastrophic potential, and severe consequences account for the increased risk perception and anxiety associated with dread risks. Second, because people estimate the frequency of a risk by recalling instances of its occurrence from their social circle or the media, they may overvalue relatively rare but dramatic risks because of their overpresence and undervalue frequent, less dramatic risks. Third, according to the preparedness hypothesis, people are prone to fear events that have been particularly threatening to survival in human evolutionary history. Given that in most of human evolutionary history people lived in relatively small groups, rarely exceeding 100 people, a dread risk, which kills many people at once, could potentially wipe out one's whole group. Indeed, research found that people's fear peaks for risks killing around 100 people but does not increase if larger groups are killed. Fourth, fearing dread risks can be an ecologically rational strategy. Besides killing a large number of people at a single point in time, dread risks reduce the number of children and young adults who would have potentially produced offspring. Accordingly, people are more concerned about risks killing younger, and hence more fertile, groups.

Anxiety and judgmental accuracy

The relationship between higher levels of risk perception and "judgmental accuracy" in anxious individuals remains unclear. There is a chance that "judgmental accuracy" is correlated with heightened anxiety. Constans conducted a study to examine how worry propensity might influence college students' estimation of their performance on an upcoming exam, and the study found that worry propensity predicted subjective risk bias, even after variance attributable to current mood and trait anxiety had been removed. Another experiment suggests that trait anxiety is associated with pessimistic risk appraisals, while controlling for depression.

Human factors

One of the growing areas of focus in risk management is the field of human factors, where behavioural and organizational psychology underpin our understanding of risk-based decision making. This field considers questions such as "how do we make risk-based decisions?" and "why are we irrationally more scared of sharks and terrorists than of motor vehicles and medications?"
In decision theory, regret can play a significant part in decision-making, distinct from risk aversion.
Framing is a fundamental problem with all forms of risk assessment. In particular, because of bounded rationality, the risk of extreme events is discounted because the probability is too low to evaluate intuitively. As an example, one of the leading causes of death is road accidents caused by drunk driving – partly because any given driver frames the problem by largely or totally ignoring the risk of a serious or fatal accident.
For instance, an extremely disturbing event may be ignored in analysis despite the fact it has occurred and has a nonzero probability. Or, an event that everyone agrees is inevitable may be ruled out of analysis due to greed or an unwillingness to admit that it is believed to be inevitable. These human tendencies for error and wishful thinking often affect even the most rigorous applications of the scientific method and are a major concern of the philosophy of science.
All decision-making under uncertainty must consider cognitive bias, cultural bias, and notational bias. No group of people assessing risk is immune to "groupthink": the acceptance of obviously wrong answers simply because it is socially painful to disagree, or where there are conflicts of interest.
Framing involves other information that affects the outcome of a risky decision. The right prefrontal cortex has been shown to take a more global perspective while greater left prefrontal activity relates to local or focal processing.
From the Theory of Leaky Modules McElroy and Seta proposed that they could predictably alter the framing effect by the selective manipulation of regional prefrontal activity with finger tapping or monaural listening. The result was as expected. Rightward tapping or listening had the effect of narrowing attention such that the frame was ignored. This is a practical way of manipulating regional cortical activation to affect risky decisions, especially because directed tapping or listening is easily done.

Psychology of risk taking

A growing area of research has been to examine various psychological aspects of risk taking. Researchers typically run randomised experiments with a treatment and control group to ascertain the effect of different psychological factors that may be associated with risk taking. For example, positive and negative feedback about past risk taking can affect future risk taking. In one experiment, people who were led to believe they were very competent at decision making saw more opportunities in a risky choice and took more risks, while those led to believe they were not very competent saw more threats and took fewer risks.

Other considerations

Risk and uncertainty

In his seminal work Risk, Uncertainty, and Profit, Frank Knight established the distinction between risk and uncertainty.
Thus, Knightian uncertainty is immeasurable, not possible to calculate, while in the Knightian sense risk is measurable.
Another distinction between risk and uncertainty is proposed by Douglas Hubbard:
In this sense, one may have uncertainty without risk but not risk without uncertainty. We can be uncertain about the winner of a contest, but unless we have some personal stake in it, we have no risk. If we bet money on the outcome of the contest, then we have a risk. In both cases there are more than one outcome. The measure of uncertainty refers only to the probabilities assigned to outcomes, while the measure of risk requires both probabilities for outcomes and losses quantified for outcomes.

Risk attitude, appetite and tolerance

The terms risk attitude, appetite, and tolerance are often used similarly to describe an organisation's or individual's attitude towards risk-taking. One's attitude may be described as risk-averse, risk-neutral, or risk-seeking. Risk tolerance looks at acceptable/unacceptable deviations from what is expected. Risk appetite looks at how much risk one is willing to accept. There can still be deviations that are within a risk appetite. For example, recent research finds that insured individuals are significantly more likely to divest from risky asset holdings in response to a decline in health, controlling for variables such as income, age, and out-of-pocket medical expenses.
Gambling is a risk-increasing investment, wherein money on hand is risked for a possible large return, but with the possibility of losing it all. Purchasing a lottery ticket is a very risky investment with a high chance of no return and a small chance of a very high return. In contrast, putting money in a bank at a defined rate of interest is a risk-averse action that gives a guaranteed return of a small gain and precludes other investments with possibly higher gain. The possibility of getting no return on an investment is also known as the rate of ruin.
Risk compensation is a theory which suggests that people typically adjust their behavior in response to the perceived level of risk, becoming more careful where they sense greater risk and less careful if they feel more protected. By way of example, it has been observed that motorists drove faster when wearing seatbelts and closer to the vehicle in front when the vehicles were fitted with anti-lock brakes.

Risk as a vector quantity

Hubbard also argues that defining risk as the product of impact and probability presumes, unrealistically, that decision-makers are risk-neutral. A risk-neutral person's utility is proportional to the expected value of the payoff. For example, a risk-neutral person would consider 20% chance of winning $1 million exactly as desirable as getting a certain $200,000. However, most decision-makers are not actually risk-neutral and would not consider these equivalent choices. This gave rise to prospect theory and cumulative prospect theory. Hubbard proposes to instead describe risk as a vector quantity that distinguishes the probability and magnitude of a risk. Risks are simply described as a set or function of possible payoffs with their associated probabilities. This array is collapsed into a scalar value according to a decision-maker's risk tolerance.
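The vector view can be sketched by keeping a risk as its full set of (probability, payoff) pairs and collapsing it to a scalar only at the end. The square-root utility below is an illustrative stand-in for a risk-averse decision-maker's tolerance, not a formula taken from Hubbard:

```python
import math

# A risk kept as a vector: a set of (probability, payoff) pairs
gamble = [(0.2, 1_000_000), (0.8, 0)]   # 20% chance of $1 million
sure_thing = [(1.0, 200_000)]           # a certain $200,000

def expected_value(risk):
    """Risk-neutral collapse: probability-weighted payoff."""
    return sum(p * x for p, x in risk)

def certainty_equivalent(risk, utility=math.sqrt):
    """Risk-averse collapse: the sure amount with the same expected utility."""
    expected_utility = sum(p * utility(x) for p, x in risk)
    return expected_utility ** 2  # inverts the default square-root utility

# A risk-neutral decision-maker values the two identically,
# but a risk-averse one prefers the sure thing.
neutral_equal = expected_value(gamble) == expected_value(sure_thing)
averse_prefers_sure = certainty_equivalent(gamble) < certainty_equivalent(sure_thing)
```

With the concave utility, the gamble's certainty equivalent falls well below $200,000, reproducing the text's point that most decision-makers would not treat the two choices as equivalent.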

Risk and autonomy in human services

The experience of many people who rely on human services for support is that 'risk' is often used as a reason to prevent them from gaining further independence or fully accessing the community, and that these services are often unnecessarily risk averse. "People's autonomy used to be compromised by institution walls, now it's too often our risk management practices", according to John O'Brien. Michael Fischer and Ewan Ferlie find that contradictions between formal risk controls and the role of subjective factors in human services can undermine service values, so producing tensions and even intractable and 'heated' conflict.

List of related books

This is a list of books about risk issues.
Acceptable Risk, by Baruch Fischhoff, Sarah Lichtenstein, Paul Slovic, Steven L. Derby, and Ralph Keeney (1984)
Against the Gods: The Remarkable Story of Risk, by Peter L. Bernstein (1996)
At Risk: Natural hazards, people's vulnerability and disasters, by Piers Blaikie, Terry Cannon, Ian Davis, and Ben Wisner (1994)
Building Safer Communities: Risk Governance, Spatial Planning and Responses to Natural Hazards, by Urbano Fra Paleo (2009)
Dangerous Earth: An introduction to geologic hazards, by Barbara W. Murck, Brian J. Skinner, and Stephen C. Porter (1998)
Disasters and Democracy, by Rutherford H. Platt (1999)
Earth Shock: Hurricanes, volcanoes, earthquakes, tornadoes and other forces of nature, by W. Andrew Robinson (1993)
Human System Response to Disaster: An Inventory of Sociological Findings, by Thomas E. Drabek (1986)
Judgment Under Uncertainty: Heuristics and biases, by Daniel Kahneman, Paul Slovic, and Amos Tversky (1982)
Mapping Vulnerability: Disasters, development, and people, by Greg Bankoff, Georg Frerks, and Dorothea Hilhorst (2004)
Man and Society in Calamity: The Effects of War, Revolution, Famine, Pestilence upon Human Mind, Behavior, Social Organization and Cultural Life, by Pitirim Sorokin (1942)
Mitigation of Hazardous Comets and Asteroids, by Michael J. S. Belton, Thomas H. Morgan, Nalin H. Samarasinha, and Donald K. Yeomans (2005)
Natural Disaster Hotspots: A global risk analysis, by Maxx Dilley (2005)
Natural Hazard Mitigation: Recasting disaster policy and planning, by David Godschalk, Timothy Beatley, Philip Berke, David Brower, and Edward J. Kaiser (1999)
Natural Hazards: Earth's processes as hazards, disasters, and catastrophes, by Edward A. Keller and Robert H. Blodgett (2006)
Normal Accidents: Living with high-risk technologies, by Charles Perrow (1984)
Paying the Price: The status and role of insurance against natural disasters in the United States, by Howard Kunreuther and Richard J. Roth (1998)
Planning for Earthquakes: Risks, politics, and policy, by Philip R. Berke and Timothy Beatley (1992)
Practical Project Risk Management: The ATOM Methodology, by David Hillson and Peter Simon (2012)
Reduction and Predictability of Natural Disasters, by John B. Rundle, William Klein, and Don L. Turcotte (1996)
Regions of Risk: A geographical introduction to disasters, by Kenneth Hewitt (1997)
Risk Analysis: A quantitative guide, by David Vose (2008)
Risk: An Introduction, by Bernardus Ale (2009)
Risk and Culture: An essay on the selection of technical and environmental dangers, by Mary Douglas and Aaron Wildavsky (1982)
Socially Responsible Engineering: Justice in Risk Management, by Daniel A. Vallero and P. Aarne Vesilind (2006)
Swimming with Crocodiles: The Culture of Extreme Drinking, by Marjana Martinic and Fiona Measham (2008)
The Challenger Launch Decision: Risky Technology, Culture and Deviance at NASA, by Diane Vaughan (1997)
The Environment as Hazard, by Ian Burton, Robert Kates, and Gilbert F. White (1978)
The Social Amplification of Risk, by Nick Pidgeon, Roger E. Kasperson, and Paul Slovic (2003)
What Is a Disaster? New answers to old questions, by Ronald W. Perry and Enrico Quarantelli (2005)
Floods: From Risk to Opportunity, by Ali Chavoshian and Kuniyoshi Takeuchi (2013)
The Risk Factor: Why Every Organization Needs Big Bets, Bold Characters, and the Occasional Spectacular Failure, by Deborah Perry Piscione (2014)
