Interference freedom


In computer science, interference freedom is a technique for proving partial correctness of
concurrent programs with shared variables. Hoare logic had been introduced earlier
to prove correctness of sequential programs. In her PhD thesis under advisor David Gries, Susan Owicki
extended this work to apply to concurrent programs.
Concurrent programming had been in use since the mid 1960s for coding operating systems as sets
of concurrent processes, but there was no
formal mechanism for proving correctness. Reasoning about the interleaved execution
sequences of the individual processes was difficult, error prone,
and did not scale. Interference freedom
applies to proofs instead of execution sequences;
one shows that execution of one process cannot interfere with the correctness
proof of another process.
A range of intricate concurrent programs have been proved correct using interference
freedom, and interference freedom provides the basis for much of the ensuing work on
developing concurrent programs with shared variables and proving them correct.
The Owicki-Gries paper "An axiomatic proof technique for parallel programs I"
received the 1977 ACM Award for best paper in programming languages and
systems.
Note. Lamport
presents a similar idea. He writes, "After writing the
initial version of this paper, we learned of the recent work of
Owicki."
His paper has not received as much attention as Owicki-Gries, perhaps because it used
flow charts instead of the text of programming constructs like the if statement and while loop.
Lamport was generalizing Floyd's method while Owicki-Gries was generalizing
Hoare's method.
Essentially all later work in this area uses text and not flow charts.
Another difference is mentioned below in the
section on Auxiliary variables.

Dijkstra's Principle of non-interference

Dijkstra introduced the principle of non-interference in EWD 117,
"Programming Considered as a Human Activity", written about 1965.
This principle states that:
The correctness of the whole can be established by taking into
account only the exterior specifications of the parts, and not their
interior construction. Dijkstra outlined the general steps in using this principle:
  1. Give a complete spec of each individual part.
  2. Check that the total problem is solved when program parts meeting their specs are available.
  3. Construct the individual parts to satisfy their specs, but independent of one another and the context in which they will be used.
He gave several examples of this principle outside of programming, but
its use in programming was his main concern. For example, a programmer using a method should rely only on its spec to determine what it does and how to call it, and never on its implementation.
Program specs are written in Hoare logic, introduced by Sir Tony Hoare, as exemplified in the specs of processes S1 and S2:

  {P1} S1 {Q1}
  {P2} S2 {Q2}

Meaning: If execution of Si in a state in which precondition Pi is true terminates, then upon termination, postcondition Qi
is true.
Now consider concurrent programming with shared variables. The specs of two processes S1 and S2 are given in terms of their pre- and postconditions,
and we assume that implementations of S1 and S2 are given that satisfy their
specs. But when their implementations are executed in parallel, since they share variables,
a race condition can occur: one process changes a shared variable to a value
that is not anticipated in the proof of the other process, so the other
process does not work as intended.
Thus, Dijkstra's Principle of non-interference is violated.
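The race just described can be made concrete by enumerating the interleavings of two processes that each perform a read-modify-write on a shared variable. The following sketch is illustrative only (the action names and the toy state model are assumptions of the example, not part of the Owicki-Gries formalism):

```python
from itertools import permutations

# Each process increments shared x via two atomic actions:
# load x into a private temp, then store temp + 1 back to x.
def run(schedule):
    state = {"x": 0, "t1": None, "t2": None}
    actions = {
        "load1":  lambda s: s.update(t1=s["x"]),
        "store1": lambda s: s.update(x=s["t1"] + 1),
        "load2":  lambda s: s.update(t2=s["x"]),
        "store2": lambda s: s.update(x=s["t2"] + 1),
    }
    for a in schedule:
        actions[a](state)
    return state["x"]

def interleavings():
    # All orderings of the four actions that preserve each process's
    # own program order.
    for p in permutations(["load1", "store1", "load2", "store2"]):
        if p.index("load1") < p.index("store1") and \
           p.index("load2") < p.index("store2"):
            yield p

results = {run(s) for s in interleavings()}
print(results)  # {1, 2}: the lost-update interleavings make x = 1 possible
```

Reasoning over all such interleavings directly is exactly the approach that does not scale; interference freedom replaces it with checks on proofs.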
In her 1975 PhD thesis in computer science at Cornell University, written under
advisor David Gries, Susan Owicki developed the notion of interference freedom.
If processes S1 and S2 satisfy interference freedom, then their parallel execution will work as planned.
Dijkstra called this work the first significant step toward applying Hoare logic to concurrent
processes.
To simplify discussions, we restrict attention to only two concurrent processes,
although Owicki-Gries allows more.

Interference freedom in terms of proof outlines

Owicki-Gries introduced the proof outline
for a Hoare triple {P} S {Q}. It contains all details needed for a proof of correctness of
{P} S {Q} using the axioms and inference rules of Hoare logic. Hoare alluded to proof outlines in his early work; for interference freedom, the notion had to be formalized.
A proof outline for {P} S {Q} begins with precondition {P} and ends with postcondition {Q}.
Two assertions within braces appearing next to each other indicate that the first must imply the second.
Example: A proof outline for {x = 0} S {x = 2}, where S is the sequence x := x+1; x := x+1:

  {x = 0}
  {x + 1 = 1}
  x := x + 1;
  {x = 1}
  {x + 1 = 2}
  x := x + 1
  {x = 2}

For each assignment statement x := e with precondition {R_e^x} and postcondition {R}, the Hoare triple

  {R_e^x} x := e {R}

must hold,
where R_e^x stands for R with every
occurrence of x replaced by e.
Each statement in the proof outline is preceded by a precondition and followed by a postcondition, and
each such triple must be provable using some axiom or inference rule of Hoare logic. Thus, the proof outline contains all the information necessary to prove that {P} S {Q} is correct.
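Because a proof outline carries all the needed information, a small outline can even be checked mechanically. The sketch below verifies the implications and assignment triples of a two-assignment outline by brute force over a finite domain; the outline, the domain, and all names are assumptions of the example:

```python
# Brute-force check of a straight-line proof outline over a small
# finite domain for the single variable x (illustrative sketch only).
STATES = range(-2, 6)

# Outline for {x = 0} x := x+1; x := x+1 {x = 2}; assertions are
# predicates, statements are ("text", state transformer) pairs.
outline = [
    lambda x: x == 0,             # precondition
    lambda x: x + 1 == 1,         # adjacent assertion: must be implied
    ("x := x+1", lambda x: x + 1),
    lambda x: x == 1,
    lambda x: x + 1 == 2,
    ("x := x+1", lambda x: x + 1),
    lambda x: x == 2,             # postcondition
]

def implies(p, q):
    # Adjacent assertions: the first must imply the second.
    return all(q(x) for x in STATES if p(x))

def triple(pre, assign, post):
    # {pre} x := e {post}: run the assignment from every pre-state.
    return all(post(assign(x)) for x in STATES if pre(x))

ok = True
for i, item in enumerate(outline):
    if isinstance(item, tuple):
        ok = ok and triple(outline[i - 1], item[1], outline[i + 1])
    elif i + 1 < len(outline) and not isinstance(outline[i + 1], tuple):
        ok = ok and implies(item, outline[i + 1])
print(ok)  # True
```

Enumerating states stands in for the logical reasoning here; the Hoare-logic rules themselves work symbolically and over unbounded domains.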
Now consider two processes S1 and S2 executing in parallel, and their specs:

  {P1} S1 {Q1}
  {P2} S2 {Q2}

Proving that they work suitably in parallel requires restricting them as follows.
Each expression E in S1 or S2 may refer to at most one variable y that can be
changed by the other process while E is being evaluated, and E may refer to y at most once.
A similar restriction holds for assignment statements.
With this convention, the only indivisible action need be the memory reference. For example, suppose process S1 references variable y while S2 changes y. The value S1 receives for y
must be the value before or after S2 changes y, and not some spurious in-between value.
Definition of Interference-free
The important innovation of Owicki-Gries was to define what it means for a statement T not to interfere with the proof of {P} S {Q}. If execution of T cannot falsify any assertion given in the proof
outline of {P} S {Q}, then that proof still holds even in the face of concurrent execution of S and T.
Definition. Statement T with precondition pre(T)
does not interfere with the proof of {P} S {Q} if two conditions hold:

  1. {Q ∧ pre(T)} T {Q}
  2. Let S' be any statement within S but not within an await statement.
     Then {pre(S') ∧ pre(T)} T {pre(S')}.

Read the last Hoare triple like this: If the state is such that both T and S' can
be executed, then execution of T is not going to falsify pre(S').
Definition. Proof outlines for {P1} S1 {Q1} and {P2} S2 {Q2} are interference-free if the following holds.
Let T be an await or assignment statement of process S1 that does not appear within an await statement.
Then T does not interfere with the proof of {P2} S2 {Q2}.
Similarly for T of process S2 and {P1} S1 {Q1}.
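On a small finite domain, these interference tests can be checked exhaustively. The sketch below uses the familiar two-process example cobegin x := x+1 // x := x+2 coend with precondition x = 0; the disjunctive assertions, the domain, and the function names are assumptions of this illustration:

```python
# Toy exhaustive check of Owicki-Gries interference freedom over a
# small finite domain (illustrative sketch, not the general method).
STATES = range(0, 8)

def no_interference(assertion, pre_T, action_T):
    # Check {assertion and pre_T} T {assertion} by enumeration.
    return all(assertion(action_T(x))
               for x in STATES if assertion(x) and pre_T(x))

# Proof outline assertions for x := x+1 (process S1) and x := x+2
# (process S2), each list holding the statement's pre- and postcondition.
S1_assertions = [lambda x: x in (0, 2), lambda x: x in (1, 3)]
S2_assertions = [lambda x: x in (0, 1), lambda x: x in (2, 3)]

def inc1(x): return x + 1   # atomic action of S1, precondition S1_assertions[0]
def inc2(x): return x + 2   # atomic action of S2, precondition S2_assertions[0]

# Each process's action must leave every assertion of the other
# process's outline intact.
free = (all(no_interference(a, S2_assertions[0], inc2) for a in S1_assertions)
        and all(no_interference(a, S1_assertions[0], inc1) for a in S2_assertions))
print(free)  # True: the two proof outlines are interference-free
```

With interference freedom established, the parallel composition satisfies the conjoined postcondition x ∈ {1, 3} ∧ x ∈ {2, 3}, i.e. x = 3.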

Statements cobegin and await

Two statements were introduced to deal with concurrency. Execution of the statement cobegin S1 // S2 coend executes S1 and S2 in parallel. It terminates when both S1 and S2 have terminated.
Execution of the statement await B then S is delayed until condition B is true.
Then, statement S is executed as an indivisible action, with evaluation of B part of that indivisible action. If two processes are waiting for the same condition B, when it becomes true, one of them continues waiting while the other proceeds.
The await statement cannot be implemented efficiently and is not proposed to be inserted into the programming language. Rather, it provides a means of representing several standard primitives such as semaphores: first express the semaphore operations as await statements, then apply the techniques described here.
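As one illustration of expressing a primitive this way, the sketch below encodes the semaphore operations P (await s > 0 then s := s - 1) and V (await true then s := s + 1) on top of Python's threading.Condition. The class and helper names are invented for the example; await remains a proof-level construct in the paper, not a language feature:

```python
import threading

class SharedState:
    """Shared variables plus a condition variable guarding them."""
    def __init__(self):
        self.cond = threading.Cond = threading.Condition()
        self.vars = {}

    def await_then(self, test, body):
        # await test(vars) then body(vars): the wait and the body
        # execute under one lock, approximating indivisibility.
        with self.cond:
            self.cond.wait_for(lambda: test(self.vars))
            body(self.vars)
            self.cond.notify_all()

state = SharedState()
state.vars["s"] = 1  # semaphore initialized to 1

def P():
    # P(s): await s > 0 then s := s - 1
    state.await_then(lambda v: v["s"] > 0,
                     lambda v: v.__setitem__("s", v["s"] - 1))

def V():
    # V(s): await true then s := s + 1
    state.await_then(lambda v: True,
                     lambda v: v.__setitem__("s", v["s"] + 1))

P(); V(); P()
print(state.vars["s"])  # 0
```

Real implementations busy-wait far less than a literal await would suggest, which is precisely why await is a specification device rather than a proposed language construct.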
Inference rules for await and cobegin are:

  From {P ∧ B} S {Q},
  infer {P} await B then S {Q}.

  From interference-free proof outlines for {P1} S1 {Q1} and {P2} S2 {Q2},
  infer {P1 ∧ P2} cobegin S1 // S2 coend {Q1 ∧ Q2}.
Auxiliary variables

An auxiliary variable does not occur in the program but is introduced in the proof of correctness to make reasoning simpler, or even possible. Auxiliary variables are used only in assignments to auxiliary variables, so their introduction neither alters the
program for any input nor affects the values of program variables. Typically,
they are used either as program counters or to record histories of a computation.
Definition. Let AV be a set of variables that appear in S only in assignments
x := e, where x is in AV. Then AV is an auxiliary variable set for S.
Since the variables of an auxiliary variable set AV are used only in assignments to variables in AV, deleting all assignments to them does not change the program's correctness, and we have the inference rule AV elimination:

  AV is an auxiliary variable set for S,
  the variables in AV do not occur in P or Q, and
  S' is obtained from S by deleting all assignments to the variables in AV;
  then from {P} S {Q}, infer {P} S' {Q}.
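The effect of eliminating auxiliary variables can be sketched concretely: a history variable records successive values of a program variable, and deleting its assignments leaves the program variables untouched. The program and names below are invented for the illustration:

```python
# Auxiliary variable h records the history of x; it appears only in
# assignments to h, so deleting those assignments cannot change x
# (toy illustration of the AV-elimination idea).

def with_aux(x):
    h = []            # auxiliary: used only in assignments to h
    x = x + 1
    h.append(x)       # history assignment, deletable
    x = x * 2
    h.append(x)       # history assignment, deletable
    return x, h

def without_aux(x):
    # Same program with all assignments to h deleted.
    x = x + 1
    x = x * 2
    return x

x1, h = with_aux(3)
x2 = without_aux(3)
print(x1, x2, h)  # 8 8 [4, 8]: program variable unchanged by deletion
```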
Instead of using auxiliary variables, one can introduce a program counter into the proof system,
but that adds complexity to the proof system.
Note:
Apt discusses the Owicki-Gries logic in the context of recursive assertions, that is,
effectively computable assertions. He proves that all the assertions in proof outlines
can be recursive, but that this is no longer the case if auxiliary variables are used only as program counters and not to record histories of the computation.
Lamport, in his similar work, uses assertions about token positions instead of auxiliary variables, where a token on an edge of a flow chart is akin to a program counter; there is no notion of a history variable.
This indicates that the Owicki-Gries and Lamport approaches are not equivalent
when restricted to recursive assertions.

Deadlock and termination

Owicki-Gries deals mainly with partial correctness:
{P} S {Q} means: If execution of S in
a state in which P is true terminates, then Q is true of the state upon termination.
However, Owicki-Gries also gives some practical techniques that use information obtained
from a partial correctness proof to derive other correctness properties,
including freedom from deadlock, program termination, and mutual exclusion.
A program is in deadlock if all processes that have not terminated are executing
await statements
and none can proceed because their conditions are false.
Owicki-Gries provides conditions under which deadlock cannot occur.
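The definition of deadlock above can be phrased as a simple predicate on a program configuration; the data layout and names in this sketch are assumptions of the illustration:

```python
# A configuration is deadlocked when at least one process has not
# terminated and every unterminated process sits at an await whose
# condition is false in the current state (toy model).

def deadlocked(processes, state):
    waiting = [p for p in processes if not p["done"]]
    return bool(waiting) and all(not p["cond"](state) for p in waiting)

state = {"s": 0}  # semaphore value
procs = [
    {"done": False, "cond": lambda st: st["s"] > 0},  # blocked at P(s)
    {"done": True,  "cond": None},                    # already terminated
]
print(deadlocked(procs, state))  # True: the one live process is stuck
```

A proof of deadlock freedom shows that no reachable configuration satisfies this predicate.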
Owicki-Gries presents an inference rule for total correctness of the while loop. It uses
a bound function that decreases with each iteration and is positive as long as the
loop condition is true. Apt et al. show that this new inference rule does not satisfy interference freedom: the fact that the bound function is positive as long as the loop condition is true was not
included in an interference test. They show two ways to rectify this mistake.