Task Control Block


The Task Control Block (TCB) is a data structure containing the state of a task in, for example, OS/360 and its successors running on the IBM System/360 architecture and its successors.

The TCB in OS/360 and successors

In OS/360, OS/VS1, SVS, MVS/370, MVS/XA, MVS/ESA, OS/390 and z/OS, the TCB contains, among other data, non-dispatchability flags and the general and floating-point registers for a task that is not currently assigned to a CPU.
A TCB provides an anchor for a linked list of other, related request blocks (RBs); the top-linked RB for a TCB contains the program status word (PSW) when the task is not assigned to a CPU.
When the control program's dispatcher selects a TCB to be dispatched, the dispatcher loads registers from the TCB and loads the PSW from the top RB of the TCB, thereby dispatching the unit of work.

Request Blocks

OS/360 has the following types of request blocks:
  * Interruption Request Block (IRB)
  * Program Request Block (PRB)
  * System Interruption Request Block (SIRB)
  * Supervisor Request Block (SVRB)
An RB contains several fields, among them an old PSW, old general registers, a PSW and a wait count.
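The relationship between a TCB and its chain of RBs can be sketched in C. This is an illustrative model only: the field names, types and layout below are assumptions for exposition, not the actual IBM control-block mappings.

```c
#include <stdint.h>
#include <assert.h>

/* Illustrative sketch only: names and layout are assumptions,
 * not the real OS/360 control-block formats. */

struct rb {
    uint64_t  psw;          /* PSW used to resume the task when dispatched */
    uint32_t  old_regs[16]; /* general registers saved at interruption */
    uint32_t  wait_count;   /* nonzero while the task is waiting */
    struct rb *next;        /* link toward older RBs on the chain */
};

struct tcb {
    uint32_t  gpr[16];      /* general registers, saved when not on a CPU */
    double    fpr[4];       /* floating-point registers */
    uint32_t  nondispatch;  /* non-dispatchability flags; 0 = dispatchable */
    struct rb *top_rb;      /* most recently linked (top) request block */
};
```

The `top_rb` pointer models the anchor role of the TCB: the dispatcher reads the resume PSW from whichever RB is currently at the top of the chain.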

Dispatching

The Dispatcher is a routine in the nucleus that selects the work to be dispatched. It selects the highest priority task that:
  1. Is not running on another CPU;
  2. Does not have any non-dispatchability flags set;
  3. Has a top RB with a zero wait count.
The system maintains a pair of TCB pointers known as TCB old and TCB new. A TCB new pointer of zero causes the dispatcher to search for an eligible task.
When the dispatcher finds an eligible task, it sets the old and new TCB pointers, loads the registers from the TCB and loads the PSW from the top RB.
If the dispatcher fails to find eligible work, it enters an enabled wait.
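The selection logic described above can be sketched in C, assuming a simple priority-ordered scan over an array of TCBs; the structure fields and function name here are invented for illustration and do not reflect the actual dispatcher code.

```c
#include <stddef.h>
#include <assert.h>

/* Simplified, hypothetical representations for illustration only. */
struct rb  { unsigned wait_count; };
struct tcb {
    int       priority;        /* higher value = higher priority */
    int       on_another_cpu;  /* task is currently running elsewhere */
    unsigned  nondispatch;     /* non-dispatchability flags; 0 = none set */
    struct rb *top_rb;
};

/* Return the highest-priority dispatchable TCB, or NULL, meaning the
 * dispatcher should load an enabled wait PSW. */
struct tcb *select_task(struct tcb *tasks[], size_t n) {
    struct tcb *best = NULL;
    for (size_t i = 0; i < n; i++) {
        struct tcb *t = tasks[i];
        if (t->on_another_cpu)             continue; /* rule 1 */
        if (t->nondispatch != 0)           continue; /* rule 2 */
        if (t->top_rb->wait_count != 0)    continue; /* rule 3 */
        if (best == NULL || t->priority > best->priority)
            best = t;
    }
    return best;
}
```

A NULL return corresponds to the enabled-wait case: no task passed all three tests, so the CPU has nothing to dispatch.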

History

With the introduction of MVS/370 and successor systems, a whole new dispatchable unit was introduced: the Service Request Block (SRB), which generally has a higher priority than any Task Control Block, and which itself has two distinct priorities, global and local. MVS's dispatcher must manage all of these with absolute consistency across multiple processors: as many as two in MVS/370, and as many as sixteen in later systems.