IBM Future Systems project
The Future Systems project was a research and development project undertaken at IBM in the early 1970s to develop a revolutionary line of computer products, including new software models that would simplify software development by exploiting modern, powerful hardware. The new systems were intended to replace the System/370 in the market some time in the late 1970s.
There were two key components to FS. The first was the use of a single-level store, which allows data held on secondary storage like disk drives to be referred to within a program as if it were data in main memory; variables in the code could point to objects in storage and those objects would be loaded into memory invisibly, eliminating the need to write code for file handling. The second was to include instructions corresponding to the statements of high-level programming languages, allowing the system to run programs directly without the need for a compiler to convert from the language to machine code. One could, for instance, write a program in a text editor and the machine would be able to run it directly.
Combining the two concepts in a single system in a single step proved to be an impossible task. Engineers pointed out this risk from the start, but management and project leaders ignored it for a variety of reasons. Officially started in the fall of 1971, the project was moribund by 1974 and was formally cancelled in February 1975. The single-level store was implemented in the System/38 in 1978 and moved to other systems in the lineup after that, but the concept of a machine that directly ran high-level languages has never appeared in an IBM product.
History
370
The System/360 was announced in April 1964. Only six months later, IBM began a study project on what trends were taking place in the market and how these should be used in a series of machines that would replace the 360 in the future. One significant change was the introduction of useful integrated circuits, which would allow the many individual components of the 360 to be replaced with a smaller number of ICs. This would allow a more powerful machine to be built for the same price as existing models.

By the mid-1960s, the 360 had become a massive best-seller. This influenced the design of the new machines, as it led to demands that they have complete backward compatibility with the 360 series. When the machines were announced in 1970, now known as the System/370, they were essentially 360s using small-scale ICs for logic, with much larger amounts of internal memory and other relatively minor changes. A few new instructions were added and others cleaned up, but the system was largely identical from the programmer's point of view.
The recession of 1969–1970 led to slowing sales in 1970–71 and much smaller orders for the 370 compared to the rapid uptake of the 360 five years earlier. For the first time in decades, IBM's growth stalled. While some in the company began efforts to introduce useful improvements to the 370 as soon as possible to make the line more attractive, others felt nothing short of a complete reimagining of the system would work in the long term.
Replacing the 370
Two months before the announcement of the 370, the company once again started considering changes in the market and how they would influence future designs. In 1965, Gordon Moore predicted that integrated circuits would see exponential growth in the number of circuits they supported, today known as Moore's Law. IBM's Jerrier A. Haddad wrote a memo on the topic, suggesting that the cost of logic and memory was going to zero faster than it could be measured.

An internal Corporate Technology Committee study concluded that a 30-fold reduction in the price of memory would take place over the next five years, and another 30-fold in the five years after that. If IBM was going to maintain its sales figures, it would have to sell 30 times as much memory in five years, and 900 times as much five years later. Similarly, the cost of hard disk storage was expected to fall tenfold over the next ten years. To maintain its traditional 15% year-over-year growth, by 1980 the company would have to be selling 40 times as much disk space and 3600 times as much memory.
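These multiples follow from simple compounding; a worked reconstruction of the arithmetic (an illustration, not a quotation from the study) is:

```latex
% Illustrative reconstruction of the projection arithmetic
\begin{align*}
  1.15^{10} &\approx 4            && \text{revenue growth over ten years at 15\% per year} \\
  4 \times 10 &= 40               && \text{relative disk capacity to be sold by 1980} \\
  4 \times (30 \times 30) &= 3600 && \text{relative memory capacity to be sold by 1980}
\end{align*}
```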
In terms of the computer itself, if one followed the progression from the 360 to the 370 and on to some hypothetical System/380, the new machines would be based on large-scale integration and would be dramatically reduced in complexity and cost. There was no way they could sell such a machine at their current pricing; if they tried, another company would introduce far less expensive systems. They could instead produce much more powerful machines at the same price points, but their customers were already underutilizing their existing systems. To make a convincing case for buying a new high-end machine, IBM had to give its customers reasons to need the extra power.
Another strategic issue was that while the cost of computing was steadily going down, the costs of programming and operations, being made up largely of personnel costs, were steadily going up. The portion of a customer's IT budget available to hardware vendors would therefore shrink significantly in the coming years, and with it the base for IBM's revenue. It was imperative that IBM address the cost of application development and operations in its future products, reducing the customer's total cost of IT while capturing a larger portion of that spending.
AFS
In 1969, Bob O. Evans, president of the IBM System Development Division, which developed the company's largest mainframes, asked Erich Bloch of the IBM Poughkeepsie Lab to consider how the company might use these much cheaper components to build machines that would still retain its profits. Bloch, in turn, asked Carl Conti to outline such systems. Having seen the term "future systems" being used, Evans referred to the group as Advanced Future Systems. The group met roughly biweekly.

Among the many developments initially studied under AFS, one concept stood out. At the time, the first systems with virtual memory were emerging, and the seminal Multics project had expanded on this concept as the basis for a single-level store. In this concept, all data in the system is treated as if it were in main memory; if the data is physically located on secondary storage, the virtual memory system automatically loads it into memory when a program calls for it. Instead of writing code to read and write data in files, the programmer simply told the operating system they would be using certain data, which then appeared as objects in the program's memory and could be manipulated like any other variable. The VM system would ensure that the data was synchronized with storage when needed.
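The programming model this implies can be illustrated with a modern analogue: memory-mapping a file with POSIX mmap, which lets a program treat file contents as ordinary in-memory data while the operating system pages it in and writes changes back. The C sketch below is only an illustration of that idea on a present-day system; it is not FS or Multics code, and the file name is hypothetical.

```c
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Illustrative analogue of a single-level store on a modern POSIX system:
   the file's contents are mapped into the address space and manipulated
   through an ordinary pointer, rather than through read()/write() calls.
   The operating system pages the data in on demand and writes changes back. */
int main(void)
{
    int fd = open("records.dat", O_RDWR);   /* hypothetical data file */
    if (fd < 0)
        return 1;

    struct stat st;
    if (fstat(fd, &st) != 0)
        return 1;

    int32_t *records = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
    if (records == MAP_FAILED)
        return 1;

    records[0] += 1;   /* what would be "file I/O" becomes a memory update */

    munmap(records, st.st_size);
    close(fd);
    return 0;
}
```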
This was seen as a particularly useful concept at the time, as the emergence of bubble memory suggested that future systems would not have separate core memory and disk drives; instead, everything would be stored in a large amount of bubble memory. Physically, systems would be single-level stores, so the idea of having another layer of "files" representing separate storage made no sense. Having pointers into a single large memory would not only mean one could simply refer to any data as if it were local, but would also eliminate the need for separate application programming interfaces for the same data depending on whether or not it was loaded.
HLS
Evans also asked John McPherson at IBM's Armonk headquarters to chair another group to consider how IBM would offer these new designs across its many divisions. A group of twelve participants spread across three divisions produced the "Higher Level System Report", or HLS, which was delivered on 25 February 1970. A key component of HLS was the idea that programming was more expensive than hardware: if a system could greatly reduce the cost of development, then that system could be sold for more money, as the overall cost of operation would still be lower than the competition's.

The basic concept of the System/360 series was that a single instruction set architecture would be defined that offered every instruction the assembly language programmer might desire. Whereas previous systems might be dedicated to scientific programming or currency calculations and had instructions for that sort of data, the 360 offered instructions for both of these and practically every other task. Individual machines were then designed to target particular workloads, running those instructions directly in hardware and implementing the others in microcode. This meant any machine in the 360 family could run programs from any other, just faster or slower depending on the task. This proved enormously successful, as a customer could buy a low-end machine and always upgrade to a faster one in the future, knowing all their applications would continue to run.
Although the 360's instruction set was large, those instructions were still low-level, representing single operations that the central processing unit would perform, like "add two numbers" or "compare this number to zero". Programming languages and their links to the operating system allowed users to type in programs using high-level concepts like "open file" or "add these arrays". The compilers would convert these higher-level abstractions into a series of machine code instructions.
For HLS, the instructions would instead represent those higher-level tasks directly. That is, there would be an instruction in the machine code for "open file". If a program called this instruction, there was no need to convert it into lower-level code; the machine would carry out the operation internally, in microcode or even a direct hardware implementation. This worked hand-in-hand with the single-level store: to implement HLS, every piece of data in the system was paired with a descriptor, a record that contained the type of the data, its location in memory, and its precision and size. As descriptors could point to arrays and record structures as well, this allowed the machine language to process these as atomic objects.
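A descriptor of this kind can be pictured as a small record. The C structure below is a hypothetical sketch based only on the fields named above (type, location, precision, size); the field names and layout are illustrative and not taken from any FS specification.

```c
#include <stddef.h>

/* Hypothetical sketch of an FS-style descriptor, based only on the fields
   described above; names and layout are illustrative, not historical. */
enum data_type { DT_INTEGER, DT_DECIMAL, DT_CHARACTER, DT_ARRAY, DT_RECORD };

struct descriptor {
    enum data_type     type;      /* what kind of object this describes       */
    void              *location;  /* where the data lives in the single-level
                                     store (memory and storage are unified)   */
    unsigned           precision; /* e.g. digits or bits for numeric items    */
    size_t             size;      /* total size, or element count for arrays  */
    struct descriptor *element;   /* for arrays and records: descriptor of
                                     the contained items, allowing nesting    */
};
```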
By representing these much higher-level objects directly in the system, user programs would be much smaller and simpler. For instance, to add two arrays of numbers held in files, in traditional languages one would generally open the two files, read one item from each, add them, and then store the value to a third file. In the HLS approach, one would simply open the files and call add. The underlying operating system would map them into memory and create descriptors showing them both to be arrays, and the add instruction would see that they were arrays and add all the values together. Assigning the result to a newly created array would have the effect of writing it back to storage. A program that might take a page or so of code was reduced to a few lines. Moreover, as this was the natural language of the machine, the command shell was itself programmable in the same way; there would be no need to "write a program" for a simple task like this, as it could be entered as a command.
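The contrast is easiest to see in code. The C sketch below shows the traditional, explicit version of the task just described, assuming hypothetical files of 32-bit integers; under HLS, as the report envisioned it, the same task would reduce to opening the two files and issuing a single add.

```c
#include <stdint.h>
#include <stdio.h>

/* Traditional approach to the task described above: open two files of
   numbers, add them element by element, and write the sums to a third file.
   Under HLS this would collapse to opening the files and issuing one "add",
   with descriptors identifying both operands as arrays. */
int add_files(const char *a_path, const char *b_path, const char *out_path)
{
    FILE *a = fopen(a_path, "rb");
    FILE *b = fopen(b_path, "rb");
    FILE *out = fopen(out_path, "wb");
    if (!a || !b || !out)
        return -1;

    int32_t x, y;
    while (fread(&x, sizeof x, 1, a) == 1 &&
           fread(&y, sizeof y, 1, b) == 1) {
        int32_t sum = x + y;
        fwrite(&sum, sizeof sum, 1, out);
    }

    fclose(a);
    fclose(b);
    fclose(out);
    return 0;
}
```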
The report concluded: