DIGITAL COMPUTER GENERATIONS
In the electronic computer world, we measure technological advancement by generations: a specific system is said to belong to a specific "generation," and each generation marks a significant change in computer design. The UNIVAC I represents the first generation; currently we are moving toward the fourth generation.
The computers of the first generation (1951-1958) were physically very large machines characterized by the vacuum tube (fig. 1-6). Because they used vacuum tubes, they were very unreliable, required a lot of power, and produced so much heat that adequate air conditioning was critical to protect the computer parts. Compared to today's computers, they had slow input and output devices, were slow in processing, and had small storage capacities. Many of the internal processing functions were measured in thousandths of a second (milliseconds). The software (computer programs) used on first generation computers was unsophisticated and machine-oriented, meaning the programmers had to code all computer instructions and data in actual machine language and keep track of where instructions and data were stored in memory. Using such a machine language (see chapter 3) was efficient for the computer but difficult for the programmer.
Figure 1-6. - First generation computers used vacuum tubes.
The computers of the second generation (1959-1963) were characterized by transistors (fig. 1-7) instead of vacuum tubes. Transistors were smaller, less expensive, generated almost no heat, and required very little power; thus second generation computers were smaller, required less power, and produced far less heat. The use of small, long-lasting transistors also increased processing speeds and reliability, and cost performance improved as well. Storage capacity was greatly increased with the introduction of magnetic disk storage and the use of magnetic cores for main storage. High-speed card readers, printers, and magnetic tape units were also introduced. Internal processing speeds increased; functions were measured in millionths of a second (microseconds). Like a computer of the first generation, a particular computer of the second generation was designed to process either scientific or business-oriented problems, but not both. The software was also improved: symbolic machine languages, or assembly languages, were used instead of actual machine languages. These allowed the programmer to use mnemonic operation codes for instruction operations and symbolic names for storage locations or stored variables. Compiler languages were also developed for second generation computers (see chapter 3).
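The step from machine language to assembly language can be sketched with a short program. The sketch below is purely illustrative: the opcodes, storage addresses, and names (LOAD, PRICE, and so on) are hypothetical and do not belong to any real machine. It shows how an assembler translates mnemonic operation codes and symbolic storage names into the numeric machine language that first generation programmers had to write, and track, by hand.

```python
# Toy assembler sketch (hypothetical opcodes and addresses, not a real machine).
# Mnemonic operation codes and symbolic storage names are translated into
# (opcode, address) machine-language pairs.

OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03}   # hypothetical operation codes
SYMBOLS = {"PRICE": 0x10, "TAX": 0x11, "TOTAL": 0x12}  # symbolic storage locations

def assemble(source):
    """Translate lines of 'MNEMONIC NAME' into numeric (opcode, address) pairs."""
    program = []
    for line in source.strip().splitlines():
        mnemonic, name = line.split()
        program.append((OPCODES[mnemonic], SYMBOLS[name]))
    return program

listing = """
LOAD PRICE
ADD TAX
STORE TOTAL
"""
print(assemble(listing))  # [(1, 16), (2, 17), (3, 18)]
```

The programmer writes LOAD PRICE and lets the assembler remember that PRICE lives at storage location 0x10; in actual machine language the programmer would have written the pair (1, 16) directly and tracked that address by hand.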
Figure 1-7. - Second generation computers used transistors.
The computers of the third generation (1964-1970), many of which are still in use, are characterized by miniaturized circuits, which reduce the physical size of computers even further and increase their durability and internal processing speeds. One design employs solid-state logic microcircuits (fig. 1-8), in which conductors, resistors, diodes, and transistors have been miniaturized and combined on half-inch ceramic squares. Another, smaller design uses silicon wafers on which the circuit and its components are etched. The smaller circuits allow faster internal processing speeds, resulting in faster execution of instructions; internal processing speeds are measured in billionths of a second (nanoseconds). The faster computers make it possible to run jobs that were considered impractical or impossible on first or second generation equipment, and because the miniature components are more reliable, maintenance is reduced. New mass storage, such as the data cell, was introduced during this generation, giving a storage capacity of over 100 million characters. Drum and disk capacities and speeds have been increased, the portable disk pack has been developed, and faster, higher density magnetic tapes have come into use. Considerable improvements were made to card readers and printers, while overall cost has been greatly reduced. Applications using online processing, real-time processing, time sharing, multiprogramming, multiprocessing, and teleprocessing have become widely accepted. More on this in later chapters.
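The progression from milliseconds to microseconds to nanoseconds can be put in perspective with some simple arithmetic. The figures below are representative orders of magnitude taken from the unit names themselves, not measured speeds of any particular machine.

```python
# Back-of-the-envelope arithmetic relating per-operation time to throughput.
# Unit values are orders of magnitude only, not benchmarks of specific machines.

MILLISECOND = 10**-3   # first generation: thousandths of a second
MICROSECOND = 10**-6   # second generation: millionths of a second
NANOSECOND  = 10**-9   # third generation: billionths of a second

def ops_per_second(seconds_per_op):
    """Approximate operations completed per second at a given per-operation time."""
    return round(1 / seconds_per_op)

print(ops_per_second(MILLISECOND))  # 1000
print(ops_per_second(MICROSECOND))  # 1000000
print(ops_per_second(NANOSECOND))   # 1000000000
```

Each generation's unit is a thousand times smaller than the last, so a machine stepping in nanoseconds can, in principle, do in one second what a millisecond-era machine needed over a week of continuous running to accomplish.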
Figure 1-8. - Third generation computers used microcircuits.
Manufacturers of third generation computers produce series of similar and compatible computers, allowing programs written for one computer model to run on most larger models of the same series. Most third generation systems are designed to handle both scientific and business data processing applications. Improved programming and operating software has been designed to provide better control, resulting in faster processing. These enhancements are of particular importance to the computer operator: they simplify system initialization (booting) and minimize the need for the operator to supply inputs to the program from a keyboard (console intervention).
FOURTH GENERATION AND BEYOND
The computers of the fourth generation are not easily distinguished from those of earlier generations, yet there are some striking and important differences. The manufacturing of integrated circuits has advanced to the point where thousands of circuits (active components) can be placed on a silicon wafer only a fraction of an inch in size (the computer on a chip). This has led to what are called large scale integration (LSI) and very large scale integration (VLSI). As a result of this technology, computers are significantly smaller and lower in cost, yet they have retained large memory capacities and are ultrafast. Large mainframe computers are increasingly complex, and medium-sized computers can perform the same tasks as large third generation computers. Entirely new breeds of computers, called microcomputers (fig. 1-9) and minicomputers, are small and inexpensive, and yet they provide a large amount of computing power.
Figure 1-9. - Fourth generation desktop (personal) computer.
What is in store for the future? The computer industry still has a long way to go in the field of miniaturization. You can expect to see the power of large mainframe computers on a single super chip. Massive databases, such as the Navy's supply system, may be written into read-only memory (ROM) on a piece of equipment no bigger than a desktop calculator (more about ROM in chapter 2). The future challenge will be not in increasing the computer's storage or power, but in properly and effectively using the computing power available. This is where software (programs such as assemblers, report generators, subroutine libraries, compilers, operating systems, and applications programs) will come into play (see chapter 3). Some believe that developments in software, and in learning how to use the extraordinarily powerful machines we already possess, will be far more important than further developments in hardware over the next 10 to 20 years. As a result, the next 20 years (during your career) may be even more interesting and surprising than the last 20.
Q.23 In the electronic computer world, technological advancement is measured by what?