Invention of the integrated circuit

Source: Wikipedia, the free encyclopedia.

The first planar monolithic integrated circuit (IC) chip was demonstrated in 1960.

During the 1950s, Sidney Darlington and Yasuo Tarui (Electrotechnical Laboratory) proposed similar chip designs where several transistors could share a common active area, but there was no electrical isolation to separate them from each other.

These ideas could not be implemented by industry until a breakthrough came in late 1958. Three people from three U.S. companies solved three fundamental problems that hindered the production of integrated circuits.

In 1958, Jack Kilby of Texas Instruments created the first working integrated circuit, but it was a hybrid IC rather than a monolithic integrated circuit (monolithic IC) chip.[1] Between late 1958 and early 1959, Kurt Lehovec of Sprague Electric Company developed a way to electrically isolate components on a semiconductor crystal, using p–n junction isolation.

The first monolithic IC chip was invented by Robert Noyce of Fairchild Semiconductor.[2][3] He invented a way to connect the IC components (aluminium metallization) and proposed an improved version of insulation based on the planar process technology developed by Jean Hoerni. On September 27, 1960, using the ideas of Noyce and Hoerni, a group led by Jay Last at Fairchild Semiconductor created the first operational semiconductor IC. Texas Instruments, which held the patent for Kilby's invention, started a patent war, which was settled in 1966 with a cross-licensing agreement.

There is no consensus on who invented the IC. The American press of the 1960s named four people: Kilby, Lehovec, Noyce and Hoerni; in the 1970s the list was shortened to Kilby and Noyce. Kilby was awarded the 2000 Nobel Prize in Physics "for his part in the invention of the integrated circuit".[4] In the 2000s, historians Leslie Berlin,[a] Bo Lojek[b] and Arjun Saxena[c] reinstated the idea of multiple IC inventors and revised the contribution of Kilby. Modern IC chips are based on Noyce's monolithic IC,[2][3] rather than Kilby's hybrid IC.[1]

Prerequisites

Waiting for a breakthrough

Replacing vacuum tubes in the ENIAC computer. By the 1940s, some computational devices had reached the level at which the losses from failures and downtime outweighed the economic benefits.

During and immediately after World War II, a phenomenon named "the tyranny of numbers" was noticed: some computational devices reached a level of complexity at which the losses from failures and downtime exceeded the expected benefits.

The Boeing B-29 (put into service in 1944) carried 300–1,000 vacuum tubes and tens of thousands of passive components.[d] The number of vacuum tubes reached thousands in advanced computers and more than 17,000 in the ENIAC (1946).[e] Each additional component reduced the reliability of a device and lengthened the troubleshooting time.[7] Traditional electronics had reached a deadlock; further development of electronic devices required reducing the number of their components.

The invention of the first transistor in 1947 led to the expectation of a new technological revolution. Fiction writers and journalists heralded the imminent appearance of "intelligent machines" and the robotization of all aspects of life.[8] Although transistors did reduce size and power consumption, they could not solve the reliability problem of complex electronic devices. On the contrary, the dense packing of components in small devices hindered their repair.[7] While the reliability of discrete components was brought to its theoretical limit in the 1950s, there was no improvement in the connections between the components.[9]

Idea of integration

Early development of the integrated circuit goes back to 1949, when the German engineer Werner Jacobi (Siemens AG)[10] filed a patent for an integrated-circuit-like semiconductor amplifying device[11] showing five transistors on a common substrate in a three-stage amplifier arrangement, with two transistors working "upside-down" as an impedance converter. Jacobi disclosed small and cheap hearing aids as typical industrial applications of his patent. No immediate commercial use of his patent has been reported.

On May 7, 1952, the British radio engineer Geoffrey Dummer formulated the idea of integration in a public speech in Washington:

With the advent of the transistor and the work in semiconductors generally, it seems now to be possible to envisage electronic equipment in a solid block with no connecting wires. The block may consist of layers of insulating, conducting, rectifying and amplifying materials, the electrical functions being connected by cutting out areas of the various layers.[12][13]

Johnson's integrated oscillator (1953; variants with lumped and distributed capacitances). The inductances L, the load resistor Rk, and the supply sources Бк and Бб (collector and base batteries) are external; Uвых denotes the output voltage.

Dummer later became famous as "the prophet of integrated circuits", but not as their inventor. In 1956 he produced an IC prototype by growth from the melt, but his work was deemed impractical by the UK Ministry of Defence,[13] because of the high cost and inferior parameters of the IC compared to discrete devices.[14]

In May 1952, Sidney Darlington filed a patent application in the United States for a structure with two or three transistors integrated onto a single chip in various configurations; in October 1952, Bernard Oliver filed a patent application for a method of manufacturing three electrically connected planar transistors on one semiconductor crystal.[15][16]

On May 21, 1953, Harwick Johnson filed a patent application for a method of forming various electronic components – transistors, resistors, lumped and distributed capacitances – on a single chip. Johnson described three ways of producing an integrated one-transistor oscillator. All of them used a narrow strip of semiconductor with a bipolar transistor at one end and differed in the method of producing the transistor. The strip acted as a series of resistors; the lumped capacitors were formed by fusion, whereas reverse-biased p–n junctions acted as distributed capacitors.[17] Johnson did not offer a technological procedure, and it is not known whether he produced an actual device. In 1959, a variant of his proposal was implemented and patented by Jack Kilby.[15]

In 1957, Yasuo Tarui, at the Electrotechnical Laboratory, proposed a "quadrapole" transistor, combining a unipolar (field-effect) transistor and a bipolar junction transistor on the same chip. These early devices featured designs where several transistors could share a common active area, but there was no electrical isolation to separate them from each other.[18]

Functional electronics

The leading US electronics companies (Bell Labs, IBM, RCA and General Electric) sought a solution to "the tyranny of numbers" in the development of discrete components that implemented a given function with a minimum number of attached passive elements.[19] During the vacuum-tube era, this approach made it possible to reduce the cost of a circuit at the expense of its operating frequency. For example, a memory cell of the 1940s consisted of two triodes and a dozen passive components and ran at frequencies up to 200 kHz. A megahertz response could be achieved with two pentodes and six diodes per cell. Such a cell could be replaced by a single thyratron with a load resistor and an input capacitor, but the operating frequency of that circuit did not exceed a few kHz.[20]

In 1952, Jewell James Ebers of Bell Labs developed a prototype solid-state analog of the thyratron – a four-layer transistor, or thyristor.[21] William Shockley simplified its design to a two-terminal "four-layer diode" (the Shockley diode) and attempted its industrial production.[22] Shockley hoped that the new device would replace the polarized relay in telephone exchanges;[23] however, the reliability of Shockley diodes was unacceptably low, and his company went into decline.

At the same time, work on thyristor circuits was carried out at Bell Labs, IBM and RCA. Ian Munro Ross and L. Arthur D'Asaro (Bell Labs) experimented with thyristor-based memory cells.[24] Joe Logue and Rick Dill (IBM) were building counters using unijunction transistors.[25] J. Torkel Wallmark and Harwick Johnson (RCA) used both thyristors and field-effect transistors. The efforts of 1955–1958, which used germanium thyristors, were fruitless.[26] Only in the summer of 1959, after the inventions of Kilby, Lehovec and Hoerni became publicly known, did D'Asaro report an operational shift register based on silicon thyristors. In this register, one crystal containing four thyristors replaced eight transistors, 26 diodes and 27 resistors. The area of each thyristor ranged from 0.2 to 0.4 mm², with a thickness of about 0.1 mm. The circuit elements were isolated by etching deep grooves.[24][27]

In the view of the supporters of functional electronics, their approach made it possible to circumvent the fundamental problems of semiconductor technology.[24] The failures of Shockley, Ross and Wallmark proved the fallacy of this approach: the mass production of functional devices was hindered by technological barriers.[25]

Silicon technology

Early transistors were made of germanium. By the mid-1950s it was being replaced by silicon, which could operate at higher temperatures. In 1954, Gordon Kidd Teal of Texas Instruments produced the first silicon transistor, which became commercial in 1955.[28] Also in 1954, Fuller and Ditzenberger published a fundamental study of diffusion in silicon, and Shockley suggested using this technology to form p–n junctions with a given profile of the impurity concentration.[29]

In early 1955, Carl Frosch and Lincoln Derick of Bell Labs found that a silicon dioxide film grown on the silicon surface withstood diffusion processes and could be used for diffusion masking.[32][33] This accidental discovery revealed the second fundamental advantage of silicon over germanium: contrary to germanium oxides, "wet" silica is a physically strong and chemically inert electrical insulator.

Surface passivation