Milestone-Proposal talk:Moore's Law - Predicts Integrated Circuit Complexity Growth, 1965

From IEEE Milestones Wiki
Revision as of 21:40, 7 February 2016 by Bberg (talk | contribs) (→‎Ted Hoff Comments in Support of the Moore's Law Milestone -- ~~~~: new section)

Advocates and reviewers will post their comments below. In addition, any IEEE member can sign in with their ETHW login (different from IEEE Single Sign On) and comment on the milestone proposal's accuracy or completeness as a form of public review.

Ted Hoff Comments in Support of the Moore's Law Milestone -- Bberg (talk) 15:40, 7 February 2016 (CST)

I am Brian Berg, and I am the Region 6 IEEE Milestone Coordinator.

I am providing comments I received from two important friends of mine: Dr. Marcian Edward "Ted" Hoff, Jr. and Dr. Eli Harari. Each of them has important first-hand experience with the dramatic impact of Moore's Law. Their experiences have truly changed the world.

I feel that it is important to appreciate how many technology advances took place based on the impact of Moore's Law on both technology advances as well as cost reductions. The result of this is that projects that were not technologically possible and/or cost-effective at their outset were funded and staffed, and then achieved success in the marketplace. Huge risks were tempered by Moore's Law. It is no exaggeration to say that the success of Silicon Valley and beyond would not have been possible without the assurance that Moore's Law provided re: these risks.

My first set of comments come from Ted Hoff. Ted's greatest achievement was the conceptual role he played in the invention of the 4004 microprocessor at Intel in 1971. Ted has received much recognition for the work he has done, including (1) the IEEE Computer Society's 1988 Computer Pioneer Award, (2) IEEE Fellow, (3) IEEE Cledo Brunetti Award in 1980, (4) induction into the National Inventors Hall of Fame in 1996, (5) receipt of the National Medal of Technology and Innovation in 2009 from President Obama, (6) 2011 IEEE/RSE Wolfson James Clerk Maxwell Award, (7) Fellow of the Computer History Museum, and (8) Intel Fellow.

I know Ted personally through many avenues, including his membership in the Silicon Valley Technology History Committee's event-organizing team within the Santa Clara Valley Section; I am Chair of that Committee. Our website is www.SiliconValleyHistory.com.

Here are Ted's comments:

I first learned of what we now call Moore's Law shortly after joining Intel Corporation in 1968. Gordon Moore invited me to his office to show me a chart on which he had plotted the progress made in integrated circuit (IC) fabrication since the IC's invention in 1959. The chart showed that the number of components was doubling almost every year.

Gordon noted that Intel's proposed products would advance that progress, possibly even increasing the growth rate. Gordon had published his observations a few years before while still at Fairchild Semiconductor in an effort to predict where the technology would be a decade later. These observations over time came to be known as "Moore's Law."
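Gordon's doubling observation can be restated as a simple growth formula; as a sketch, the starting count and exact doubling period below are illustrative assumptions, not figures from Gordon's actual chart:

```python
# Illustration of the doubling observation: N(t) = N0 * 2**(t / doubling_period).
# The concrete numbers used here are assumptions for illustration only.
def projected_components(n0, years, doubling_period=1.0):
    """Project the component count after `years`, doubling every
    `doubling_period` years (both inputs are illustrative)."""
    return n0 * 2 ** (years / doubling_period)

# A decade of annual doubling from an assumed 64-component start:
print(projected_components(64, 10))   # 65536.0
```

Ten annual doublings multiply the count by about a thousand, which is why a decade-ahead prediction from such a curve was so consequential.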

Gordon had also prepared yield curves showing the number of working chips one could expect as a function of chip size (i.e., area) for different levels of manufacturing process performance (e.g., defect density). It was very important to choose an optimum chip size, and Gordon's yield charts showed that doubling the size of a chip would result in a yield that was almost (but not quite) the square of the smaller chip's yield. Consider a wafer with 200 possible chips, yielding ten percent: one would expect to get twenty functional chips per wafer. Double the area of the chip, and there would be fewer than 100 possible chips (the wafer was round and the chips rectangular, so part of the periphery of the wafer is unusable, and that part grows with chip size).

With the same process conditions, the double-area chip would yield closer to one percent (ten percent squared), and so would produce about one functional chip per wafer. Thus, doubling the functionality of a chip might increase the cost more than twenty times over.
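This wafer arithmetic can be checked with a short calculation; the numbers follow Ted's example, except that the 96 usable double-area sites is an assumed value consistent with his "fewer than 100" figure:

```python
# Sketch of the yield argument above. All figures follow Ted's example;
# the 96 usable double-area sites is an assumption (fewer than 100 fit,
# because edge loss on a round wafer grows with chip size).
small_yield = 0.10                     # 10% yield for the small chip
small_sites = 200                      # possible small chips per wafer
good_small = small_sites * small_yield # ~20 functional chips per wafer

large_sites = 96                       # assumed: fewer than 100 double-area sites
large_yield = small_yield ** 2         # doubling area ~squares the yield (~1%)
good_large = large_sites * large_yield # ~1 functional chip per wafer

cost_ratio = good_small / good_large   # cost per good chip rises more than 20x
print(round(good_small), round(good_large, 2), round(cost_ratio, 1))
```

Under this model, one wafer of small chips yields about twenty good parts while the same wafer of double-area chips yields about one, which is the source of the "more than twenty times" cost figure.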

The use of Moore's Law to predict the ultimate manufacturability of a chip was very important, and companies that could make accurate predictions had a considerable market advantage. The design of a complex chip might take a year, so predicting when it would become manufacturable dictated the acceptable complexity of its design.

In my own case, it was understanding and applying Moore's Law that guided how complex the first microprocessor could be while still performing the functions for the Busicom calculator (the first use of a microprocessor). If the processor design had been too aggressive, it might not have been manufacturable at an acceptable price. The 4004 processor was conceived using a guideline target on the order of 2,000 transistors.

Moore's Law played a role in subsequent microprocessors as well. The target specifications for the 8008 were written about 6 months after those for the 4004, and the use of Moore's Law allowed for the assumption that the design of the 8008 microprocessor could include 50% more transistors than the 4004.

In 1974, a group from IBM that included Bob Dennard (recognized by the National Inventors Hall of Fame for the first single-transistor DRAM cell) published a paper in the IEEE Journal of Solid-State Circuits noting the advantages of making circuitry smaller. The paper observed that the minimum feature size of the day was about 5 microns (i.e., 5,000 nanometers) and proposed that it be reduced to 1 micron. Twenty-five times as many circuits could then be made from the same amount of silicon, each circuit would consume 1/25 of the power of the original, and it would operate five times faster. Thus a five-times reduction in linear dimensions allowed for about 125 times the performance at, hopefully, about the same cost.
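The scaling arithmetic quoted above is easy to restate; this is a sketch of the numbers as summarized here, not of the paper's full derivation:

```python
# Dennard-style scaling arithmetic as summarized above:
# shrink linear dimensions by a factor of k = 5 (5 microns down to 1 micron).
k = 5
density_gain = k ** 2            # 25x as many circuits in the same silicon area
power_per_circuit = 1 / k ** 2   # each circuit draws 1/25 the power
speed_gain = k                   # each circuit operates ~5x faster

performance_gain = density_gain * speed_gain   # ~125x overall
print(density_gain, power_per_circuit, speed_gain, performance_gain)
```

Because per-circuit power falls as fast as density rises, power per unit area stays roughly constant while throughput climbs, which is what made the shrink so attractive.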

This reduction in feature size has helped keep Moore's Law relevant to this day. Minimum feature size is now on the order of ten nanometers, and efforts are underway to see if it can be reduced further.

Another effort to keep Moore's Law alive is in the area of three-dimensional ICs. Packaging remains part of the cost of the final circuit, and the ability to stack layers of functionality is one way to add more function to each final package.

I believe Gordon Moore viewed his work as an observation, but the industry's acceptance of the concept has helped set targets for technologists for over 50 years. The progress made in meeting those targets has ensured that Gordon's observation deserves to be considered a "law."