# Milestones:IEEE Standard 754 for Floating Point Arithmetic

## Title

IEEE Standard 754 for Binary Floating-Point Arithmetic, 1985

## Citation

In 1978, faculty and students at U.C. Berkeley drafted what became IEEE Standard 754 for Binary Floating-Point Arithmetic. Inspired by ongoing collaboration with Intel, the proposal revolutionized numerical computing. Its carefully crafted arithmetic and standard data types promoted unprecedented software reliability and portability. By 1980, microprocessor companies were already implementing the proposal. Once approved in 1985, IEEE 754 was widely adopted as the standard for robust numerical computing.

## Street address(es) and GPS coordinates of the Milestone Plaque Sites

Soda Hall, Le Roy Ave, Berkeley, CA 94709; 37.875624, -122.258882

## Details of the physical location of the plaque

The plaque will be mounted on the wall in the 3rd floor corridor, adjacent to the existing IEEE Milestone plaque for First RISC Microprocessor, and with comparable mounting.

## How the intended plaque site is protected/secured

Soda Hall is a campus building of offices, lecture halls, and classrooms, with all the attendant public access and off-hours security.

## Historical significance of the work

IEEE 754 arrived at a unique moment in history, when technology could take a huge leap forward. With 8-bit microprocessors well established, semiconductor technology was advancing rapidly, and widespread scientific, engineering, and financial computation lay just ahead. The mainframe and minicomputer industries had long splintered into factions of binary, octal, and hexadecimal arithmetic on data elements of varying sizes. Portability of numerical codes was a nightmare. A new standard for floating point arithmetic offered the prospect of high quality, dependable arithmetic across a wide range of computers and programming languages.

Virtually every implementation of floating point arithmetic since 1980 has followed the IEEE 754 standard. As of 2022, it inspires designs in specialized processors for graphics, machine learning, and signal processing, far beyond the scope set out in 1978. IEEE 754 remains an active standard, updated regularly according to the requirements of the IEEE Standards Committee. The most recent revision was approved in 2019.

## Features that set this work apart from similar achievements

What sets IEEE 754 apart is its longevity and the breadth of its impact. The floating point standard has influenced every processor design since 1980, five years before its adoption. Although Cray supercomputers maintained their legacy floating point arithmetic for years, newer massively-parallel designs incorporated IEEE 754 by virtue of the microprocessors from which they were built. Now a new generation of machine learning processors adapts features of 754 to its specialized, narrow numeric types.

The standard also has significant implications for programming languages. For example, the standard has been well supported in C and C++ beginning with C99, and has been supported in Fortran since the 2003 revision. The references show widespread adoption of the IEEE 754 formats and operations in new programming languages of the past twenty years.

IEEE 754 also enjoys a timelessness not shared by standards rooted in a specific technology. Floating point arithmetic arose with early electro-mechanical designs and it is likely to survive for many years to come.

IEEE 754 is a one-of-a-kind standard whose like may never be seen again. As the microprocessor industry converged on data sizes of 8, 16, 32, and 64 bits, IEEE 754 defined 32-bit and 64-bit formats. The standard cleaned up a host of problems on minicomputers and mainframes. For example, results would be rounded predictably, multiplication would be commutative, "if x == 0.0" could be trusted, and every operation would deliver a logical result, without stopping execution.
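
These guarantees are easy to observe today precisely because the standard won. A brief sketch in Python, whose `float` is an IEEE 754 double on all mainstream platforms (the values chosen here are ours, for illustration):

```python
import math

# Every operation delivers a logical result without halting execution:
x = 1e308 * 10            # overflow quietly rounds to +infinity
assert x == math.inf

y = math.inf - math.inf   # no sensible value exists, so the result is NaN
assert math.isnan(y)
assert y != y             # NaN compares unequal even to itself

# Multiplication is commutative, and rounding is predictable
# (round to nearest, ties to even, by default):
assert 0.1 * 3.0 == 3.0 * 0.1

# "if x == 0.0" can be trusted: with gradual underflow, a - b is zero
# only when a actually equals b, even for very small values.
a, b = 2e-320, 1e-320     # subnormal doubles
assert a - b != 0.0
```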

Other forms of arithmetic continue to be proposed for engineering and scientific computation, but none has come close to displacing floating point arithmetic. There have been no commercially significant proposals for floating point designs different from IEEE 754. Given the pervasiveness of floating point arithmetic in hardware designs and programming languages, and the vast stores of accumulated floating point data, any change from current practice will require significant time and effort.

## Significant references

### Annotated Citation

A line-by-line elaboration of the citation tells a bit more of what can't be said in 70 words.

- In 1978, faculty and students at U.C. Berkeley drafted what became IEEE Standard 754 for Binary Floating-Point Arithmetic.
- Prof. William Kahan, who had attended the second IEEE 754 meeting in September 1977, recruited visiting Prof. Harold Stone and graduate student Jerome Coonen (co-proposer of this Milestone) to help write an initial proposal for the next subcommittee meeting, in April 1978.
- Inspired by ongoing collaboration with Intel,
- In 1976, John Palmer at Intel approached Kahan to help develop a corporate floating point standard, whose first targets would be the 432 processor and the 8087 floating-point coprocessor for the 8086 and 8088 microprocessors. When the 754 subcommittee appeared, Kahan appealed to Intel management to permit him to develop the arithmetic toward an international standard, while leaving many proprietary features of the Intel standard undisclosed.
- the proposal revolutionized numerical computing.
- IEEE 754 changed everything. Programmers could depend on the floating point data types presented in programming languages. They could depend on the behavior of the arithmetic.
- Its carefully crafted arithmetic and standard data types
- IEEE standard arithmetic is designed to deliver *the most sensible* result for every operation and any combination of operands. Its 32-bit and 64-bit data types facilitate data interchange in compact binary form.
- promoted unprecedented software reliability and portability.
- The keyword here is *unprecedented*. Until IEEE 754, the best approach to portability was first to quantify machine characteristics such as radix (binary, octal, decimal, hexadecimal), the number of significant digits carried, and the maximum and minimum values *for which the computer behaved reliably*. Then, based on those characteristics, one could enumerate axioms of arithmetic that every relevant computer would adhere to. By embracing such a broad range of behaviors, such axioms necessarily represented an abstract machine worse than any ever actually built. This was not a feasible path to portability. The emergence of microprocessors offered a one-time opportunity to start with a clean slate.
- By 1980, microprocessor companies were already implementing the proposal.
- Intel delivered the 8087 floating-point coprocessor in 1980. Motorola was working on the 68881 floating-point coprocessor for its 68000 family. Zilog was actively developing the Z8070 floating-point coprocessor for the Z8000 family, though it never shipped. National Semiconductor developed its 16081 floating-point coprocessor. Apple, years away from being a chip company, fully endorsed the proposed standard with software implementations on the Apple ][, Apple ///, Lisa, and Macintosh. Apple's ambitious Pascal programming system supported all the features of the 754 proposal, including all the recommended support functions.
- Once approved in 1985,
- The IEEE process is necessarily deliberate. The subcommittee replied to many alternative proposals, and to many questions. IEEE 754 was approved as an IEEE standard in 1985 and was subsequently adopted as international standard ISO/IEC 60559 in 1989.
- IEEE 754 was widely adopted as the standard for robust numerical computing.
- As of 2022, all general-purpose processors with floating point arithmetic adhere substantially to IEEE 754. Special-purpose chips such as digital signal processors (DSPs) use IEEE 754 formats and arithmetic for the operations they support. Also as of 2022, the newest Machine Learning and GPU chips take inspiration from IEEE 754, as evidenced by the 16-bit types half-float and bfloat16, and the attention to unbiased rounding to nearest in supported operations.
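
The 16-bit types mentioned above trade precision and range differently, and both borrow their layout from IEEE 754. A sketch in Python (the helper names are ours; real bfloat16 hardware usually rounds rather than truncates):

```python
import struct

# IEEE 754 half precision (binary16): 1 sign, 5 exponent, 10 fraction bits.
# Python's struct module supports it directly via the 'e' format code.
def to_half_bits(x: float) -> int:
    return int.from_bytes(struct.pack('>e', x), 'big')

# bfloat16: 1 sign, 8 exponent, 7 fraction bits -- effectively the top 16
# bits of an IEEE single, keeping float32's range at reduced precision.
def to_bfloat16_bits(x: float) -> int:
    single = int.from_bytes(struct.pack('>f', x), 'big')
    return single >> 16   # truncation, for illustration only

print(hex(to_half_bits(1.0)))      # 0x3c00
print(hex(to_bfloat16_bits(1.0)))  # 0x3f80

# 65504 is the largest finite half-precision value; it round-trips exactly:
print(struct.unpack('>e', struct.pack('>e', 65504.0))[0])  # 65504.0
```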

### James Demmel – Expert's Report

I am delighted to provide an “Expert” review of this plaque proposal. I was a PhD student at UC Berkeley from 1978-1983, advised by Prof. William Kahan, who spearheaded the proposal ultimately adopted as IEEE Standard 754. Along with my numerical analysis pursuits, I tracked the progress of the proposed standard and wrote the first of several papers (see the References) on how to use the novel features of the proposed standard to improve numerical computation. I later served on subsequent IEEE 754 standard committees that are periodically convened to review and update the standard, most recently for the version approved in 2019.

1. Is the suggested wording of the Plaque Citation accurate?

- Yes, I agree with the wording and elaboration.

2. Is the evidence presented in the proposal of sufficient substance and accuracy to support the Citation?

- The evidence presents a compelling historical overview, describing both the historical impact, on hardware (virtually all computers support the standard) and software (including numerical software, programming languages and compilers), and the technical and political challenges that needed to be overcome (convincing many different computer companies to agree on something that differed from what they did before).

3. Does the proposed milestone represent a significant technical achievement?

- There are three overwhelming metrics to justify recognition of the IEEE 754 standard with an IEEE Plaque. First, virtually all computers manufactured after the standard’s approval by the IEEE in 1985 adopted the standard, and have continued to do so until the present day. Second, like other IEEE standards, it is reviewed every 10 years by a new standards committee to make sure it is still relevant; there have been some additions and corrections over the years but the basic design remains unchanged, with the most recent reapproval in 2019. Finally, in 1989, a few years after the standard’s approval, William Kahan was granted the Turing Award, the highest recognition in Computer Science, for his leadership in creating IEEE Standard 754.

### David Goldberg – Expert's Report

It is an honor to be selected as an expert reviewer for the IEEE Floating-Point Standard plaque, given the enormous impact of this standard. I considered floating-point to be a rather dull subject until I attended a course on the topic given by Professor Kahan in 1988. I was so taken with his lectures that I expanded them into a survey article published in *ACM Computing Surveys* in March 1991. It's one of my most cited papers, with over 2700 citations in Google Scholar.

As I understand it, my job is to comment on three things:

1) Is the suggested wording of the Plaque Citation accurate? I believe it is accurate.

2) Is the evidence presented in the proposal of sufficient substance and accuracy to support the Citation? The proposal includes an impressive list of references, ranging from before the standard was accepted to the present time, and includes items of sufficient depth and authority to support the citation.

3) Does the proposed milestone represent a significant technical achievement? A significant technical achievement should be non-obvious, accepted by diverse experts and have wide impact. The proposal does a good job of explaining why the IEEE floating-point standard meets each of these.

### Harold Stone – Report from the Field

[Harold Stone worked on the very earliest drafts of the proposal for a standard, then continued to promote the standard over the years.]

The IEEE 754 floating-point standard was proposed at a time when the microprocessor industry was poised for a technological breakout. That growth was driven by advances in VLSI that sustained exponential increases in circuit density for the next several decades. By the late 1970s, the technology was barely adequate to handle floating point operations on 64-bit operands. In 1978, the year of the proposal submission, several manufacturers were planning math coprocessor chips to complement their integer processors. Fully integrated integer and floating-point chips were not yet on the horizon.

Mainframes and minicomputers used formats and functional implementations specific to each vendor. Arithmetic software codes produced different results, depending on the machine on which they were executed. Although the results across machines were generally close numerically, the detailed differences required arithmetic library codes to be tweaked for each different machine on which they were run. And in some rare cases, the differences produced were substantial. Several different microprocessors were already being manufactured in 1978, with more variations on the drawing boards to be put in production in the next few years. If floating-point arithmetic implementations were to follow the then current practice, the hodgepodge of machine-dependent code would have continued.

Since no microprocessor architecture included floating-point instructions at that time, it was conceivable that all microprocessors could evolve to a single standard floating-point arithmetic. This would simplify the implementation of floating-point libraries, and assure interoperability of numeric methods across microcomputers. It was too late to assume that mainframes and minicomputers would change their existing instructions to conform. They would probably continue in the future as they had in the past, offering a variety of formats and arithmetic systems with libraries especially designed for each different architecture. But microprocessors could enjoy the benefits of a standard for floating-point arithmetic.

**Within a decade, the success of the IEEE 754 standard was well beyond what was foreseen in 1978. Not only did virtually all microprocessors across many different vendors conform to the standard, but, surprisingly, mainframe and minicomputer manufacturers adopted it as well.**

By the new millennium, billions of processors ran the standard. With the advent of smartphones, the world’s population as a whole uses IEEE 754 floating point on a daily basis, as a core support for JPEG decompression to display images on the phones.

The success of the standard goes beyond the timing of the proposal. It is technologically advanced. Computer arithmetic is inherently imperfect because it uses finite precision to represent results that may require more precision than is available. An arithmetic standard must assure that the results are as accurate as the technology allows. The IEEE 754 standard introduced the novel concept of gradual underflow. Numbers too small to represent in normalized format are retained in a special format, denormalized, to permit them to be used in subsequent computations after they appear as intermediate results. In prior arithmetic implementations, such results were treated as arithmetic underflow, and replaced by zero. Because the IEEE 754 standard allows computation with denormalized numbers, some iterative arithmetic codes can produce more accurate results with the standard than was possible previously by treating underflows as zeros.

Other key aspects of IEEE 754 include (1) correct rounding, (2) completeness (NaNs, infinities, as well as gradual underflow described above), and (3) exception handling (most useful result and non-stop default). All of these features improved upon arithmetic implementations available previously.
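
Gradual underflow is directly observable on any IEEE 754 machine. A short illustration in Python (whose `float` is an IEEE double), showing the subnormal range between zero and the smallest normalized number:

```python
import sys

# The smallest positive *normalized* double:
tiny = sys.float_info.min          # 2**-1022, about 2.2e-308
assert tiny == 2.0 ** -1022

# Gradual underflow: values below `tiny` are held as denormalized
# (subnormal) numbers rather than flushed to zero.
sub = tiny / 4
assert 0.0 < sub < tiny            # nonzero and representable

# The smallest positive subnormal double is 2**-1074; halving it
# finally underflows to zero.
assert 2.0 ** -1074 > 0.0
assert 2.0 ** -1074 / 2 == 0.0
```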

Computer arithmetic experts have carefully analyzed functional behavior of IEEE 754 Floating Point codes as compared to codes based on prior implementations. A notable example is a study authored by Dr. William J. Cody of Argonne National Laboratory, which examined in detail the controversy that had built up around gradual underflow. He concluded, "I personally support gradual underflow, because I believe it enlarges the set of problems that can be safely solved in a natural way without penalizing previously successful methods."

In one example, Cody says, "consider the simple computation

- (Y – X) + X

where Y – X underflows. Then gradual underflow always returns Y exactly [while] flush-to-zero returns X." He concludes with the statement, "I prefer to look at [this example] as the preservation of the associative law of addition to within rounding error."
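
Cody's example can be reproduced directly in Python, since Python floats are IEEE doubles with gradual underflow; the `flush` helper below is our own device to simulate a flush-to-zero machine, not part of Cody's analysis:

```python
import sys

def flush(z: float) -> float:
    """Simulate a flush-to-zero machine: replace any subnormal result by 0."""
    return 0.0 if 0.0 < abs(z) < sys.float_info.min else z

# Choose Y and X so close together that Y - X underflows:
tiny = sys.float_info.min     # smallest positive normalized double
X = tiny
Y = tiny * 1.5                # Y - X = tiny/2, a subnormal number

# Gradual underflow (the IEEE 754 default): (Y - X) + X returns Y exactly.
assert (Y - X) + X == Y

# Flush-to-zero: the subnormal difference is lost, and we get X back.
assert flush(Y - X) + X == X
```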

Prof. Donald Knuth of Stanford University also offered his analysis and support of the standard. Prof. Knuth is a world renowned computer scientist, a Turing Award winner, and an expert in computer algorithms. His letter of support confesses to initial skepticism, “because it appears to be needlessly complicated to gain a few bits at the end of the range.” But further analysis on his part convinced Prof. Knuth that the alleged complication was worth the gain. “The thing that I missed was that gradual underflow adds an element of *completeness* to the system that seems impossible to achieve in any other way.” [Emphasis in the original]

Links to the Cody paper and the Knuth letter appear in the References below.

In summary, the IEEE 754 Standard for Floating-Point Arithmetic advanced the technology that came before it by providing more reliable results, adherence to arithmetic properties, and computational completeness than what was previously available. Its nearly universal adoption confirms that the standard lends itself to practical implementation for the benefit of all of its users.

## Supporting materials

### Primary References

*IEEE Standard for Floating-Point Arithmetic, ANSI/IEEE Standard 754-2019*, Institute of Electrical and Electronics Engineers, New York, USA, 2019.
The current revision of IEEE 754 is available for a fee at the link above. The shorter original, superseded version is listed below.

Jean-Michel Muller, Nicolas Brunie, Florent de Dinechin, Claude-Pierre Jeannerod, Mioara Joldes, Vincent Lefèvre, Guillaume Melquiond, Nathalie Revol, and Serge Torres. *Handbook of Floating-Point Arithmetic*, 2nd edition, Birkhäuser, Cham, Switzerland, 2018, chapter 3, appendix B.
Muller and his colleagues present IEEE standard arithmetic and its utility in the larger context of computer arithmetic. Chapter 3 is provided here.

Nicholas J. Higham. *Accuracy and Stability of Numerical Algorithms*, 2nd edition, SIAM, Philadelphia, PA, USA, 2002, chapters 2, 27.
Higham presents IEEE standard arithmetic in the context of scientific and engineering numerical computation. Chapter 2 is provided here.

William Kahan. Why do we need a floating-point arithmetic standard? Technical Report, University of California, Berkeley, CA, USA, February 1981. Kahan, the principal designer of IEEE 754, provides a detailed survey of the numerical challenges inspiring many features of the standard.

Donald MacKenzie. Negotiating Arithmetic, Constructing Proof: The Sociology of Mathematics and Information Technology, *Social Studies of Science*, SAGE, London, Newbury Park, New Delhi, 1993, 37-65.
MacKenzie, who studies the sociology of mathematics, computing, and finance, summarizes the process of IEEE 754, several years after its initial adoption.

W. J. Cody. Analysis of Proposals for the Floating-Point Standard, *Computer*, 14(3):63-68, 1981.
Cody presents the three leading proposals to become IEEE 754. Cody went on to chair the committee developing IEEE 854, a variant of IEEE 754 supporting decimal as well as binary floating point arithmetic and specifying a range of possible data formats. Standard 854 was subsumed into a later revision of 754.

David Goldberg. What every computer scientist should know about floating-point arithmetic, *ACM Computing Surveys*, 23(1):5-48, 1991.
Goldberg's widely-read paper surveys the state of floating point arithmetic following wide adoption of IEEE 754.

### Historical References

*IEEE Standard for Binary Floating-Point Arithmetic, ANSI/IEEE Standard 754-1985*, Institute of Electrical and Electronics Engineers, New York, USA, 1985. Reprinted in SIGPLAN Notices 22(2):9-25, 1987.
The original binary standard is superseded by the 2019 revision cited above, but it offers a simpler first read of the essentials of standard binary floating point.

Harold Stone, William Kahan, Jerome Coonen. Draft of material on a Floating-Point Standard, from discussions with H. Stone, W. Kahan, J. Coonen, 1978.
First draft of what came to be known as the *KCS* proposal to the IEEE 754 subcommittee.

Thomas Haigh and Paul E. Ceruzzi. *A New History of Modern Computing*, The MIT Press, Cambridge, MA, USA, 2021, pages 220-221.
Haigh and Ceruzzi recount the arrival of the Intel 8087 in 1980, implementing the proposal on its way to becoming IEEE Standard 754. The personal computer passed from a plaything to a tool with the computational power of a minicomputer.

Donald E. Knuth. Open letter to Richard Delp, chair of the IEEE 754 subcommittee, 1980.
Knuth, author of *The Art of Computer Programming* and creator of TeX, urges adoption of the proposal that became IEEE 754.

Donald E. Knuth. *The Art of Computer Programming, Volume 2, Seminumerical Algorithms*, Third edition, Addison-Wesley, Reading, MA, USA, 1998, pages 222, 226.
Knuth's famous line introduces his many references to the standard: *A revolutionary change in floating point hardware took place when most manufacturers started to adopt ANSI/IEEE Standard 754 during the late 1980s.*

Jerome Coonen. Contributions to a Proposed Standard for Binary Floating-Point Arithmetic, PhD diss., University of California, 1983. Coonen's thesis contains some of the earliest papers on the IEEE 754 proposal, gradual underflow, binary-decimal conversion, and conformance testing.

William Kahan and John Palmer. On a Proposed Floating-Point Standard, *ACM Signum Newsletter*, 14:13-21, 1979.
Kahan and Palmer present features of the Intel 8087, announced but not yet released, with mathematical and programming examples.

David G. Hough. The IEEE Standard 754: One for the History Books, *Computer*, 52(12), 2019.
Hough, who chaired the committee developing the 2019 revision of IEEE 754, summarizes the recent updates, following an enlightening tour of the motivations and issues surrounding the previous versions.

### Computing with IEEE 754

Doug Priest. Differences Among IEEE 754 Implementations, *Numerical Computation Guide*, Sun Microsystems, Appendix D, 1997.
Priest's analysis is an expansion on the earlier reference from Goldberg, explaining features and pitfalls of extended precision as recommended by IEEE 754, 1985.

Michael L. Overton. *Numerical Computing with IEEE Floating Point Arithmetic: Including One Theorem, One Rule of Thumb, and One Hundred and One Exercises*, SIAM, Philadelphia, PA, USA, 2001.
Overton's friendly book takes the reader from first steps through some of the subtle points (and misconceptions) about numerical programming.

W. Kahan and Jerome T. Coonen. The Near Orthogonality of Syntax, Semantics, and Diagnostics in Numerical Programming Environments, *The Relationship between Numerical Computation and Programming Languages*, J. K. Reid, Ed. North Holland, Amsterdam, 1982, 103-115.
Kahan and Coonen discuss programming language issues underlying the proposed floating point standard.

James Demmel. Underflow and the Reliability of Numerical Software, *SIAM Journal on Scientific & Statistical Computing*, 5(4):887-919, 1984.
Demmel explores the benefits of gradual underflow, compared to flushing underflows to zero, over a variety of common codes.

James W. Demmel and Xiaoye Li. Faster Numerical Algorithms via Exception Handling, *IEEE Transactions on Computers*, 43(8):983-992, 1994.
Demmel and Li exploit IEEE 754 exception handling to create multi-layer codes able to handle "typical" cases quickly, while falling back gracefully to more elaborate code to handle tricky cases reliably.

Peter Ahrens, James Demmel, and Hong Diep Nguyen. Algorithms for Efficient Reproducible Floating Point Summation, *ACM Transactions on Mathematical Software*, 46(3):22:1-22:49, 2020.
Ahrens et al. use IEEE standard arithmetic to compute matching sums across vastly different computer systems. The fact that (a + b) + c does not generally match a + (b + c) for floating point values a, b, and c has implications for computations large and small.
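
The non-associativity noted above is easy to exhibit with any three doubles of very different magnitudes; for example, in Python (Python floats being IEEE doubles):

```python
a, b, c = 1e16, -1e16, 1.0

# Left-to-right: a + b cancels exactly to 0.0, so the 1.0 survives.
left = (a + b) + c

# Right-to-left: 1.0 is smaller than half an ulp of -1e16, so b + c
# rounds back to -1e16, which then cancels a completely.
right = a + (b + c)

assert left == 1.0
assert right == 0.0
```

This is why reproducible summation across machines, the subject of the Ahrens et al. paper, requires care about evaluation order.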

John Hauser. Handling floating-point exceptions in numeric programs, *ACM Transactions on Programming Languages and Systems*, 18(2):139-174, 1996.
Hauser's accessible tutorial explains programming in the face of exceptions like overflow, underflow, and division by zero. He compares IEEE standard arithmetic with older, less robust systems.

### IEEE 754 in Programming Languages

*ISO/IEC 9899: 2018 Information technology – Programming languages – C*, ISO/IEC JTC1 / SC22 / WG14, 2018.
The C standard, available for purchase here, offers bindings to many features of IEEE 754 arithmetic.

*ISO/IEC 14882:2020(E) – Programming Language C++*, ISO/IEC JTC1 / SC22 / WG21, 2020.
The C++ standard, available for purchase here, offers bindings to many features of IEEE 754 arithmetic, consistent with the C standard above.

*C# Language Specification*, Microsoft, 2022.
C# cites the international IEEE 754 standard when it specifies that its float and double types correspond to IEEE 754 single and double, respectively.

*Fortran 2018 Working Draft*, Technical Committee JTC 1/ SC 22/WG 5, 2018.
The Fortran 2018 draft standard, available free here, supports many features of IEEE 754 arithmetic. In particular, see Clause 17, *Exceptions and IEEE Arithmetic*.

*The Java Language Specification: Java SE 18 Edition*, James Gosling, Bill Joy, Guy Steele, Gilad Bracha, Alex Buckley, Daniel Smith, and Gavin Bierman.
Java specifies its primitive float and double types to be the corresponding IEEE 754 types. The language supports infinite and NaN values, and other features of IEEE 754 arithmetic.

*ECMAScript 2022 Language Specification*, Ecma International.
ECMAScript, colloquially JavaScript, defines its Number type to be IEEE 754 double. The language supports infinite and NaN values, with rounding to nearest, ties to even. There is just one Number type, used for both floating point and integer values.

*The Python Language Reference*, Python Software Foundation.
Python supports one Real type, namely, "machine-level double precision." The specification leaves the user, "at the mercy of the underlying machine architecture (and C or Java implementation)," meaning most implementations will support IEEE 754 double.

*R Language Definition*, R Core Team.
R specifies vectors of type double, a floating point type inherited from the underlying C language implementation. Most implementations will support IEEE 754 double.

*Julia 1.8 Documentation*, Julia Community.
Julia's types Float32 and Float64 are defined to be IEEE 754 single and double, respectively. Julia supports infinite and NaN values and other features of IEEE 754.

*PHP 8.1*, The PHP Group.
The scripting language PHP, "typically uses the IEEE 754 double precision format," for its sole floating point type.

*The Go Programming Language Specification*, Google Go Team.
The Go types float32 and float64 are defined to be IEEE 754 single and double, respectively.

*Swift*, Swift.org.
Swift defines its Float and Double types to be IEEE 754 single and double, respectively. It supports infinite and NaN values. The FloatingPoint protocol of Swift permits other types, but, "enforces the basic requirements of any IEEE 754 floating-point type."

*The Kotlin Language*, JetBrains.
Kotlin is a functional programming language inspired by Java, JavaScript, and other languages used for mobile app development. It can be compiled to run on the Java Virtual Machine or compiled to JavaScript, so it inherits Java's use of IEEE single and double as its floating point types.

Guy L. Steele. *Common Lisp the Language*, 2nd Edition, Digital Press, Massachusetts, 1990.
Lisp recommends that single and double float values "approximate" IEEE 754 single and double values.

*Forth 2012 Section 12: The optional Floating-Point word set*, Forth200x committee.
Forth 2012 specifies the use of IEEE 754 arithmetic on single and double data types.

### IEEE 754 in Programming Environments

*MATLAB Numeric Types*, The MathWorks.
MATLAB's single and double floating point types are defined to be IEEE 754 single and double.

*Maple Numerics*, Maplesoft.
Maple defines its "hardware floating-point type" as IEEE 754 double. The Maple specification gives, "consistency with IEEE standards," as one goal of its Numeric Computation Environment.

*Maxima 5.46.0 Manual*, The Maxima Project.
The Maxima Computer Algebra System is based on DOE (Department of Energy) Macsyma. It is implemented in Lisp and distributed under the GNU Public License. Maxima supports a system floating point type inherited from the underlying Lisp system, whose types "approximate" the IEEE 754 single and double types. See *Common Lisp the Language* above.

*LAPACK Users' Guide 3rd Edition – Further Details: Floating Point Arithmetic*, E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, D. Sorensen.
The cited section of the LAPACK guide notes, "Actually, most machines, but not yet all, do have the same machine parameters because they implement IEEE Standard Floating Point Arithmetic." LAPACK depends on the presence of infinite and NaN values in specified routines and does rely on accurate addition and subtraction in others, all features guaranteed by IEEE 754.

*ScaLAPACK Users' Guide – Sources of Error in Numerical Calculations*, L. S. Blackford, J. Choi, A. Cleary, E. D'Azevedo, J. Demmel, I. Dhillon, J. Dongarra, S. Hammarling, G. Henry, A. Petitet, K. Stanley, D. Walker, R. C. Whaley.
Like its predecessor, ScaLAPACK does not require IEEE 754 arithmetic. Unlike LAPACK, it does, "take advantage of arithmetic with [signed infinities]," to accelerate some routines.

### IEEE 754 Verification and Libraries

*Berkeley SoftFloat*, John Hauser, 2017.
Hauser's widely-used reference implementation conforms scrupulously to the 1985 standard, with some updates for the 2008 revision. It is available free under license from the Regents of the University of California.

*Berkeley TestFloat*, John Hauser, 2018.
Hauser's TestFloat generates test cases to compare a floating point implementation with TestFloat's own embedded software implementation. It is available free under license from the Regents of the University of California.

*C17/C11/C99 FPCE Test Suite*, Fred J. Tydeman, 2022.
Tydeman offers extensive tests of a C/C++ compiler's support of robust numerical computing, including many IEEE 754 features. Some tests are free, including file *readme.1st* with its pocket history of IEEE 754 support in the C/C++ standards over time.

*fdlibm*, Sun Microsystems, 1993.
Sun, now a part of Oracle, offered a *freely distributable* collection of functions normally found in the Unix *libm* mathematical function library. The functions are designed for IEEE standard arithmetic. The collection is available free at the NetLib.

*UCBTest*, Zhishun Alex Liu et al., 1995.
Under the direction of Prof. William Kahan, Liu and other students developed a suite of programs to test elementary functions. It is available free at the NetLib.