Milestone-Proposal:First Demonstration of the Fast Fourier Transform (FFT), 1964
Docket #:2024-21
This proposal has been submitted for review.
To the proposer’s knowledge, is this achievement subject to litigation? No
Is the achievement you are proposing more than 25 years old? Yes
Is the achievement you are proposing within IEEE’s designated fields as defined by IEEE Bylaw I-104.11, namely: Engineering, Computer Sciences and Information Technology, Physical Sciences, Biological and Medical Sciences, Mathematics, Technical Communications, Education, Management, and Law and Policy? Yes
Did the achievement provide a meaningful benefit for humanity? Yes
Was it of at least regional importance? Yes
Has an IEEE Organizational Unit agreed to pay for the milestone plaque(s)? Yes
Has the IEEE Section(s) in which the plaque(s) will be located agreed to arrange the dedication ceremony? Yes
Has the IEEE Section in which the milestone is located agreed to take responsibility for the plaque after it is dedicated? Yes
Has the owner of the site agreed to have it designated as an IEEE Milestone? Yes
Year or range of years in which the achievement occurred:
1963-1965
Title of the proposed milestone:
First Demonstration of the Fast Fourier Transform (FFT), 1964
Plaque citation summarizing the achievement and its significance; if personal name(s) are included, such name(s) must follow the achievement itself in the citation wording: Text absolutely limited by plaque dimensions to 70 words; 60 is preferable for aesthetic reasons.
In 1964, a computer program implementing a highly efficient Fourier analysis algorithm was demonstrated at IBM Research. Jointly developed by Princeton University and IBM collaborators, the Cooley-Tukey technique calculated discrete Fourier transforms orders of magnitude faster than had been previously demonstrated. Known as the Fast Fourier Transform (FFT), its speed impacted numerous applications including computerized tomography, audio and video compression, signal processing, scientific computing, and real-time data streaming.
200-250 word abstract describing the significance of the technical achievement being proposed, the person(s) involved, historical context, humanitarian and social impact, as well as any possible controversies the advocate might need to review.
The motivation for Prof. John Tukey of Princeton University that led to the FFT is attributed to national defense. At a 1963 meeting of President Kennedy’s Science Advisory Committee on nuclear test detection, Richard Garwin learned that committee member Tukey knew a method for vastly reducing the computation time of Fourier transforms. Garwin later enlisted IBM’s Dr. James Cooley to go to Princeton to initiate a collaboration with Tukey to develop a practical implementation. Working together at IBM, Cooley and Tukey led a team that demonstrated the power of the FFT.
Prior to that demonstration, Fourier transforms were widely believed to require a number of operations that grew quadratically with problem size. The Fast Fourier Transform requires a number of operations that grows only as N log N for problems of size N. For problems of that time, the FFT was 100 times faster than the existing Fourier transform computer codes. The FFT enables the computation of transforms that were previously considered too lengthy to be of practical use.
An important application of the FFT is image and video processing using the JPEG and MPEG standards for data compression. The JPEG and MPEG standards are based on fast algorithms in the FFT family. Billions of people worldwide use the FFT every day when they access the internet to share images and videos. Measurements of internet traffic show that video images have occupied 60% to 80% of internet traffic in the 2020s. All of this traffic depends on FFT-related compression and decompression.
IEEE technical societies and technical councils within whose fields of interest the Milestone proposal resides.
IEEE Computer Society, IEEE Signal Processing Society, IEEE Solid-State Circuits Society, IEEE Circuits and Systems Society, IEEE Information Theory Society, IEEE Engineering in Medicine and Biology Society, IEEE Consumer Technology Society, IEEE Microwave Theory and Technology Society
In what IEEE section(s) does it reside?
IEEE New York Section, IEEE Princeton Central Jersey Section
IEEE Organizational Unit(s) which have agreed to sponsor the Milestone:
IEEE Organizational Unit(s) paying for milestone plaque(s):
Unit: IEEE Princeton Central Jersey Section
Senior Officer Name: Shubha Bommalingaiahnapallya
Unit: IEEE New York Section
Senior Officer Name: Chamara Johnson
IEEE Organizational Unit(s) arranging the dedication ceremony:
Unit: IEEE Princeton Central Jersey Section
Senior Officer Name: Shubha Bommalingaiahnapallya
Unit: Princeton University, ECE Department
Senior Officer Name: James Sturm
Unit: IEEE New York Section
Senior Officer Name: Chamara Johnson
Unit: IBM Yorktown Heights Research Center
Senior Officer Name: Dario Gil
IEEE section(s) monitoring the plaque(s):
IEEE Section: IEEE Princeton Central Jersey Section
IEEE Section Chair name: Shubha Bommalingaiahnapallya
IEEE Section: IEEE New York Section
IEEE Section Chair name: Chamara Johnson
Milestone proposer(s):
Proposer name: Harold Stone
Proposer email: Proposer's email masked to public
Proposer name: James Wynne
Proposer email: Proposer's email masked to public
Proposer name: Ruby Lee
Proposer email: Proposer's email masked to public
Please note: your email address and contact information will be masked on the website for privacy reasons. Only IEEE History Center Staff will be able to view the email address.
Street address(es) and GPS coordinates in decimal form of the intended milestone plaque site(s):
Princeton site: Department of Electrical and Computer Engineering, Engineering Quadrangle, 41 Olden St., Princeton, NJ 08544, Main Hallway (GPS coordinates: 40.35088° N, 74.65120° W)
IBM site: IBM Thomas J. Watson Research Center, 1101 Kitchawan Road, Yorktown Heights, NY 10598, Main Hallway (GPS coordinates: 41.2098° N, 73.8026° W)
Describe briefly the intended site(s) of the milestone plaque(s). The intended site(s) must have a direct connection with the achievement (e.g. where developed, invented, tested, demonstrated, installed, or operated, etc.). A museum where a device or example of the technology is displayed, or the university where the inventor studied, are not, in themselves, sufficient connection for a milestone plaque.
Please give the address(es) of the plaque site(s) (GPS coordinates if you have them). Also please give the details of the mounting, i.e. on the outside of the building, in the ground floor entrance hall, on a plinth on the grounds, etc. If visitors to the plaque site will need to go through security, or make an appointment, please give the contact information visitors will need.
Princeton site: The plaque at Princeton will be on the 200 level (entrance level) of the B-wing (the main wing of Electrical and Computer Engineering) of the E-Quad building of the Princeton University School of Engineering and Applied Science. The location is across the hall from the main ECE department office, about 100 feet from the main entrance lobby.
IBM site: The IBM site main lobby currently houses two IEEE History Milestone Plaques. The FFT Milestone Plaque would be collocated with those plaques.
Are the original buildings extant?
Princeton U.: Yes. John Tukey was a Professor of Statistics at Princeton at the time of his research. That department no longer exists, and its site is now occupied by the Departments of East Asian and Near Eastern Studies. The original building is devoted to academic studies unrelated to engineering.
IBM site: Yes. The plaque will be located in the original building.
Details of the plaque mounting:
Princeton site: Main Hallway, on the wall via a secure wall mounting.
IBM site: Main entrance lobby area, which is open to the public: Plaque to be mounted on a structural column in the lobby just after one enters the lobby through the main entrance doors. 2009 and 2024 IEEE Milestone plaques are already mounted on structural columns in the lobby. The Milestone plaque of 2009 is for multiple achievements at IBM from 1960 to 1984. The Milestone plaque of 2024 is for the semiconductor laser, 1962. The new Milestone plaque will be mounted adjacent to these IEEE Milestone plaques.
How is the site protected/secured, and in what ways is it accessible to the public?
Princeton site: The location is open to the public (unlocked outside entrance, no security guard) during normal business hours Monday – Friday.
IBM site: Visitors must be registered and approved to access the IBM Watson Research Center. Visitors can send an email to IBM.YKT.IEEE.AWARD@ibm.com, requesting a specific date and time when they wish to view the IEEE Milestone plaques during the lab's standard operating hours: 8:00 am – 5:00 pm, non-holiday weekdays.
Who is the present owner of the site(s)?
Princeton University and the IBM Corporation
What is the historical significance of the work (its technological, scientific, or social importance)? If personal names are included in citation, include detailed support at the end of this section preceded by "Justification for Inclusion of Name(s)". (see section 6 of Milestone Guidelines)
How the FFT Development Began
The original motivation for Prof. John Tukey of Princeton University to initiate the study that led to the FFT is attributed to national defense. At a 1963 meeting of President Kennedy’s Science Advisory Committee discussing underground nuclear test detection, Tukey was working on Fourier transforms. Dr. Richard Garwin of IBM Research, who also attended that meeting, recognized that Tukey had a scheme to do a complete discrete Fourier transform with a vastly reduced number of mathematical calculations. Consequently, Garwin asked IBM Research’s Director of Mathematical Sciences to identify a mathematical analyst who could produce a generally useful algorithm, and IBM’s Dr. James Cooley was recruited to go to Princeton to initiate a collaboration with Tukey. Working together at IBM, Cooley and Tukey led a team that produced working computer code, thereby demonstrating the power of the FFT.
Prior to the publication of “An Algorithm for the Machine Calculation of Complex Fourier Series,” by James W. Cooley and John W. Tukey, Math. of Comp., vol. 19, pp. 297-301, April 1965, the computation of Fourier transforms was widely believed to require a number of operations that grew quadratically with problem size. The Fast Fourier Transform requires a number of operations that is proportional to N log N for problems of size N. For large problems of that time, the FFT was 100 times faster than the existing Fourier transform computer codes. The FFT enables the computation of transforms that were previously considered too lengthy to be of practical use.
IBM 7094 Used for Demonstration
Cooley and Tukey report that their algorithm was developed in 1964 on an IBM 7094 computer. [Cooley, 1965] Garwin [personal communication] recalls that the program was coded in FORTRAN. Their paper describes the classic Fourier Transform algorithm in a single equation that is the sum of N complex multiplications to obtain the Fourier coefficient for one frequency. This equation is evaluated for each of the N Fourier frequencies to get the full transform. They describe their FFT as an equation of exactly the same form, except it is the sum of only log N complex multiplications, not the sum of N of them. This equation has to be evaluated N times to obtain the Fourier coefficients for each of the N Fourier frequencies. This exceptionally clear explanation demonstrates the reduction in arithmetic terms, and shows immediately why the speedup is proportional to N / log N for problems of size N.
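The equivalence described above can be illustrated with a short sketch (NumPy is assumed; this is illustrative code, not the original FORTRAN demonstration): a direct evaluation of the classic DFT sum, with N complex multiplications per frequency, produces exactly the same coefficients as a fast transform.

```python
import numpy as np

def naive_dft(x):
    """Direct evaluation of the classic DFT sum: N complex
    multiply-accumulates per frequency, N^2 in total."""
    N = len(x)
    n = np.arange(N)
    # W[k, n] = exp(-2*pi*i*k*n/N), the DFT kernel matrix
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)
    return W @ x

rng = np.random.default_rng(0)
x = rng.standard_normal(64) + 1j * rng.standard_normal(64)

# The O(N log N) FFT yields the same coefficients as the O(N^2) sum.
assert np.allclose(naive_dft(x), np.fft.fft(x))
```

The speedup comes entirely from reorganizing this same computation, not from changing its result.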
The entire computation for both implementations of the Fourier transform is dominated by the arithmetic operations. Overhead operations to control two nested loops account for very little. For the IBM 7094 used for the demonstration, the number of machine cycles used just for the arithmetic computations is in the range of 28 to 68, depending on whether the implementation uses fixed-point or floating-point arithmetic. Indexing accounts for fewer than 5 cycles in each loop. Moreover, the overhead instructions for the classic algorithm are the same as the overhead instructions for the fast algorithm, except for a small 2N-cycle overhead incurred once per transform to reorder the Fourier coefficients as calculated by the FFT. Even this additional overhead is not incurred for some FFT applications. Because the execution time of the arithmetic operations strongly dominates the total execution time, and because the FFT reduces those operations by a factor of N / log N, speedups on the order of N / log N were assured for almost every computer that could execute FFTs once the FFT was disclosed.
Cooley and Tukey ran examples of size N = 2048, 4096 and 8192. The complex arithmetic operations for these examples were reduced by factors of about 186, 341, and 630, respectively. Strang [Strang, 1993, pp. 290-292] describes how the Cooley-Tukey algorithm actually reduces the number of real arithmetic operations by twice as much as the reduction of the complex arithmetic operations by taking advantage of the conjugate symmetry of the Fourier coefficients. Using 2 N / log N instead of N / log N as the coefficient for the reduction of the operation count produces speedups of 372, 682, and 1260, for problems of the sizes actually reported by Cooley and Tukey. Although speedups for actual implementations of the Cooley-Tukey algorithm may not be exactly the speedups derived from Strang's formula, they can only be marginally different because almost all of the execution time is spent on arithmetic operations whose count is reduced, and very little is expended on loop control instructions and other housekeeping. With the speedups noted above, it is clear that the FFT produced at least two orders of magnitude (100 times) faster code than was previously obtainable with the classic version of the algorithm.
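The reduction factors quoted above follow directly from the operation counts. A quick arithmetic check (the helper function is hypothetical, introduced only for illustration):

```python
import math

def complex_speedup(N):
    """Ratio of classic (N^2) to FFT (N log2 N) complex operations."""
    return (N * N) / (N * math.log2(N))

for N in (2048, 4096, 8192):
    print(N, round(complex_speedup(N)))   # 186, 341, and 630
```

Doubling these per Strang's 2N / log N coefficient reproduces the 372, 682, and 1260 figures for real operations.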
The FFT Role in Video and Photo Compression/Decompression Standards
Perhaps the most important application of the FFT is image and video processing using the JPEG and MPEG standards for data compression. The idea is that image pixels in typical real images are similar to each other over large regions. The individual pixels need to be stored in a way that represents image regions efficiently. Image compression relies on the transformation of pixels from the spatial domain into a representation by frequencies in the Fourier domain. The property of real images is that most of the perceptual information is in the low frequencies. This allows the high frequencies to be represented by fewer bits, or even discarded completely, to achieve data compression with little visible degradation of the image when it is decompressed. The low complexity of the FFT makes the transformations from and to the pixel domain practical. The JPEG and MPEG standards use Discrete Cosine Transforms (DCTs), which are in the family of Fourier transforms and enjoy the same speed enhancement from the work of Cooley and Tukey as Fourier transforms themselves. This process would not be practical without the FFT. Billions of people worldwide use the FFT every day when they access the internet to share images and videos. Measurements of internet traffic show that video images have occupied 60% to 80% of internet traffic in the 2020s. All of this traffic uses the DCT variation of the FFT for the compression and decompression operations.
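The energy-compaction idea behind these standards can be sketched in a few lines (NumPy is assumed; the actual standards use block DCTs, quantization, and entropy coding, so this is only an illustration of the principle):

```python
import numpy as np

# A smooth 1-D "scanline": neighboring samples are highly correlated,
# so most of its energy sits in the low Fourier frequencies.
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
x = np.cos(t) + 0.1 * np.cos(4 * t)

X = np.fft.rfft(x)
X_kept = X.copy()
X_kept[16:] = 0                    # keep only the 16 lowest frequencies
x_rec = np.fft.irfft(X_kept, n=len(x))

# Roughly 8x fewer coefficients, yet the reconstruction is essentially
# exact for this smooth signal.
rel_err = np.linalg.norm(x - x_rec) / np.linalg.norm(x)
assert rel_err < 0.01
```

For real images the discarded high frequencies carry little perceptual information, which is exactly what makes lossy compression effective.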
The FFT Makes Possible Fast Convolution and Correlation Analysis
The FFT was embraced almost immediately after its publication as a new mathematical tool. It enabled new analyses that were previously impractical to implement in computer code. As a typical example, signal processing analysis employed convolution techniques to calculate how a filter modifies time samples of a signal. However, convolution of the time samples of the signals requires a number of operations that grows quadratically in N for problems of size N. It was well known that convolution can be done in the frequency domain by a process whose operation count grows only linearly with problem size N. Prior to the publication of the FFT algorithm, however, the only known techniques for converting from time samples to frequency samples, and back again, grew quadratically with problem size. So there was no apparent advantage to converting to the frequency domain to take advantage of the efficient processing available in that domain, because the cost of converting back and forth was on the order of what otherwise might be saved.
The publication of the FFT algorithm changed that. Instead of expending a quadratically growing number of operations to convolve time samples of signals with filter coefficients, those time samples can be converted to and from frequency samples by means of an FFT using only on the order of N log N operations. Once transformed to frequency samples, the cost of convolution is only proportional to N. Convolutions of signals comprising 1024 data samples, which formerly required on the order of 1 million operations, can now be done with roughly 100 times fewer operations. The improvement factor is larger for longer signal streams.
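The frequency-domain shortcut described above can be sketched as follows (NumPy is assumed; zero-padding to the full output length avoids circular wrap-around):

```python
import numpy as np

def fft_convolve(x, h):
    """Linear convolution via the FFT: transform both sequences,
    multiply point-wise, transform back. O(N log N) vs. O(N^2)."""
    n = len(x) + len(h) - 1            # full linear-convolution length
    X = np.fft.rfft(x, n)
    H = np.fft.rfft(h, n)
    return np.fft.irfft(X * H, n)

rng = np.random.default_rng(1)
x = rng.standard_normal(1024)
h = rng.standard_normal(1024)

# Agrees with direct (quadratic-cost) convolution.
assert np.allclose(fft_convolve(x, h), np.convolve(x, h))
```

The three transforms cost O(N log N) each, and the point-wise product costs O(N), which is the source of the roughly 100x saving at N = 1024.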
It is also well known that signal correlation is directly related to convolution. With the FFT algorithm, the correlation of two signals of length 1024 can be done roughly 100 times faster than the former method of correlating time samples of the signals. The FFT made such correlations practical where they had formerly been prohibitively lengthy.
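Correlation follows the same pattern, with one extra conjugation in the frequency domain (a sketch assuming NumPy; the lag reordering matches NumPy's `correlate` convention):

```python
import numpy as np

def fft_correlate(x, y):
    """Cross-correlation via the FFT: conjugate one spectrum.
    Matches np.correlate(x, y, mode='full') after reordering lags."""
    n = len(x) + len(y) - 1
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    # correlation = convolution with the time-reversed signal
    r = np.fft.irfft(X * np.conj(Y), n)
    return np.roll(r, len(y) - 1)      # shift lags into np.correlate order

rng = np.random.default_rng(2)
x = rng.standard_normal(256)
y = rng.standard_normal(256)
assert np.allclose(fft_correlate(x, y), np.correlate(x, y, mode="full"))
```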
Awards and Recognition for Tukey and Cooley for the FFT
The importance of the FFT has been recognized through medals, awards, and other citations. John Tukey received the 1982 IEEE Medal of Honor for his work on random processes and the FFT. He also received the 1965 S. S. Wilks Award from the American Statistical Association, and the U.S. National Medal of Science in 1973. James Cooley received the IEEE Jack Kilby Signal Processing Medal in 2002 for his work on the FFT. He also received the IEEE Centennial Medal.
Importance of FFT Recognized in Journal Articles
The FFT algorithm is widely recognized as one of the most important algorithms ever proposed. Strang, 1993, wrote the following:
"A typical calculation with n = [1024] changes (1024)(1024) multiplications to (5)(1024). This saving by a factor greater than 200 is genuine. The result is that the FFT has revolutionized signal processing. Whole industries are changed from slow to fast by this one idea — which is pure mathematics." [Strang, 1993, p. 290]
"This is achieved by the Fast Fourier Transform — the most valuable numerical algorithm in our lifetime." [Strang, 1993, p. 290]
In a special issue of IEEE Computing in Science and Engineering devoted to the top 10 algorithms, editors Jack Dongarra and Francis Sullivan [Dongarra, 2000] described one of those 10, the FFT, in the excerpt below. [Dongarra and Sullivan, "Guest Editors’ Introduction: The Top 10 Algorithms," Computing in Science & Engineering 2.1 (2000): 22-23]
"The FFT is perhaps the most ubiquitous algorithm in use today to analyze and manipulate digital or discrete data." [Dongarra, 2000, p. 22]
The paper in that issue that describes the FFT is authored by Daniel Rockmore. [Rockmore, 2000] He comments on the very large number of applications of the FFT in the excerpt below.
"My own research experience with various flavors of the FFT is evidence of its wide range of applicability: electroacoustic music and audio signal processing, medical imaging, image processing, pattern recognition, computational chemistry, error-correcting codes, spectral methods for partial differential equations, and last but not least, mathematics. Of course, I could list many more applications, notably in radar and communications, but space and time restrict. E. Oran Brigham's book is an excellent place to start, especially pages two and three, which contain a (nonexhaustive) list of 77 applications." [Rockmore, 2000, p. 60]
The FFT in Medical Imaging
The FFT was recognized as a powerful tool and widely used within a few years of its publication. It is now used in medical imaging. For example, Stark et al. [Stark, 1981] report that the standard algorithm for reconstructing images in computerized tomography (CT) is filtered convolution back projection (FCBP), where back projection is based on Fourier techniques. [Stark, 1981, p. 496]
"The standard algorithm for reconstructing images in computerized tomography (CT) is filtered convolution backprojection (FCBP). This procedure which is widely discussed in the literature [1], [2] produces high-quality imagery by convolving the projection data with a "deconvolution" function before backprojecting." [Stark, 1981, p. 496]
They indicate that the FFT algorithm is a key step in the process of approximating a CT image.
"The problem then becomes one of interpolating from the known values at the polar points to the values required over a rectangular Cartesian grid which allows the approximate realization of (4) via an inverse two-dimensional FFT routine." ." [Stark, 1981, p. 497]
The FFT in Image Processing
Image processing also benefits from the FFT. Filtering of images can sharpen, blur, and apply sophisticated tone mapping to images. Since filtering of images in the spatial domain is similar to filtering of signals in the time domain, such image filtering can also be done in the frequency domain by transforming images from the spatial domain to the frequency domain for efficient filtering in that domain. The results can then be transformed back to the spatial domain for viewing. This directly corresponds to fast convolution and correlation of time signals. For example, Fialka et al. [Fialka, 2006] report a significant reduction in computation cost by using the FFT for image filtering.
"However, the fast Fourier transform can outperform convolution in several cases: 1. When applying multiple filters on the same image with FFT, we only need one forward transformation. 2. Unseparable filter kernels – i.e. the edge detection filters in chosen directions. Convolution of unseparable filters has quadratic complexity and therefore becomes very slow when increasing the kernel size. Furthermore, it is strictly limited by the number of instructions a GPU can execute. 3. Very large filter kernels – more than 33 pixels for one or two channel images, more than 51 pixels for three or four channel images. [Fialka, 2006, p. 5]
The FFT in Partial Differential Equations
The FFT has enabled the numerical solution of partial differential equations (PDEs), such as the Navier-Stokes equations for turbulent fluid flow with periodic boundary conditions, in what are termed "spectral methods" by C. Canuto, M. Y. Hussaini, A. Quarteroni, and T. A. Zang in Spectral Methods: Fundamentals in Single Domains [Canuto, 2007]. Chebyshev spectral methods also exploit the FFT to solve numerical problems on non-periodic intervals, and these are exemplified by well-used codes such as Chebfun and Dedalus. Further PDE applications are the numerical Ewald methods for solving the Poisson equation in a periodic box, the electrostatic interaction in molecular dynamics for protein modeling, and the Stokes fluid interactions in viscous hydrodynamics.
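A minimal instance of such a spectral method (NumPy is assumed; a 1-D Poisson equation u'' = f on a periodic interval, not taken from the cited text): differentiation becomes multiplication by (ik)^2 in Fourier space, so the PDE is solved by one FFT, a point-wise division, and one inverse FFT.

```python
import numpy as np

N = 128
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
f = -np.sin(x)                         # right-hand side of u'' = f

k = np.fft.fftfreq(N, d=1.0 / N)       # integer wavenumbers 0, 1, ..., -1
f_hat = np.fft.fft(f)
u_hat = np.zeros_like(f_hat)
nonzero = k != 0
u_hat[nonzero] = f_hat[nonzero] / -(k[nonzero] ** 2)  # divide by (ik)^2
u = np.fft.ifft(u_hat).real

# Exact zero-mean solution: u = sin(x), since (sin x)'' = -sin x = f.
assert np.allclose(u, np.sin(x), atol=1e-10)
```

The k = 0 mode is skipped because the periodic Poisson problem determines the solution only up to an additive constant.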
The FFT in Telecommunications
The FFT has played a central role in communication systems, including 4G, 5G, digital video broadcasting (DVB), digital audio broadcasting (DAB), and satellite communication systems. It will also be critical in future 6G technology, targeting worldwide commercialization in 2030 for complete 3D-coverage communications, regardless of geographical limitation. In addition, the core transmission technology of ubiquitous Wi-Fi systems also relies heavily on the FFT. All these systems adopt multicarrier modulation (MCM) techniques, which are realized by the FFT with high speed and low complexity. For the foreseeable future, there is no other technology that can compete with the FFT-based MCM transmission technology and its variations. Further, mainstream wireline communication systems such as ADSL, VDSL, and cable modems are also based on the same MCM scheme, which is called discrete multitone (DMT) modulation.
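The multicarrier idea can be sketched in a few lines (NumPy is assumed; an idealized, noiseless OFDM link with an identity channel, purely illustrative): the transmitter maps one data symbol onto each subcarrier with an inverse FFT, and the receiver recovers them with a forward FFT.

```python
import numpy as np

rng = np.random.default_rng(4)
n_subcarriers = 64

# QPSK symbols, one per subcarrier.
bits = rng.integers(0, 2, size=(n_subcarriers, 2))
symbols = (2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)

# Transmitter: the inverse FFT turns frequency-domain symbols into a
# time-domain waveform; a cyclic prefix guards against multipath.
time_signal = np.fft.ifft(symbols)
cyclic_prefix = time_signal[-16:]
tx = np.concatenate([cyclic_prefix, time_signal])

# Receiver (ideal channel): drop the prefix, take the forward FFT.
rx = np.fft.fft(tx[16:])

assert np.allclose(rx, symbols)
```

Real systems add channel equalization, pilots, and coding on top of this FFT core, but the per-symbol cost of modulation and demodulation remains O(N log N).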
The Extensive Use of FFT in Other Applications
The examples above were enabled by the FFT algorithm because they were impractical with the classic Fourier transform algorithms. They are merely a few of the dozens of new applications made possible by the FFT. Rockmore, in the quote above, mentions 77 applications.
Historical Development Prior to Cooley and Tukey
The FFT was preceded in the mathematical literature by Gauss and others. Gauss was interested in plotting the orbits of heavenly bodies. For data, he used observation points, which today we recognize as time samples. Filling the gaps between time samples is a form of filtering. Gauss, in theory, could apply a filter to his observations and obtain an estimate of the continuous orbit. Gauss did not have the luxury of high-speed digital computers, so his published work did not reflect on the complexity of proposed algorithms. The algorithm he proposed bears similarities to the FFT in its use in fast filtering. But the Gauss algorithm was published before the world was ready for an FFT, and it did not have the impact of the publication of the FFT. [See Heideman, 1984, for additional details]
Other work that followed disclosed FFT-like algorithms for Fourier applications. In the original FFT paper, the authors credited I. J. Good with prior related work that contained some of their observations. After publication of the FFT paper in 1965, Cooley et al. [Cooley, 1967] reported prior efforts related to the FFT algorithm that contained some of its ideas. This work includes G. C. Danielson and C. Lanczos, 1942, on Fourier techniques for X-ray scattering, and C. Runge, 1903, on the basic teaching of mathematical sciences. [See Cooley, et al., 1967, for more details] A survey of the remarkable number of variations of the original FFT that have been proposed appears in [Kumar, 2019].
What distinguishes the FFT from the related prior work is that the FFT publication incorporated a complexity analysis to prove that the FFT complexity is N log N, as well as a comparison of various implementations to demonstrate actual reduction to practice.
What obstacles (technical, political, geographic) needed to be overcome?
Because basic properties of the Fourier transform were well known, the main obstacle to its use was the high cost of computation of the Fourier transform codes in use before the FFT. The culture at the time perceived the Fourier transform algorithm as one whose operation count grew quadratically with problem size. Formulating the FFT required a new approach to the computation. Cooley and Tukey discovered this approach independently, and together with their colleagues they were able to implement and evaluate the new algorithm to prove its practicality. Once an efficient tool for computing the Fourier transform was in place, its use for the known applications of Fourier analysis followed almost immediately. The presence of a powerful tool encouraged researchers to seek other ways to apply it, and dozens of successful new applications resulted.
What features set this work apart from similar achievements?
The single most important feature of the FFT is that it enabled new applications that have proven to be important contributions to science and to everyday life. Computerized tomography has saved lives; the FFT made it practical. The FFT is central to image and video stream compression/decompression through the JPEG and MPEG standards. Without it, internet traffic of photos and videos would not be practical at today's volumes.
Why was the achievement successful and impactful?
Supporting texts and citations to establish the dates, location, and importance of the achievement: Minimum of five (5), but as many as needed to support the milestone, such as patents, contemporary newspaper articles, journal articles, or chapters in scholarly books. 'Scholarly' is defined as peer-reviewed, with references, and published. You must supply the texts or excerpts themselves, not just the references. At least one of the references must be from a scholarly book or journal article. All supporting materials must be in English, or accompanied by an English translation.
The eight excerpts below are intended to illustrate the diversity of FFT applications and the importance of the FFT in each. They represent only a small sample of the huge volume of papers citing instances of the use of the FFT; Google Scholar indicates that over 1 million articles mention the FFT. The excerpts are the following:
Garwin
From [Garwin, 1969]. This excerpt, from Richard Garwin’s section of the paper, describes stages of improvement possible in solving mathematical problems.
"One is to improve the speed of the operation currently in use to do the job. Thus, one can use a faster computer which does the arithmetic in one-tenth the previous time. The same number of multiplications and additions are done as before. The next step is to use a new algorithm. The fast Fourier transform typifies this type of improvement. A different algorithm for Fourier transform can thus make a factor 100, 100, or 1 million improvement in speed; the larger the job the larger the improvement." [Garwin, p. 71]
Stark
From [Stark, 1981]. The excerpts below establish that the standard algorithm for computerized tomography is Fourier based, and that the FFT is an essential step in that analysis.
"The standard algorithm for reconstructing images in computerized tomography (CT) is filtered convolution backprojection (FCBP). This procedure which is widely discussed in the literature [1], [2] produces high-quality imagery by convolving the projection data with a "deconvolution" function before backprojecting. A virtue of the technique is that it avoids the errors associated with the interpolation of Fourier projection data from polar to Cartesian coordinates." [Stark, p. 496]
"The problem then becomes one of interpolating from the known values at the polar points to the values required over a rectangular Cartesian grid which allows the approximate realization of (4) via an inverse two-dimensional FFT routine." [Stark, p. 497]
Fialka
From [Fialka, 2006.]. The excerpts below confirm that the filtering of images in the spatial domain is done by transforming to the frequency domain, filtering in that domain by point-by-point multiplication and then transforming back to the spatial domain. It also confirms that the FFT is used for the conversion to and from the frequency domain to filter images. The second excerpt indicates that the FFT is faster than spatial filtering under a variety of circumstances.
"While in the spatial domain we apply filter by simple convolution of the image and the kernel of the filter function, in the frequency domain we must convert the image into the frequency domain using Fourier transform, then we must apply the filter by multiplication and finally we have to transform the result back to spatial domain using inverse transform. " [Fialka, 2006, at p. 1]
"However, the fast Fourier transform can outperform convolution in several cases: 1. When applying multiple filters on the same image –with FFT, we only need one forward transformation. 2. Unseparable filter kernels – i.e. the edge detection filters in chosen directions. Convolution of unseparable filters has quadratic complexity and therefore becomes very slow when increasing the kernel size. Furthermore, it is strictly limited by the number of instructions a GPU can execute. 3. Very large filter kernels – more than 33 pixels for one or two channel images, more than 51 pixels for three or four channel images. " [Fialka, 2006, at p. 5]
Buijs
From [Buijs, 1974]. The excerpts below describe the use of frequency-domain processing of images and confirm that the FFT algorithm is the source of a "tremendous saving" in the calculating effort for transforming images from the spatial domain to the frequency domain.
"In order to treat a sampled two-dimensional function or image-in the Fourier transform domain, it is necessary to have a means to transform into and out of this domain. ,,, If one-dimensional transforms can be carried out in a very short time, then it may be possible to treat relatively rapidly varying images on a continuous basis. The introduction of the fast Fourier transform (FFT) algorithm by Cooley and Tukey [l] in 1965 allowed a tremendous saving in calculating effort in carrying out a discrete complex Fourier transformation,: [Buijs, p. 420]
Uma
From [Uma, 2011]. The excerpts below confirm that the JPEG standard for image compression is based on the discrete cosine transform (DCT). This transform is related to the Fourier transform and can be computed by making simple modifications to Fourier transform algorithms. Accordingly, the FFT has been converted into fast cosine transform algorithms, which are cited in this paper; the conversion is straightforward given the way the FFT achieves its speedup.
"[In the JPEG Standard, t]he discrete transform (DCT) is then applied to each data unit to create a 8x8 maps of frequency components. They represent the average pixel value and successive higher-frequency changes within the group." [Uma, p. 2]
"The development of efficient algorithms for the computation of DCT (more specifically DCT –II) began soon after Ahmed, et al., (1974) [14] reported their work on DCT. It was natural for initial attempts to focus on the computation of DCT by using Fast Fourier Transform (FFT) algorithms. Many algorithms for fast computation of DCT are reported in the literature [10-13]." [Uma, p.1]
Linzer
From [Linzer, 1996]. The excerpts below confirm that the discrete cosine transform (DCT), a transform related to the Fourier transform, is incorporated in the MPEG standard for video compression, and that the authors propose using FFT-based frequency-domain cross-correlation to discover the motion vectors essential to MPEG video compression.
"The MPEG (Moving Picture Expert Group) is a standard for both transmitting and recording compressed video [2]. The MPEG algorithm achieves compression by exploiting spatial and temporal redundancy in video. The DCT (Discrete Cosine Transform) based compression is used to exploit spatial redundancy." [Linzer, p. 1934]
"Motion estimation requires computation of one or more motion vectors for a 16 × 16 block of the picture. The motion vectors are typically computed by minimizing a cost function representing the mismatch between a block in the current picture and the set of potential predictor blocks in a reference picture. Full search covers all potential predictor blocks in the picture." [Linzer, p. 1934]
[Details of a fast search algorithm for motion estimation] "A circular convolution of size Lx × Ly can be computed by computing the two-dimensional discrete Fourier transform (DFT) of the input sequences, forming the pointwise product of the transform domain sequence and taking the inverse DFT (IDFT) of the result. By using FFTs and inverse FFTs (IFFTs) to compute the DFTs and IDFTs, we get an FFT based cross-correlation algorithm." [Linzer, p. 1936]
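The FFT-based cross-correlation the excerpt describes can be sketched in NumPy. Here a hypothetical "current" block is a circular shift of the reference block, so the correlation peak recovers the motion vector exactly; this illustrates the principle only, not the authors' high-performance implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
reference = rng.random((32, 32))

# Hypothetical "current" block: the reference shifted by a known motion vector.
true_motion = (5, 9)
current = np.roll(reference, true_motion, axis=(0, 1))

# Circular cross-correlation via the FFT: transform both blocks, form the
# pointwise product with one factor conjugated, inverse-transform the result.
corr = np.fft.ifft2(np.fft.fft2(current) * np.conj(np.fft.fft2(reference))).real

# The peak of the correlation surface estimates the displacement.
estimated = np.unravel_index(np.argmax(corr), corr.shape)
assert estimated == true_motion
```

Because one FFT per block replaces an exhaustive search over every candidate displacement, this is the source of the speedup over full search in the excerpt above.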
Madanayake
From [Madanayake, 2020]. The excerpts below confirm that an approximate form of the discrete Fourier transform (ADFT) has enabled advances in multi-input, multi-output (MIMO) wireless communication systems.
"Joint spatial division and multiplexing (JSDM) is an approach to multi-user MIMO downlinks that exploits the structure of channel correlations in order to allow a large number of antennas at the base station while requiring reduced-dimensional channel state information at the transmitter [1],[2]. This uses a multi-user MIMO downlink precoder obtained from an array pre-beamforming matrix, and incurs no loss of optimality for a large number of array elements. A DFT-based pre-beamforming matrix is near-optimal for uniform linear arrays (ULAs) of antennas, and requires only coarse information about the users’ angles of arrival and angular spread [3]." [Madanayake, pp. 96613-99614]
Lin
Machine learning algorithms have incorporated FFT techniques in various ways to improve the results of training. FFTs can be incorporated into machine learning feature-extraction code to reduce the size of the calculations that follow without compromising learning. The excerpt from [Lin, 2018] below describes the problems of deploying large deep neural network (DNN) models on embedded systems and how FFT techniques can address them to yield significant improvements.
"(i) Confined by the communication bandwidth of embedded systems, which are usually mobile terminals, it is still challenging to download large-size DNN models, even which can be offline-trained in data centers. (ii) The large model size of deep learning also imposes stringent requirements on the computing resources and memory size of embedded systems. ... In this work, we propose a Fast Fourier Transform (FFT)-based DNN training and inference model suitable for embedded systems due to reduced asymptotic complexity of both computation and storage. ... We develop the training and inference algorithms based on FFT as the computing kernel and deploy the FFT-based inference model on embedded platforms. Experimental test results demonstrate that our model provides the optimization in different languages and achieve[s] a significant improvement." [Lin, p. 1045]
References
[Buijs, 1974] Buijs, H., et al. "Implementation of a fast Fourier transform (FFT) for image processing applications." IEEE Transactions on Acoustics, Speech, and Signal Processing 22.6 (1974): 420-424.
[Canuto, 2007] Canuto, Claudio, M. Yousuff Hussaini, Alfio Quarteroni, and Thomas A. Zang. Spectral methods: evolution to complex geometries and applications to fluid dynamics. Springer Science & Business Media, 2007.
[Cisco web page] https://www.cisco.com/c/dam/m/en_us/solutions/service-provider/vni-forecast-highlights/pdf/Global_Device_Growth_Traffic_Profiles.pdf
[Cooley, 1965] Cooley, James W., and John W. Tukey. "An algorithm for the machine calculation of complex Fourier series." Mathematics of computation 19.90 (1965): 297-301.
[Cooley, 1967] Cooley, James W., Peter A. W. Lewis, and Peter D. Welch. "Historical notes on the fast Fourier transform." Proceedings of the IEEE 55.10 (1967): 1675-1677.
[Dongarra, 2000] Dongarra, Jack, and Francis Sullivan. "Guest Editors' Introduction: The Top 10 Algorithms." Computing in Science & Engineering 2.1 (2000): 22-23.
[Fialka, 2006] Fialka, Ondřej, and Martin Čadík. "FFT and convolution performance in image filtering on GPU." Tenth International Conference on Information Visualisation (IV'06). IEEE, 2006.
[Garwin, 1969] Cooley, Garwin, Rader, Bogert, and Stockham. "The 1968 Arden House Workshop on fast Fourier transform processing." IEEE Transactions on Audio and Electroacoustics 17.2 (June 1969): 66-76.
[Heideman, 1984] Heideman, Michael, Don Johnson, and Charles Burrus. "Gauss and the history of the fast Fourier transform." IEEE ASSP Magazine 1.4 (1984): 14-21.
[Kumar, 2019] Kumar, G. Ganesh, Subhendu K. Sahoo, and Pramod Kumar Meher. "50 years of FFT algorithms and applications." Circuits, Systems, and Signal Processing 38 (2019): 5665-5698.
[Lin, 2018] Lin, Sheng, Ning Liu, Mahdi Nazemi, Hongjia Li, Caiwen Ding, Yanzhi Wang, and Massoud Pedram. "FFT-based deep learning deployment in embedded systems." In 2018 Design, Automation & Test in Europe Conference & Exhibition, pp. 1045-1050. IEEE, 2018.
[Linzer, 1996] Linzer, Elliot, Prasoon Tiwari, and Mohammad Zubair. "High performance algorithms for MPEG motion estimation." 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings. Vol. 4. IEEE, 1996, pp. 1934- 1937
[Madanayake, 2020] Madanayake, Arjuna, Renato J. Cintra, Najath Akram, Viduneth Ariyarathna, Soumyajit Mandal, Vitor A. Coutinho, Fabio M. Bayer, Diego Coelho, and Theodore S. Rappaport. "Fast radix-32 approximate DFTs for 1024-beam digital RF beamforming." IEEE Access 8 (2020): 96613-96627.
[Rockmore, 2000] Rockmore, Daniel N. "The FFT: an algorithm the whole family can use." Computing in Science & Engineering 2.1 (2000): 60-64.
[Stark, 1981] Stark, Henry, et al. "An investigation of computerized tomography by direct Fourier inversion and optimum interpolation." IEEE Transactions on Biomedical Engineering 7 (1981): 496-505.
[Strang, 1993] Strang, Gilbert. "Wavelet transforms versus Fourier transforms." Bulletin of the American Mathematical Society 28.2 (1993): 288-305.
[Uma, 2011] Uma, R. "FPGA implementation of 2-D DCT for JPEG image compression." International Journal of Advanced Engineering Sciences and Technologies (IJAEST) 7.1 (2011): 001-009.
Supporting materials (supported formats: GIF, JPEG, PNG, PDF, DOC): All supporting materials must be in English, or if not in English, accompanied by an English translation. You must supply the texts or excerpts themselves, not just the references. For documents that are copyright-encumbered, or which you do not have rights to post, email the documents themselves to ieee-history@ieee.org. Please see the Milestone Program Guidelines for more information.
Media:Cisco Web Page Global_Device_Growth_Traffic_Profiles.pdf
Please email a jpeg or PDF a letter in English, or with English translation, from the site owner(s) giving permission to place IEEE milestone plaque on the property, and a letter (or forwarded email) from the appropriate Section Chair supporting the Milestone application to ieee-history@ieee.org with the subject line "Attention: Milestone Administrator." Note that there are multiple texts of the letter depending on whether an IEEE organizational unit other than the section will be paying for the plaque(s).
Please recommend reviewers by emailing their names and email addresses to ieee-history@ieee.org. Please include the docket number and brief title of your proposal in the subject line of all emails.