A Brief History of Finite Element Analysis – Part II

As we mentioned in Part I, the history of Finite Element Analysis is deeply intertwined with the evolution of computing. It seems only fitting that the FEA software used to design the world's most cutting-edge products should have the most cutting-edge computational techniques at its disposal. From the early punch-card days of the 1960s through the 2000s, FEA companies have found unique ways to take advantage of the ever-changing computer landscape.

GUIs – “1984 won’t be like 1984” 

1983 – The Apple "Lisa" was released. Named after Steve Jobs's daughter, the computer was a commercial flop, but it would pave the way for the graphical user interface and the industry-changing Macintosh.

1985 – The same year that Microsoft unveiled the Windows OS, AutoCAD 2 was released. It was designed to run on “microcomputers,” including two of the new 16-bit systems, the Victor 9000 and the IBM Personal Computer (PC). This version consisted of over 100,000 lines of C code and had a list price of $2,000. 

1985 – Altair Engineering was founded in a garage in Detroit, MI. Their first product was HyperMesh, followed by the award-winning FE-based topology optimization tool, OptiStruct. A product they would later acquire, the RADIOSS finite element solver, required 20 hours to solve a 20 K-element crash simulation in 1987. Fast forward to 2018, and RADIOSS can parallelize a 15-million-element crash simulation across 128 cores and return results in 5 hours. That represents roughly a 3,000-fold increase in element throughput. Most of this gain, however, can be attributed to the doubling of computational speed every 18 months.
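The size of that gain can be checked with a few lines of arithmetic, using only the figures quoted above (elements solved per hour is a crude proxy, since crash-simulation cost does not scale exactly linearly with element count):

```python
# Element throughput, 1987 vs. 2018, using the figures from the text above.
elements_1987, hours_1987 = 20_000, 20
elements_2018, hours_2018 = 15_000_000, 5

throughput_1987 = elements_1987 / hours_1987   # 1,000 elements/hour
throughput_2018 = elements_2018 / hours_2018   # 3,000,000 elements/hour

speedup = throughput_2018 / throughput_1987
print(speedup)  # 3000.0
```

For comparison, Moore's-law doubling every 18 months over the 31 years from 1987 to 2018 amounts to roughly 2^20, or about a million-fold, which is why most of the observed gain can be credited to hardware rather than algorithms.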

1991 – NEi Software was founded as Noran Engineering, Inc. Their product, NEi Nastran, was a spinoff of the original MSC/NASA codebase, but with a GUI and improved performance.

64 bits and Parallelization

1999 – To improve simulation accuracy, direct numerical simulation (DNS) was developed, in which every relevant detail of a part is modeled with its own finite elements. The method was introduced as a solution for modeling the heterogeneity of composite structures.

2004 – ANSYS was the first simulation software company to solve a structural analysis model with more than 100 million degrees of freedom. The company did so on an SGI Altix server with six 64-bit Intel Itanium 2 processors, solving a structural analysis problem with 111 million degrees of freedom in just 8.6 hours of solver time.


ANSYS achieved this performance by running its code on a 64-bit high-performance cluster, an approach that is now standard for production FEA implementations. The result set a new performance benchmark that other software vendors would soon follow.

What’s Next? 

From a hardware standpoint, several vendors and even governments continue to push the limits of high-performance computing. SGI, the company that supplied ANSYS with the machine for its record-breaking simulation, plans a 500-fold increase in performance by 2018, reaching one exaflop.

But the speed and availability of processors do not necessarily improve the accuracy of an FEA solver; they only help you get inaccurate results faster. For any practical problem involving heterogeneous materials, the mesh sizes and computational resources required for an accurate direct numerical simulation would exceed the capacity of the most powerful computers available.

In our next post, we will discuss multiscale simulation, a best-of-both-worlds alternative to brute-force DNS.