FPGAs dominate reconfigurable high compute applications

Underscoring how far semiconductor makers have come during the pandemic, Xilinx, Intel and Lattice have all announced releases of their largest field-programmable gate array (FPGA) platforms to date, and newer platforms with higher I/O counts, memory bandwidth and transceiver bandwidth are sure to follow. Terabit bandwidths used to be unheard of, but today’s data-center-grade FPGAs comfortably reach them to support extremely high compute applications.

The releases weren’t major newsmakers, especially with the industry in the midst of an ongoing chip shortage and widespread talk of reshoring. Still, these developments illustrate the limits of the traditional embedded computing architecture, and where we can expect to see greater FPGA usage in the future. When we look at advanced technologies requiring high compute or a highly parallelizable embedded architecture, FPGAs show that their worth extends far beyond prototyping.

What’s driving FPGA usage?

The typical embedded architecture is based around a microcontroller (MCU) with application-specific integrated circuit (ASIC) peripherals laid out on a PCB. This architecture rose to prominence alongside the ASIC revolution of the 1970s and 1980s, and it is now the default mode of constructing embedded systems that are not running a full-scale embedded OS. It should be no surprise that the most popular embedded development platform is Arduino rather than an FPGA platform.

Despite the popularity of the MCU+ASIC architecture, FPGA usage has been increasing and is projected to grow at a 14.1% CAGR through 2027. Compare this to MCU usage, which is expected to grow at only a 10.1% CAGR through 2028. While the MCU market is currently approximately three times the size of the FPGA market, the growth trend predicts the gap will narrow as more FPGA-based devices are brought to market. FPGAs also tend to cost more per unit, so the number of MCUs projected to enter the field still dwarfs the number of FPGAs.

There are several factors driving FPGA usage, all of them relating to the need for greater compute and acceleration at the hardware level. FPGAs are among the fastest reconfigurable devices on the market. It is not just the speed of individual logic elements that makes these platforms fast; it is the ability to configure the switching fabric to execute logic operations in a way that is tailored to a specific computing application.
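To make the "tailored logic" point concrete, here is a minimal Python sketch of the lookup table (LUT) that underlies an FPGA's logic elements. This is an illustrative model only, not any vendor's toolchain or API: the same physical resource computes whatever truth table its configuration memory holds, which is what makes the fabric both tailored and reconfigurable.

```python
# Behavioral model of a 4-input FPGA lookup table (LUT).
# A real LUT stores a 16-bit truth table in configuration SRAM;
# reprogramming the device simply rewrites that table.

def make_lut(truth_table):
    """Return a 4-input boolean function defined by a 16-entry truth table."""
    assert len(truth_table) == 16
    def lut(a, b, c, d):
        # Pack the four input bits into a table index, as the hardware does.
        index = (a << 3) | (b << 2) | (c << 1) | d
        return truth_table[index]
    return lut

# "Configure" the LUT as a 4-input XOR (parity of the inputs).
xor4 = make_lut([bin(i).count("1") & 1 for i in range(16)])

# "Reconfigure" the same resource as a 4-input AND.
and4 = make_lut([1 if i == 0b1111 else 0 for i in range(16)])
```

The point of the sketch: nothing about the logic element itself changes between the two configurations; only the stored truth table does, and thousands of such elements evaluate in parallel every clock cycle.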

Application areas for FPGAs

In general, FPGAs excel in any application where high compute is required, but where it can’t be achieved using the typical combinational/sequential logic architecture used in today’s MCUs and ASICs. Some of these areas include:

  • Embedded AI: Where inference models are created and updated in logic blocks rather than being stored and implemented in memory.
  • High compute edge systems.
  • Systems that implement sensor fusion for applications like ADAS, industrial control and robotics.
  • Interoperable systems, such as some new daughtercard and sensor products for embedded military computing platforms.
  • Specialty processing in any of the above areas in data centers.
  • Systems where peripherals are consolidated into a single device, rather than spread among multiple chips.

Designers who are not familiar with FPGA development, but who know the basic concepts that make FPGAs unique, should not find the above list of applications surprising. Beginning around 2019, the industry started trying to support additional compute in these application areas with a new wave of ASICs. Google’s Coral is a perfect example: it is effectively a TensorFlow accelerator that acts as an add-on for PCIe-enabled microprocessor chips (MPUs).

Customizable high compute is the major advantage of FPGAs in upcoming technology areas, but these devices provide another major advantage not found in the typical MCU+ASIC architecture: reconfigurability.

Hardware virtualization and reconfiguration

When you develop an application for an MCU, you’re telling the MCU how to interact with its I/Os. In an FPGA, you are instantiating logic directly on the device, logic that could just as well live in an external ASIC. Whenever suitable ASIC options do not exist, the solution is to implement the needed functions in the FPGA instead. This also reflects the last point in the list above: consolidating peripherals helps future-proof a product and makes its supply chain more resilient. Rather than relying on a slew of ASICs, some of which may have no replacements, the application could be implemented on any of several FPGAs.
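As a loose illustration of the kind of peripheral logic that can live either in an external ASIC or in FPGA fabric, here is a behavioral Python model of a serial-in, parallel-out shift register, the core of an SPI or UART receiver. The class and method names are hypothetical, chosen for this sketch only; in fabric this would be a handful of flip-flops.

```python
# Toy model of a peripheral you might otherwise buy as an ASIC:
# an 8-bit serial-in, parallel-out shift register.

class ShiftRegisterRx:
    def __init__(self, width=8):
        self.width = width
        self.bits = []

    def clock_in(self, bit):
        """Shift one sampled bit in per clock edge; return a byte once full."""
        self.bits.append(bit & 1)
        if len(self.bits) == self.width:
            value = 0
            for b in self.bits:  # first bit received is the MSB
                value = (value << 1) | b
            self.bits = []
            return value
        return None

# Clock in 0b01010101, one bit per edge.
rx = ShiftRegisterRx()
result = None
for bit in [0, 1, 0, 1, 0, 1, 0, 1]:
    out = rx.clock_in(bit)
    if out is not None:
        result = out
```

Whether this logic sits in a dedicated chip or in reconfigurable fabric, its behavior is identical; the FPGA version can simply be revised or replaced in the field.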

The reconfigurability of FPGAs is what makes them so useful in areas like AI, where new models and application updates need to be implemented periodically. New capabilities can be added to the device over time simply by updating the device’s application. You could never do that with an MCU+ASIC system architecture, where you are bound by the limits of the on-board hardware. I see this as another recent driver of FPGA usage: the products where these devices are implemented demand long lifecycles, and using an FPGA as the host controller ensures a product’s lifecycle can be extended with a reconfigured application as needed.

What’s next for FPGAs?

FPGAs operate extremely well at the high compute end of the embedded spectrum. I would expect a host of lower-compute reconfigurable devices to begin dominating the mid-range of the market, bringing the advanced capabilities listed above into smaller devices. This design approach is preferable to adding accelerator ASICs, which still don’t provide the same advantages as a standalone FPGA. Designers will still be able to pair ASICs with FPGA host controllers as they see fit, so they get the best of both worlds.

