Diagrammatic representation of the components inside an FPGA chip

An FPGA, or Field-Programmable Gate Array, is a programmable semiconductor chip that enables circuit designers to implement a wide variety of electronic circuits. Unlike the fixed physical circuits in traditional processors, an FPGA's digital circuits can be configured and reconfigured for diverse tasks. Importantly, no physical changes occur during FPGA programming. Instead, FPGA chips contain a matrix of programmable logic blocks and interconnects, and the desired electronic circuit is constructed by enabling or disabling these as needed. This flexibility is valuable for developing device prototypes or optimising electronic circuits for real-world applications. Today, FPGAs are proving valuable in Artificial Intelligence, raising the question of whether they will be the key processor in future AI.


FPGAs are frequently compared to microcontrollers (MCUs). However, although they can often be used interchangeably within a particular application, FPGAs differ from MCUs in significant ways:

  • FPGAs make hardware more flexible. FPGAs do not have fixed electronic circuits, whereas those of MCUs are printed into the silicon. Therefore, once an MCU is manufactured, its electronic circuitry cannot be changed. With an FPGA, an electronic circuit can be modified, often without even removing the chip from the device it is operating within.
  • FPGAs do not need a runtime loop. Unlike MCUs, which continuously run a program in a loop upon power-up, FPGAs, by default, behave more like actual physical electric circuits. Consequently, there is less of a time lag between inputs and outputs on an FPGA chip, as operations can proceed immediately rather than as part of a runtime loop. That said, FPGAs can also be configured with clocked, sequential logic, making them behave more like MCUs.
  • FPGAs can execute operations in parallel. FPGAs can be configured so that multiple digital circuits run at the same time in parallel. This is particularly useful for high-throughput compute operations such as those implemented in Artificial Intelligence (AI). MCUs, on the other hand, can only execute their operations sequentially as part of their runtime loop.
  • FPGAs cost more than MCUs. Due to their flexibility, FPGAs are generally more expensive than MCUs. However, FPGAs may be more economical in the long run if circuit design changes are necessary once installed in finished devices.
  • FPGAs offer better security at the hardware level. FPGAs tend to be more secure than MCUs because updates can be made to FPGA 'hardware' if a hardware-related vulnerability is discovered at a later stage. In contrast, if a hardware vulnerability is discovered in an embedded MCU, there is little anyone can do about it without recalling the device and physically changing it.
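The contrast above between an MCU's sequential runtime loop and an FPGA's parallel circuits can be sketched conceptually in Python. This is a software analogy only: real FPGA parallelism happens in hardware, and the doubling operation here is purely illustrative.

```python
def mcu_style(inputs):
    """MCU model: a single runtime loop processes inputs one after another."""
    outputs = []
    for x in inputs:              # sequential: each step waits for the previous one
        outputs.append(x * 2)
    return outputs

def fpga_style(inputs):
    """FPGA model: each input feeds its own dedicated circuit, so in hardware
    all outputs would be produced concurrently (modelled here as a comprehension)."""
    return [x * 2 for x in inputs]

print(mcu_style([1, 2, 3]))   # [2, 4, 6]
print(fpga_style([1, 2, 3]))  # [2, 4, 6] -- same result, but computed "all at once" in hardware
```

The results are identical; the difference is that on an FPGA each element's circuit exists physically and operates at the same time, whereas the MCU must step through the loop.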

What is an eFPGA?

An eFPGA or embedded Field-Programmable Gate Array is simply an FPGA that is part of the electronic circuitry of another chip. Typically, the chip is an Application-Specific Integrated Circuit (ASIC) or System on a Chip (SoC: essentially all the components needed for a complete system on a single chip). Unlike traditional FPGAs that come as standalone devices, eFPGAs are optimised to work seamlessly with the surrounding circuitry of the larger chip. Due to their overall smaller footprint, these embedded FPGAs are also particularly useful when the power and physical space in a device are limited.

FPGA Applications

FPGAs are most often used in certain key areas:

  • Hardware Prototyping: electronic devices can be optimised using FPGA-mediated digital circuits before an unmodifiable physical integrated circuit is manufactured.
  • Space Electronics: FPGAs are often used in space hardware, such as satellites, where continued physical access to the electronics is difficult. Using FPGAs allows changes and upgrades to the electronic circuitry to be made remotely, so hardware upgrades or fixes for radiation-induced faults remain possible even after deployment in outer space.
  • Artificial Intelligence: FPGAs can be designed to have multiple independent circuits working in parallel. In this way, they can be made to run as electronic neural networks which are the underlying mainstay of artificial intelligence capabilities.

FPGAs and AI

Diagrammatic representation of an artificial neural network


As alluded to earlier, FPGAs are made up of configurable logic blocks. Each of these logic blocks contains multiple logic cells capable of carrying out mathematical operations. AI models, in turn, are networks of artificial neurons performing numerous multiply-accumulate (MAC) operations on tensors (multi-dimensional matrices). An FPGA therefore serves as an excellent platform on which to implement an artificial neural network.
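The MAC operation at the heart of each artificial neuron can be sketched in plain Python. This is a software model of what an FPGA's logic cells would compute in parallel hardware; the input values, weights, and bias below are purely illustrative.

```python
def neuron_mac(inputs, weights, bias):
    """Multiply-accumulate (MAC): the core operation of an artificial neuron.
    Each step multiplies an input by its weight and adds it to the accumulator."""
    acc = bias
    for x, w in zip(inputs, weights):
        acc += x * w  # one multiply-accumulate step
    return acc

# Illustrative values: three inputs feeding a single neuron
print(neuron_mac([1.0, 0.5, -2.0], [0.5, 0.25, 0.125], bias=0.0))  # -> 0.375
```

On an FPGA, each of these multiply-add steps (and indeed each neuron) can be mapped to its own dedicated logic, so they need not execute one after another as they do here.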

Importantly, various types of artificial neural networks exist in AI, differing in the number of neurons, layers, and connection arrangements. As a result, each type of neural network requires its own unique configuration of logic blocks within the FPGA.

Crucially, however, during the AI 'learning' process, it is the values used in the MAC operations that are adjusted, not the configuration of the connections or logic itself. Learning entails modifying parameters (the weights, or connection strengths) without altering the logic blocks themselves.
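The point above can be illustrated with a minimal, self-contained sketch of one learning rule (a single-neuron gradient-descent step; the inputs, target, and learning rate are illustrative assumptions, not from the article):

```python
def train_step(weights, inputs, target, lr=0.1):
    """One illustrative learning step: only the weight *values* change;
    the structure (which inputs feed which neuron) stays fixed throughout."""
    prediction = sum(w * x for w, x in zip(weights, inputs))  # the MAC operation
    error = prediction - target
    # Gradient-descent update applied to the weights only
    return [w - lr * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
for _ in range(20):
    weights = train_step(weights, inputs=[1.0, 2.0], target=1.0)
# After training, the same MAC structure holds adjusted weight values,
# and the prediction is now very close to the target of 1.0.
```

In FPGA terms, the weights would live in registers or memory that the logic blocks read, so training updates data, not the circuit configuration.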

So Which Type of Semiconductor Chip will Dominate the Future of AI?

Artificial neural networks can be made to run on various processors including CPUs, GPUs, MCUs, ASICs and of course FPGAs. However, each has its advantages and disadvantages when it comes to processing power, speed, energy efficiency, and cost. 



Central Processing Units (CPUs) represent the central brain of most computers. While possessing considerable power for running neural networks, CPUs are specifically designed for sequential calculations, not parallel tasks. Consequently, even though CPUs are often the most powerful processing component of any computing system, they struggle with the parallelised requirements of a neural network. This means that it often takes them far longer to process AI workloads than less powerful but inherently parallelised GPUs. 



Microcontrollers (MCUs) resemble scaled-down CPUs, running instructions sequentially rather than in parallel. This makes them, like CPUs, less suitable for AI. Furthermore, MCUs are generally not powerful enough to run AI neural networks, especially when it comes to the training side of machine learning. Recently, however, AI inference, where a pre-trained model simply responds to input features, has become possible on MCUs through TinyML. With TinyML, the MCU simply runs a miniaturised version of a pre-trained AI model.
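The TinyML idea described above can be sketched in a few lines of Python: the model is already trained, so the device only evaluates it on new inputs. The weights, bias, and the "anomaly detection" framing below are illustrative assumptions, not a real deployed model.

```python
import math

# Frozen after training elsewhere; in TinyML these would be baked into the firmware
WEIGHTS = [0.8, -0.5]
BIAS = 0.1

def infer(features):
    """Run the frozen model on one input sample -- no training happens on-device."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1 / (1 + math.exp(-z))  # sigmoid activation -> a probability-like score

# Illustrative sensor reading scored by the pre-trained model
print(infer([1.0, 0.2]))  # a value between 0 and 1
```

Because inference involves only a fixed set of MAC operations and an activation function, even a modest MCU can execute it, whereas training the weights would be far beyond its resources.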



Unlike CPUs and MCUs, GPUs consist of many cores that can run mathematical operations in parallel. This architecture is much better suited to running an artificial neural network, especially when it comes to training an AI model. However, GPUs, as their name suggests, were originally designed and optimised for the mathematical operations of graphical workloads. Consequently, AI algorithms have to be adapted to run on hardware that is not optimised for neural network processing. With FPGAs and ASICs (see below), the hardware can instead be tailored to fit the AI algorithm, ensuring optimal performance for machine learning models.



The primary advantage of FPGAs (and ASICs) lies in their ability to configure the hardware specifically for artificial neural network operations. This can deliver superior AI processing speed and power efficiency compared to equivalent GPUs. Unlike ASICs, FPGAs can be reconfigured an unlimited number of times, allowing the hardware to be optimised for each application-specific neural network. Consequently, FPGAs are a preferred choice for machine learning model development and for prototyping AI-infused electronic devices.

FPGAs are, however, unlikely to become a universal chip for widespread use in finished consumer AI devices. This is because configuring FPGAs is more complex than fixed-hardware chips, and most users won't use the flexibility they offer. This keeps FPGAs in the domain of experts rather than the general public. After all, why include a more expensive, hardware-flexible chip in a finished device if the end user won't utilise that flexibility, and a fixed-hardware chip can perform the same job?



Finally, we arrive at Application-Specific Integrated Circuits (ASICs). This term encompasses processors optimised for specific real-world applications. In AI, ASICs include Google's TPUs and Amazon's Trainium and Inferentia chips, among others. These chips are hardware-optimised for the precise mathematical computations needed to train and deploy machine learning models. Unlike FPGAs, ASICs are fixed hardware devices, unalterable once produced. Because they are more cost-effective than equivalent FPGAs, and because most end users don't require an FPGA's hardware flexibility, ASICs emerge as the preferred choice for finalised AI devices. Consequently, they stand out as the most probable candidates for widespread use in future devices.

FPGA Programming

Programming Languages

Unlike physical electronic circuits, digital circuits for an FPGA are typically not designed using schematics. Instead, software tools are used to describe the desired circuit behaviour, which is then programmatically implemented on the FPGA. Programming an FPGA involves using lower-level languages called Hardware Description Languages (HDLs). Common ones include:

  • Verilog
  • VHDL
  • Lucid

While there is some support for higher-level languages and frameworks like Python, C, C++, TensorFlow, and PyTorch in FPGA programming, they are not commonly utilised by FPGA engineers. This is due to their limitations when it comes to optimising the FPGA chip for an application. Consequently, most FPGA programming is still carried out directly in HDLs.

Programming Platforms

Programming an FPGA typically entails employing an FPGA programming platform, where software configures the FPGA based on your code. Major FPGA chip manufacturers offer dedicated platforms for their respective FPGAs. For example, AMD (Xilinx) FPGAs use the Xilinx Vivado platform, while for Intel (Altera) FPGAs, the Intel Quartus Prime software is the platform to use.

Top FPGA Manufacturers

Currently, the top FPGA semiconductor manufacturers are:

  • AMD (FPGA-related acquisitions: Xilinx)
  • Intel (FPGA-related acquisitions: Altera)
  • Lattice Semiconductor Corporation (FPGA-related acquisitions: SiliconBlue Technologies)
  • Microchip Technology (FPGA-related acquisitions: Atmel, Microsemi, Actel)
  • Achronix Semiconductor Corporation
  • QuickLogic Corporation
  • Efinix
  • GOWIN Semiconductor Corporation


Considering the advantages of FPGAs in AI, it is easy to see why their future role in the field has been heavily hyped. Yet despite their key benefits for optimising machine learning models compared with other processors, widespread use in future consumer devices seems unlikely. Nevertheless, FPGAs still have a role to play in AI, both now and in the future. Their ability to optimise hardware for any type of neural network makes them ideal prototyping processors for AI application development. FPGAs are therefore likely to remain the preferred processors for AI researchers, developers, and anyone optimising a neural network for a specific application.