Data processing units (DPUs) are a new class of programmable, high-performance processors. Paired with high-performance network interfaces, DPUs accelerate and offload the network and storage functions that data center servers would otherwise have to handle themselves. Many IT specialists believe DPUs have the potential to join CPUs and GPUs as one of the pillars of computing.
What does a DPU do?
A DPU combines, on a single chip, an industry-standard, high-performance, software-programmable multi-core CPU, a high-performance network interface, and a set of flexible, software-programmable acceleration engines. Together, these components make it possible to build an isolated, cloud-scale computing platform, and they point the way toward the next generation of cloud computing technology.

DPUs as stand-alone embedded processors
You can use a DPU as a stand-alone embedded processor, but it is more often integrated into a smart network interface card, or SmartNIC, a network interface controller built for contemporary high-end servers.
Some vendors build on established Arm cores, which benefit from the broad Arm CPU ecosystem's mature development tools and application infrastructure. Some IT professionals also believe a DPU can be fully defined by its embedded high-performance CPU, with that CPU handling all of the data path processing.
Storage support on a DPU
The DPU gets so much attention for a reason. DPUs are versatile, and one way to use them is for storage support in your data center. For example, Non-Volatile Memory Express (NVMe) storage devices can be attached directly to the DPU's Peripheral Component Interconnect Express (PCIe) bus.

In addition, a DPU gives you more reliable access to remote storage devices over NVMe over Fabrics (NVMe-oF). The DPU presents these remote devices to the host system as standard NVMe devices, which simplifies connectivity to remote storage because you no longer need special drivers to reach it.
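To make the "no special drivers" point concrete, here is a minimal sketch, assuming a Linux host where the DPU (or a locally attached drive) exposes namespaces as standard /dev/nvmeXnY block devices. The device names and sysfs paths are the usual Linux ones; nothing here is DPU-vendor specific, which is exactly the point.

```python
# Minimal sketch: enumerate NVMe namespaces as seen by a Linux host.
# Assumes remote (NVMe-oF) or local namespaces are exposed as standard
# /dev/nvmeXnY block devices; no vendor-specific driver is involved.
import glob
import os
import re

def list_nvme_namespaces():
    """Return (device_path, model, size_in_gib) for each NVMe namespace."""
    namespaces = []
    for dev in sorted(glob.glob("/dev/nvme*n*")):
        # Keep whole namespaces only; skip partitions such as /dev/nvme0n1p1.
        match = re.fullmatch(r"/dev/(nvme\d+)n\d+", dev)
        if not match:
            continue
        name = os.path.basename(dev)                    # e.g. "nvme0n1"
        ctrl = match.group(1)                           # e.g. "nvme0"
        model_path = f"/sys/class/nvme/{ctrl}/model"
        size_path = f"/sys/block/{name}/size"           # size in 512-byte sectors
        model = open(model_path).read().strip() if os.path.exists(model_path) else "unknown"
        sectors = int(open(size_path).read()) if os.path.exists(size_path) else 0
        namespaces.append((dev, model, sectors * 512 / 2**30))
    return namespaces

if __name__ == "__main__":
    for dev, model, gib in list_nvme_namespaces():
        print(f"{dev}: {model} ({gib:.1f} GiB)")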
Data path processing
Replacing the regular x86 CPU with an embedded CPU for high-speed data path processing is not worth it. Think about it: if 100 Gigabit/sec packet processing brings an x86 to its knees, why would an embedded CPU perform any better?
Instead, the network interface itself must be powerful and flexible enough to handle network data path processing, while the CPU is reserved for control path initialization and exception processing, nothing more.
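A back-of-the-envelope calculation shows why the claim about 100 Gigabit/sec holds. The sketch below assumes worst-case minimum-size 64-byte Ethernet frames plus the standard 8-byte preamble and 12-byte inter-frame gap; the 3 GHz clock is just an illustrative figure for comparison, not a value from the article.

```python
# Back-of-the-envelope: per-packet time budget at 100 Gbit/s line rate.
# Assumes minimum-size 64-byte Ethernet frames plus 8-byte preamble and
# 12-byte inter-frame gap; the 3 GHz CPU clock is an assumed figure.
LINK_BITS_PER_SEC = 100e9          # 100 Gbit/s
FRAME_BYTES = 64 + 8 + 12          # frame + preamble + inter-frame gap
CPU_CLOCK_HZ = 3e9                 # assumed host CPU clock for comparison

packets_per_sec = LINK_BITS_PER_SEC / (FRAME_BYTES * 8)
ns_per_packet = 1e9 / packets_per_sec
cycles_per_packet = CPU_CLOCK_HZ / packets_per_sec

print(f"Packet rate:      {packets_per_sec / 1e6:.1f} million packets/s")
print(f"Time budget:      {ns_per_packet:.2f} ns per packet")
print(f"Cycle budget:     {cycles_per_packet:.0f} cycles per packet per core")
```

At line rate this works out to roughly 149 million packets per second, or about 6.7 ns and only around 20 clock cycles per packet per core. That budget is far too tight for a general-purpose CPU, which is why the data path belongs in dedicated hardware on the network interface.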
Final thoughts
Now that we know what a DPU really is, it is easier to understand why people are making a fuss about high-performing data center servers. It comes down to the DPU's ability to offload storage and network functions, freeing the host processor to concentrate on running the operating system and applications.
DPUs are indeed an asset to tech giants. By adding this technology to their servers, users can finally push the CPU to its full extent while offloading routine network and storage handling. If you're looking for manufacturers, you can check what NVIDIA, Marvell, Fungible, Pensando, Broadcom, Intel, and Kalray are up to. They are a good group to start with when investigating this type of solution.
YouTube: What is a DPU – A Quick STH Primer to the New Processor (ServeTheHome)
Photo credit: All images shown are owned by NVIDIA and were provided as part of their press assets.