
Planning for obsolescence might be difficult, but looking at the cutting edge of component development could give advance clues on how to adapt to future technologies.

Last week we outlined 3 strategies for dealing with end-of-life notices. This week, we’re digging a little deeper to uncover what the high-tech industry might be able to tell us about the future of electronics design.

Are New Design Approaches Game Changers For GPUs?

Graphics processing units (GPUs), originally designed for gaming, have been a vital complement to central processing units (CPUs) for deep learning models. This is because GPUs can process large amounts of relatively simple calculations in parallel, while CPUs are better suited for more complex algorithms that can be performed sequentially.
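The contrast above can be illustrated with a minimal sketch. The sequential triple loop below stands in for CPU-style scalar execution, while the single vectorized call stands in for dispatching many independent multiply-accumulates at once, as a GPU would (NumPy is used here only as a stand-in for data-parallel hardware; the function name is our own):

```python
import numpy as np

def matmul_sequential(a, b):
    """CPU-style: one scalar multiply-accumulate at a time, in order."""
    n, k = a.shape
    _, m = b.shape
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for p in range(k):
                out[i][j] += a[i][p] * b[p][j]
    return np.array(out)

a = np.arange(6.0).reshape(2, 3)
b = np.arange(12.0).reshape(3, 4)

# GPU-style: the whole computation is expressed as one data-parallel
# operation; the library (or, on real hardware, thousands of GPU cores)
# handles the independent multiply-accumulates together.
parallel_result = a @ b

assert np.allclose(matmul_sequential(a, b), parallel_result)
```

Deep-learning training is dominated by exactly this kind of large, uniform arithmetic, which is why GPUs complement CPUs so well for those workloads.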

But two alternatives are growing in popularity to boost processing speeds as machine learning scales in complexity: Application-Specific Integrated Circuits (ASICs) and Field Programmable Gate Arrays (FPGAs). In contrast to general-purpose integrated circuits (ICs), ASICs are fine-tuned for a specific purpose. ASICs are lightning-fast and more compact than general-purpose ICs, but their increased complexity makes them more prone to tape-out failure. Because their functions are hardwired into their circuitry, adjustments and upgrades would require altering their printed circuit boards (PCBs), which is technically possible but difficult. This makes them best suited to short-lived, computationally heavy workloads like crypto mining, where raw speed matters more than upgradability.

Hardware Upgrade Problems Are Being Solved With Software

FPGAs turn the ASIC hardware upgrade problem into a software solution. They’re a bit bulkier, slower, and more power-hungry than optimized ASICs, but users can completely reprogram microchip functions without modifying hardware components, making them the ideal choice for flexible systems. In practice, Microsoft Research has already utilized Intel FPGAs to build the world’s first hyperscale supercomputer. The work of Project Catapult, the company’s code name for this enterprise-level initiative, is summarized as follows:

[This] innovative board-level architecture is highly flexible. The FPGA can act as a local compute accelerator, an inline processor, or a remote accelerator for distributed computing. In this design, the FPGA sits between the datacenter’s top-of-rack (ToR) network switches and the server’s network interface chip (NIC). As a result, all network traffic is routed through the FPGA, which can perform line-rate computation on even high-bandwidth network flows.

System-On-A-Chip (SoC) Devices Are Emerging To Enhance Computing Power

NVIDIA created data processing units (DPUs) to accelerate its hyperscale generative AI workloads. Here’s how the company describes them:

In a modern software-defined data center, the OS executing virtualization, network, storage, and security can consume nearly half of the data center’s CPU cores and associated power. Data centers must accelerate every workload to reclaim power and free CPUs for revenue-generating workloads. NVIDIA BlueField data processing units (DPUs) offload and accelerate the data center OS and infrastructure software.

Similarly, Synopsys has released SoC neural processing units (NPUs) that can individually perform up to 440 trillion operations per second. The company believes this new development will address the demands of real-time computing with ultra-low power consumption for AI applications.

These are just a few of many innovations that have the potential to impact the development of new electronic components and, in turn, signal the end of the road for existing components. Some end-of-life announcements (EOLAs) might be difficult to predict, but looking at the cutting edge of component development could give advance clues on how to adapt to future technologies.
