What is… edge computing?

A monthly tech explainer series about the technology shaping our world today, from the Garage.

By Jeff Wise — January 12, 2023

A much-cited tech origin story recounts how in 1943, someone asked IBM chief Thomas J. Watson how many computers he thought the world needed. He replied, “Maybe five.” He was off by… a few. He’d probably be shocked by the ever-growing number of devices in our orbit, from smartphones and laptops to doorbell cameras and thermostats. Unlike Watson’s hulking mainframes, these gadgets can process data right where they sit, close to the user rather than in a distant data center — an approach called “edge computing,” since the work happens at the edge of a network.

Edge computing can save time and bandwidth by carrying out computation close to where it’s needed. Experts estimate that by 2025, three-quarters of all enterprise-generated data will be processed this way.

Illustration by Eric Chow

How it works

The idea is to extract maximum value from the flood of data available, a goal the standard approach doesn’t always meet. One recent McKinsey study, for instance, found that an oil rig had 30,000 sensors but used only about 1 percent of their data in decision-making. Edge computing can make better use of that data by processing it closer to where it’s generated and where it will be applied. Robots moving around inside a warehouse, for instance, can collectively decide how to plot their trajectories much more quickly, and with far less bandwidth, than if they had to send the information off to a cloud server farm. That could mean lower costs for companies and faster turnaround for consumers.
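To make the bandwidth argument concrete, here is a minimal sketch of edge-style processing: instead of streaming every raw sensor reading to a distant cloud server, an edge node summarizes the data locally and forwards only a compact result. All of the names here (`EdgeNode`, the alert threshold, the payload fields) are hypothetical, not from any real edge platform.

```python
# A toy illustration of edge computing: raw readings stay on the local
# node, and only a small summary would travel upstream to the cloud.
from statistics import mean

class EdgeNode:
    """Hypothetical edge node that reduces raw sensor data locally."""

    def __init__(self, alert_threshold):
        self.alert_threshold = alert_threshold

    def process(self, readings):
        """Reduce a batch of raw readings to a small summary payload."""
        return {
            "count": len(readings),
            "mean": mean(readings),
            "max": max(readings),
            # Forward only the readings worth escalating.
            "alerts": [r for r in readings if r > self.alert_threshold],
        }

# Four raw readings stay at the edge; the summary is all that's sent on.
node = EdgeNode(alert_threshold=95.0)
payload = node.process([72.1, 68.4, 97.3, 70.0])
print(payload["count"])   # 4
print(payload["alerts"])  # [97.3]
```

The same pattern scales up: an oil rig’s tens of thousands of sensors can be distilled locally into a handful of summaries and alerts, which is what makes the bandwidth savings possible.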

RELATED: What is… computer vision?

The a-ha moment

Cloud computing turned the computational power of server farms into a commodity, one that we take for granted as always-on and expansive in capacity. But Carnegie Mellon researcher Mahadev Satyanarayanan perceived the approach’s limitations — especially high latency — and in a 2009 paper in IEEE Pervasive Computing suggested an alternative: smaller, more widely distributed servers (or nodes) located in close proximity to end users. It became the foundational document for the new field of edge computing.

What it means for everyday life

Edge computing can multiply the efficacy of computational resources. For instance, it could allow autonomous vehicles to collectively optimize traffic flow on the fly, help retailers personalize their customers’ shopping experience, or allow wearable health monitors to understand and predict a user’s physical condition. One potential drawback is that distributed decision-making gives hackers a much larger attack surface. That’s why HP is working to make sure that devices at the edge, like PCs, printers, and networked peripherals, have security built in so that they can’t be compromised.

A major driver of edge computing is the rise of AI. In the past, the tools didn’t exist to take advantage of the wealth of data available at the edge. Machine learning now allows us to convert that data into business insights and actions.

How it might change the world

In the years to come, the spread of 5G wireless technology is going to bring the power of advanced computation — think generative AI and even quantum computing — ever further out into the world. For end users, it will mean the seamless emergence of powerful new capabilities. For enterprise customers, it will mean fast, fine-grained control and instantaneous understanding of their competitive environment. For all of us, it will mean digital technology that works faster, better, and smarter. You could say that’s the beauty of life on the edge.

READ MORE: What is… 5G?