Why Edge Computing Is The Next Technology Trend Worth Watching

Originally published on Huffington Post on January 10, 2018

Bitcoin buzz has reached stratospheric levels that are tough to rival. Nonetheless, it does lead one to think about which of today’s emerging trends will bear fruit in the coming years and become the next Bitcoin. Edge computing, though not likely to hit Bitcoin levels of buzz, is one trend worth watching.

It might be helpful to start with an explanation and a little history around general-purpose processing (compute) platforms. For the sake of clarity, let’s define the term “platform” as a well-documented computing resource with interfaces that are broadly adopted by third-party developers.

Processing platforms emerged nearly 80 years ago. The first programmable platforms took the form of highly centralized computers that came to be called mainframes. Think IBM. Then, in the 1970s, the era of the PC was born, driving a wave of distributed processing and applications and the rise of Intel and Microsoft. Finally, with the rise of internet application architectures, cloud compute platforms emerged in a new wave of centralization. Think Amazon AWS.

Now the tides are shifting back toward distributed processing platforms. Mobile platforms are one example. Edge computing is an emerging distributed platform. Sometimes referred to as “fog computing,” edge computing is general-purpose processing power deployed at the edge of the distribution network, at the last point before data reaches the device that requests it. Edge computing can be situated at a cable company head end, next to a wireless tower or in an enterprise facility.

Companies like Vapor.io and EdgeMicro are making a play for edge computing. AT&T is betting on it too. Who ultimately dominates edge computing is anyone’s guess, though IoT (Internet of Things) sensors and 5G (the fifth generation of wireless networking technology) look to be key drivers of the phenomenon.

Data volumes will be the first accelerant.

With IoT, we’re entering an age where we’re instrumenting nearly everything that can be measured. That generates data — mind-numbing amounts of it. Video feeds create high data volumes, but so do energy metering and emerging vehicle platforms, to name two. According to Intel, each autonomous vehicle will generate four terabytes of data a day, and Ericsson estimates that mobile data traffic will grow eightfold over the next six years.
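
For a rough sense of scale, here is a minimal back-of-the-envelope calculation in Python. The four-terabyte figure is Intel’s estimate above; everything else is plain arithmetic:

    # Rough sustained-bandwidth arithmetic for the Intel estimate above:
    # four terabytes of data per autonomous vehicle per day.
    TB = 10**12                     # bytes per (decimal) terabyte
    bytes_per_day = 4 * TB
    seconds_per_day = 24 * 60 * 60  # 86,400

    bits_per_second = bytes_per_day * 8 / seconds_per_day
    print(f"Sustained rate per vehicle: {bits_per_second / 1e6:.0f} Mbps")
    # Prints roughly 370 Mbps -- around the clock, for a single vehicle.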

Sending all that data across the entire network is inefficient and saturates data pipes. As a result, there are strong motivations to move processing power ever closer to those data sources, to locally distill the data and derive conclusions — i.e., edge computing.
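
As a minimal sketch of what “distilling locally” can mean in code (the sensor window, threshold and distill function here are all hypothetical, not from any real deployment): an edge node reduces a day of raw readings to a compact summary, and only the summary crosses the wide-area network.

    import statistics

    def distill(readings, threshold):
        """Collapse a window of raw sensor readings into a compact summary.

        Only this summary -- not the raw stream -- is sent upstream,
        which is the bandwidth argument for edge computing in miniature.
        """
        return {
            "count": len(readings),
            "mean": round(statistics.mean(readings), 3),
            "max": max(readings),
            "over_threshold": sum(1 for r in readings if r > threshold),
        }

    # Hypothetical example: 86,400 once-per-second temperature samples
    # (one day's worth) collapse into a record of a few dozen bytes.
    window = [20.0 + (i % 7) * 0.1 for i in range(86_400)]
    print(distill(window, threshold=20.5))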

Many IoT devices currently embed their data processing logic in dedicated chips built for point functions, but more generalized computational resources will be needed soon. Why? As soon as one burns logic onto a chip and deploys it, someone else comes up with secondary and tertiary applications for that data. Burning all of those follow-on “algorithms” into dedicated chips (that then need to be deployed) is simply untenable over the long haul. We need more generalized processing resources that can be upgraded dynamically.
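
A minimal sketch of what “upgraded dynamically” could look like; the handler names, register helper and sample values are illustrative, not any particular vendor’s API. A general-purpose edge node can accept new analyses of a stream it already receives, with no new silicon:

    # Illustrative: a general-purpose edge node where new "algorithms"
    # register at runtime, in contrast to logic burned into a fixed chip.
    handlers = {}

    def register(name):
        """Attach a new analysis to the existing data stream."""
        def wrap(fn):
            handlers[name] = fn
            return fn
        return wrap

    @register("primary")            # the function the device shipped with
    def detect_peak(sample):
        return sample > 100

    @register("secondary")          # a follow-on use of the same data,
    def normalized_load(sample):    # added later with a software update
        return sample / 240.0

    def process(sample):
        # Every registered analysis runs against the same deployment.
        return {name: fn(sample) for name, fn in handlers.items()}

    print(process(120))             # {'primary': True, 'secondary': 0.5}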

Latency will eventually become a driver.

As 5G rolls out, with incredible “last hop” network performance, entirely new capabilities become possible around all the buzzwordy stuff. Augmented reality. Autonomous vehicles. Drones.

Many of these applications are exceptionally time-sensitive and cannot afford the round trip to the cloud and back. For instance, lags in autonomous vehicle intelligence will lead to accidents and loss of life, lags in augmented reality interfaces to motion sickness, and lags in drone traffic systems to chunks of metal raining from the sky.

Long story short, many applications can’t wait for the full round trip to distant processing, just as soldiers can’t wait for a messenger to ask central command when to shoot. Some decisions need to be made locally and immediately. General-purpose, localized processing power can serve such scenarios.
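
A rough worked example of the round-trip math; the latency figures are assumptions in commonly cited ranges, not measurements:

    # How far does a vehicle travel while waiting on a remote decision?
    # Assumed round trips: ~100 ms to a distant cloud region, ~5 ms to
    # an edge node one network hop away.
    speed_kmh = 100
    meters_per_ms = speed_kmh * 1000 / 3_600_000  # ~0.028 m per ms

    for label, rtt_ms in (("cloud", 100), ("edge", 5)):
        blind = meters_per_ms * rtt_ms
        print(f"{label}: {rtt_ms} ms round trip = {blind:.2f} m of blind travel")
    # cloud: ~2.78 m of blind travel per decision; edge: ~0.14 m.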

Still, big questions loom.

There are many considerations and implications beyond data volumes and latency. We’re at the earliest stages of the journey, and massive questions remain open. A few that arise immediately:

  • Who will run the developer platform(s) that expose edge processing resources in a clean, predictable way? Will it be a localized extension of Azure or Amazon AWS, or someone new?
  • What are the unique consequences of having location-specific computing, specifically around commerce, authentication and security?
  • What role will the wireless carriers and last-mile networks play around edge computing? How will systems evolve to handle latency-sensitive solutions that require coordination across carrier networks?
  • What are the regulatory considerations and controls that need to be exposed for property owners and city councils to assert policy and manage capabilities within their domains?
  • What is the role of open source software? What new standards will emerge to move the bar for innovation upward?

It’s exciting to think about the scale and impact such systems might ultimately drive. Many of the most promising technologies of tomorrow might be built atop edge compute, such as real-time extraction of intelligence from video feeds, augmented reality experiences in public spaces, autonomous vehicle orchestration in high-traffic zones, drone traffic control systems and advanced, resilient energy networks.

Precise futures are tough to predict. Nevertheless, edge computing could easily become the next major development platform underpinning smart cities. And who knows? Edge computing might one day even incorporate Bitcoin as the transactive layer supporting it.