
In the Clouds: The Times They Are a-Changin’

February 3, 2020 | Postcards from the Edge

By Mahdi Yahya

CEO & Co-founder, Ori

I spent my twenties in data centers around the world, plugging in cables, building networks, and chomping on tuna sandwiches with engineers in the breakroom—all while I was coming from or going to my next Meisner class or a Coriolanus rehearsal. 

Living between the damp, smelly backstage hallways of London theatres and the cold and noisy ones of Telehouse was probably the most formative time of my life. Both are hidden, distant worlds, unacknowledged by the general public, yet continually evolving to produce a better show for the masses.

There is no doubt that the cloud—the technical backbone of our current modern world—is changing. And a lot of that change is attributed to the recent rise of investments in edge computing.  

However, while the ecosystem around edge computing is on the rise, we should stop looking at edge computing as a successor to the cloud and start seeing it as an essential gateway to what will come next on a global level.

The marvel of edge computing is that it attracts all sorts of dreamers. It is a concept so distinct in thinking, yet incredibly complex in execution. 

No doubt, investments in edge computing are leading us towards an immersive and autonomous future. However, I am exhausted by the promises of seamlessly connected worlds, cars that drive themselves, and smart cities that are shamelessly touted as a show of marvel and wonder. 

The promise on the poster outside the theatre is pretty far from what is on stage. We may be able to achieve flying cars, but there is a lot to be addressed first.

It’s Time to Talk About Computing

We’ve been accustomed to accessing resources in highly available, centralized data centers for some time now. But how do we access computing resources that sit outside these comfortable environments? As a developer or enterprise, how do I distribute my application across thousands of individual locations, and in dozens of geographies?

Applying the principle of ease of access pioneered by centralized computing to the broader, distributed resources around us will enable computing anywhere and everywhere. As a result, end devices will become disposable, free and—ultimately—worthless.

Personal-Computing-as-a-Service

The boundaries between our phones, watches, tablets, computers, and TVs are blurring. Machines, things, and humans are in constant communication, producing extraordinary amounts of data daily.

And I can envision a near future where paying $1,000 for an iPhone will become a thing of the distant past. Dummy devices will be able to pull an operating system from the nearby edge with infrastructure consumed in real-time, en masse: invisible, but highly available. 

Is it far-fetched? Not really. Global infrastructure is in constant flux, and we are on the cusp of the next significant wave, a change that will pave the way for computing anywhere. Edge computing might be the trigger that ignites a structural metamorphosis of the global backbone of the internet—changing it for generations to come.

The Old New World

Cloud services allowed developers to create centrally managed applications that are deployed globally to nearly every region of the planet. There has always been a border between cloud and telecommunications networks: cloud providers offered a set of technologies to develop applications for global deployment, while telecommunications networks provided access to these applications at the local level.

These boundaries are starting to blur, and the spectrum spanning the internet, public, private, and hybrid clouds, and telco networks is beginning to merge.

Communication carriers operate in multiple geographies and already manage vastly distributed resources: precisely the kind of infrastructure that could support a new generation of services delivered closer to the edge, opening up new commercial opportunities and an ideal environment for innovation.

This challenging move, however, is not as straightforward as it sounds, and many refer to it as the “Last Mile” challenge. The telcos that solve the challenge of last-mile delivery optimization have an opportunity to capture significant value in this new world. But capturing this market requires an intelligent and careful approach—one that involves getting more out of what we have, making the most out of what we build, and optimizing the network’s geographic distribution, wide-area presence, and local capabilities.

In a future where everything is connected, communication networks must deliver where the cloud falls short. 

Today, the telco edge is viewed by many as a completely separate realm from what we deploy in the cloud, and, to some extent, this is an accurate assessment given the specialized hardware and location-specific deployments characteristic of telco networks. These specialized systems are indeed what will make up telco edge computing initially. Internal virtualization initiatives that aim to transform the telco network will be a crucial driver of early edge computing adoption, much like Amazon built AWS to serve the needs of its e-commerce business. And while the telco edge is measured today by a location or a specific set of hardware, it is software that will take it to global scale and the mass market.

Hello Software, My New Friend

Figuring out how best to run workloads is not a new problem. We continuously strive to build new ways to run workloads almost anywhere, working to abstract away—and not have to worry about—what’s ticking away underneath.

Hardware is by its nature stable and permanent. The only way we can achieve the promise of the edge is by evolving how software interacts with physical hardware: infrastructure needs to be as agile and flexible as possible.

Parallels between network virtualization strategies and cloud-native distributed models must not be ignored. Through the numerous network virtualization, open architecture, and open interface efforts of the past few years, telcos are preparing the ideal environment to redefine third-party access to their infrastructure.

The growing adoption of Network Functions Virtualization (NFV) is freeing networks from being locked into monolithic machines, letting them run seamlessly over generic hardware, virtual machines, or even containers. This gives networks new capabilities to scale hardware independently from the network itself, and the flexibility to host both network functions and external applications in the same location, if not on the same machine.
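
To make that concrete, here is a minimal sketch, in Python, of a single pool of generic hardware at a network site hosting a virtualized network function alongside a third-party application. All the class names, sites, and numbers are hypothetical illustrations for this post only, not any real NFV orchestrator or operator API.

```python
# Illustrative sketch only: the classes, names, and figures below are hypothetical,
# not any real NFV orchestrator or operator interface.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Workload:
    name: str
    kind: str          # "network-function" or "application"
    vcpus: int
    memory_gb: int


@dataclass
class EdgeNode:
    """Generic hardware at a network site, shared by network functions and external apps."""
    site: str
    vcpus: int
    memory_gb: int
    workloads: List[Workload] = field(default_factory=list)

    def can_host(self, w: Workload) -> bool:
        used_cpu = sum(x.vcpus for x in self.workloads)
        used_mem = sum(x.memory_gb for x in self.workloads)
        return used_cpu + w.vcpus <= self.vcpus and used_mem + w.memory_gb <= self.memory_gb

    def host(self, w: Workload) -> None:
        if not self.can_host(w):
            raise RuntimeError(f"{self.site} has no capacity left for {w.name}")
        self.workloads.append(w)


# One pool of generic hardware hosting a virtualized network function
# alongside a third-party application, at the same site and on the same machine.
node = EdgeNode(site="london-exchange-01", vcpus=32, memory_gb=128)
node.host(Workload("packet-core-upf", "network-function", vcpus=8, memory_gb=32))
node.host(Workload("video-analytics", "application", vcpus=4, memory_gb=16))
print([w.name for w in node.workloads])
```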

Many view edge computing as merely a compute resource. Yet hosting compute resources in a highly available network environment goes beyond offering raw compute power or storage capabilities to developers.

New connectivity services will emerge from edge computing in the future, and the borders between the internet, commercial clouds, hybrid and private clouds, the core network, and edge computing (fixed and mobile) will blur over time. Developers can then design their services with specific filters in mind (compute, storage, bandwidth, latency, location, density…), as sketched below. This seamless connection from public and private clouds to telco networks is what will give developers the fluidity to run workloads dynamically across a mixture of software, network, and cloud environments.
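
As one way of picturing those filters, the sketch below models a developer expressing compute, storage, bandwidth, latency, and location constraints, with a placement function choosing among candidate sites. Every name, site, and figure is hypothetical; this is not any operator’s or cloud provider’s API, just an illustration of the idea.

```python
# A minimal sketch of constraint-based placement across edge sites.
# All names and numbers are hypothetical and for illustration only.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class EdgeSite:
    name: str
    region: str
    vcpus_free: int
    storage_gb_free: int
    bandwidth_gbps: float
    latency_ms_to_users: float


@dataclass
class PlacementRequest:
    """The 'filters' a developer might express: compute, storage, bandwidth, latency, location."""
    vcpus: int
    storage_gb: int
    min_bandwidth_gbps: float
    max_latency_ms: float
    region: Optional[str] = None


def place(request: PlacementRequest, sites: List[EdgeSite]) -> Optional[EdgeSite]:
    """Pick the lowest-latency site that satisfies every filter, if any does."""
    candidates = [
        s for s in sites
        if s.vcpus_free >= request.vcpus
        and s.storage_gb_free >= request.storage_gb
        and s.bandwidth_gbps >= request.min_bandwidth_gbps
        and s.latency_ms_to_users <= request.max_latency_ms
        and (request.region is None or s.region == request.region)
    ]
    return min(candidates, key=lambda s: s.latency_ms_to_users, default=None)


sites = [
    EdgeSite("central-cloud-eu", "eu-west", 512, 10_000, 100.0, 45.0),
    EdgeSite("metro-edge-london", "eu-west", 64, 2_000, 25.0, 8.0),
    EdgeSite("cell-site-shoreditch", "eu-west", 8, 200, 10.0, 2.0),
]

request = PlacementRequest(vcpus=4, storage_gb=50, min_bandwidth_gbps=5.0,
                           max_latency_ms=10.0, region="eu-west")
print(place(request, sites))  # -> cell-site-shoreditch, the lowest-latency site that fits
```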

And software-defined network virtualization is an essential first step towards reaching that fluidity: by transforming the network into a widely distributed plane, we are creating hundreds of points in the network that are potentially ideal edge environments. The result is a “cloudified” environment that connects and integrates with existing centralized cloud environments, with end-to-end operator networks becoming a natural extension of regional data centers.

Innovations in software that abstract away the need to manage disaggregated hardware are what will make the case for the edge in the coming years. There is no doubt, however, that the edge is a mixture of network capabilities, different hardware, and new specialized software stacks that can address this jungle of connectivity and servers—ultimately giving developers and enterprises the ability to run any workload, anywhere (or everywhere!), and paving the way to the immersive, autonomous, and smart future we have all been promised.

From Republic to Empire

While the public clouds were able to scale globally by working independently, reaching global scale with the edge, and particularly the telco edge, requires a collaborative approach between all players. Telcos, no doubt, need to federate, agree, and collaborate on standards, APIs, and frameworks.

The first battle of the clouds is coming to an end. The likes of AWS, Azure, and GCP are now ruling in a triumvirate fashion, with AWS as Pompey. But we all know what happened to Pompey.

Edge computing is a stepping stone for the old world of the telco to merge with the new, giving birth to a new layer of infrastructure that will power our world for the next 20 years.

Whether it’s the telcos, the cloud folks, or all of the other players, it all goes back to the fundamental principle that global infrastructure is in constant flux. But what triggers this periodic transformation? Is it demand? Or is it people’s imagination? 

What will come first? Flying Cars or Cloud-Edge-Fog computing?

I might have the answer in 2035.


Based in London, Ori empowers developers and networks to build future applications through smart, immersive, and autonomous infrastructure.