Four Killer Services for the Wireless Edge

As telco operators look to harden their business models around edge investment, the conversation always comes back to use cases. Joseph Noronha of Detecon serves up what he sees as four killer services for the wireless edge.

Editor’s Note: This is a guest post from an industry expert. The State of the Edge blog welcomes diverse opinions from industry practitioners, analysts, and researchers, highlighting thought leadership in all areas of edge computing and adjacent technologies. If you’d like to propose an article, please see our Submission Guidelines.

In my previous post, I argued that the infrastructure edge, owned and operated by the telecom carriers, has become the critical bridge between the wired and wireless worlds. Telco operators have unique assets and are among the best positioned to catalyze the next generation of internet apps by deploying what I call “killer services.” These killer services could create new revenue streams for telco operators and accelerate the work of edge-native developers.

When I discuss edge computing among my peers, the conversation quickly comes down to the “use case” question. Every operator would like to see concrete, validated use cases before doubling down on their commitment to edge computing. While most operators today see the inevitability of edge computing, they don’t fully agree on whether it is an extension of business as usual or an opportunity to create a whole new market. Instead of engaging in this debate, I propose to lay out what I see as four killer service categories for the wireless edge:

  • Network services
  • Data compression/condensation
  • Compute offload
  • Artificial Intelligence

I’ll tackle each of these in turn.

Network Services

What is It?

Telco networks are becoming increasingly programmable, and operators now have the tools to expose network assets in a manner that application developers can easily consume. This is an evolutionary rather than a revolutionary case, but one of the most powerful. Think of the APIs that companies like Twilio offer. At their core, these APIs take straightforward telco capabilities such as SMS and voice calls and encapsulate them in a developer-friendly offering. At the wireless edge, there are new types of capabilities that are mostly mobile, location-specific and temporal, such as network congestion or traffic type within a cell sector. How could this information be monetized? It is not that operators have not tried. They have, and a good example is Norway’s Telenor. However, for all the power of the Telenor APIs, they never became a widespread solution. We still do not have a large-scale, multi-country way to programmatically access wireless network services, which hinders adoption by developers. Usability is another barrier; apart from a few exceptions such as Telenor, I daresay many developers would rather do without than jump through the hoops required to gain access to these assets and learn to use them.
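
To make the packaging concrete, here is a minimal sketch of what a developer-friendly API for wireless-edge network state might look like. It is hypothetical throughout: the endpoint, token and response fields are invented for illustration, and no operator exposes exactly this interface today.

```python
import requests

# Hypothetical operator endpoint exposing local, temporal network state.
# The URL, parameters and response fields are illustrative, not a real API.
EDGE_API = "https://api.example-operator.com/v1/network-conditions"

def get_cell_conditions(cell_id: str, token: str) -> dict:
    """Fetch congestion and traffic mix for one cell sector."""
    resp = requests.get(
        EDGE_API,
        params={"cell_id": cell_id},
        headers={"Authorization": f"Bearer {token}"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"congestion": 0.72, "dominant_traffic": "video"}

# An adaptive-streaming service could step down its bitrate when the local
# sector is congested, instead of guessing from end-to-end throughput.
conditions = get_cell_conditions("cell-4471-s2", token="<hypothetical-token>")
if conditions["congestion"] > 0.7:
    print("Sector congested; lowering video bitrate")
```

The point is less the specific fields than the packaging: as with Twilio and SMS, the network asset is reduced to one authenticated HTTP call.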

Why is it a Killer Service?

The infrastructure edge, at its essence, is all about location and things that are locally relevant. Use case scenarios such as anti-spoofing, location collaboration, and network utilization have limited benefit if handled by the central cloud; the edge serves as a quicker, simpler way of offering these services for developers, which can unlock new services for customers and new revenue streams for operators.

Data Compression/Condensation

What is It?

Compressing and condensing data close to the network edge addresses two challenges faced by today’s wireless networks:

  • Traffic today is highly asymmetric: roughly 90-95% of it is downstream, with limited upstream capacity. As enterprises and cities deploy IoT sensors that generate terabytes of data and consumers start looking to upload 8K video, it is anyone’s guess what long-term impact that will have on the network.
  • Shipping data to a centralized facility is not free. To an end consumer it may seem free, but when you’re transmitting zettabytes of data the costs add up quickly.

This raises the question: is all of this data relevant and essential enough to be handled in the central cloud? Would it be more prudent and economical to condense it close to the source in order to manage the network load?
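
As a rough illustration of what condensation at the source could mean, here is a toy sketch in which an edge node reduces a raw sensor stream to per-window summaries before anything crosses the backhaul. The window size and summary fields are arbitrary choices for the example.

```python
import statistics
from typing import Iterable

def condense(readings: Iterable[float], window: int = 1000) -> list[dict]:
    """Reduce a raw stream to per-window summaries before backhaul.

    Shipping one summary per 1,000 samples cuts upstream volume by
    roughly three orders of magnitude for this toy schema.
    """
    buf, summaries = [], []
    for value in readings:
        buf.append(value)
        if len(buf) == window:
            summaries.append({
                "count": len(buf),
                "mean": statistics.fmean(buf),
                "min": min(buf),
                "max": max(buf),
            })
            buf.clear()
    return summaries
```

Whether a mean and extremes are enough depends entirely on the application; the economics, though, scale with whatever reduction ratio the data can tolerate.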

Why is it a Killer Service?

While 5G promises significant increases in data-carrying capacity, simply shunting data around requires additional spend, and either the operator pays for this, or the developer of the service that uses the bandwidth does, or it’s passed on to the end consumer in the form of higher prices. If there is a way to handle this stream more intelligently, it could benefit all three parties in the equation.

Compute Offload

What is It?

Compute offload refers to either moving compute off the device or bringing the central cloud closer to the user. This is not necessarily about low latency; adding latency targets and strict SLAs to the equation brings a host of other challenges. It is more about saving battery life, form factor and cost on mobile devices, while offering users significantly greater computing power at their fingertips. Mobile operators can offer cloud-like services at the edge and charge for them in ways similar to centralized cloud services. You would not need the edge to run all your applications, but on the occasions when you do, you can tap into its resources. In turn, the wireless edge can run at high utilization by serving a large number of users: a win-win scenario. With specialized computing capabilities at the edge, such as those offered by GPUs, end users may get longer lives out of their smartphones. If latest-generation phone capabilities are delivered via edge computing, then consumers may not need to upgrade their phones every 6 to 18 months to get the latest capabilities.
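
A minimal sketch of the offload pattern, assuming a hypothetical operator-run compute endpoint: the device ships heavy work to the wireless edge when it is reachable and falls back to slower, battery-hungry local compute when it is not.

```python
import requests

# Illustrative endpoint only; a real offload API would be operator-specific.
EDGE_COMPUTE_URL = "https://edge.example-operator.com/v1/render"

def render_frame_local(scene: dict) -> bytes:
    # Slow on-device fallback; costs battery and thermal headroom.
    return b"<locally rendered frame>"  # placeholder result

def render_frame(scene: dict) -> bytes:
    """Prefer the wireless edge; degrade to local compute if unreachable."""
    try:
        # Offload: ship the task description, get the rendered result back.
        resp = requests.post(EDGE_COMPUTE_URL, json=scene, timeout=0.25)
        resp.raise_for_status()
        return resp.content
    except requests.RequestException:
        # No edge in reach: degrade gracefully rather than fail.
        return render_frame_local(scene)

frame = render_frame({"objects": 120, "resolution": "4k"})
```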

Why is it a Killer Service?

Applications continually demand more resources, and device vendors continuously have to provide more and better chipsets, but that does not mean the model will continue to scale. There are two issues here. First, device release cycles are still much slower than app release timelines, which means even the highest-end phones can quickly become outdated. Second, it’s not clear that today’s device economics will continue to work out (I may stand corrected here if, within two years, people get used to paying $2,000 for an iPhone, but I suspect there is a threshold).

Artificial Intelligence

What is It?

Artificial intelligence at the edge is really an extension of compute offload. It refers to operators hosting AI and machine learning microservices on the edge. These would likely be workloads that are too computationally intensive to run on the end devices (especially sensor-class devices), and the wireless edge could serve as a natural host.

Why is it a Killer Service?

With all the hype around AI, it is easy to miss the fact that we are just at the initial stages of discovering its true impact. By most estimates, AI will become increasingly commonplace over the next decade. The proliferation of microservices and the rise of serverless computing make it practical to host AI-related services in an edge environment, where they can be invoked securely, tasked to execute instructions and then release their compute resources when complete, all in a seamless fashion. AI at the edge could spawn an entire ecosystem of third-party microservices, built by companies that provide key enabling services rather than complete end-user applications. A rich ecosystem of services would likely beget a marketplace focused on offering AI capabilities, similar to those of Microsoft and Algorithmia. Developers would have access to these services, verified to work with edge infrastructure and available on a pay-as-needed basis, all factors that further reduce the barrier to developing the next generation of pervasive and immersive applications for man or machine.
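
A toy sketch of that invoke-execute-release pattern for an edge-hosted inference microservice. The model runtime below is a stub standing in for whatever an operator’s edge platform would actually provide.

```python
from contextlib import contextmanager

class StubModel:
    """Stand-in for a real edge-hosted model runtime."""
    def __init__(self, name: str):
        self.name = name

    def infer(self, frame: bytes) -> list:
        return []  # placeholder detections

def load_model(name: str) -> StubModel:
    # Hypothetical: pin the model onto an edge GPU for this invocation.
    return StubModel(name)

def release(model: StubModel) -> None:
    # Hypothetical: hand GPU and memory back to the shared pool.
    pass

@contextmanager
def edge_gpu_session(name: str):
    """Acquire edge compute, yield a model, release when complete."""
    model = load_model(name)
    try:
        yield model
    finally:
        release(model)

# Compute is held only for the duration of the call, which is what lets
# one edge node serve many tenants at high utilization.
with edge_gpu_session("object-detector-v2") as model:
    detections = model.infer(b"\x00" * 64)  # dummy frame bytes
```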

Summary

The next time you want to think about what to do with the infrastructure edge, consider these four killer services. Based upon where you are in your edge strategy and deployment, they could justify a business investment and help accelerate the large-scale rollout of edge computing.

Joseph Noronha is a Director at Detecon Inc., Deutsche Telekom’s management consulting division, where he leads its practice in Emerging Technologies and Infrastructure. He has extensive “on the ground” experience with infrastructure players around the world, spanning the Americas, Europe, the Middle East, Africa and Asia. His interests lie in next-generation connectivity, IoT, XaaS and, more recently, edge computing – from product conceptualization and ecosystem building to driving and managing the commercial deployment of these services.

Opinions expressed in this article do not necessarily reflect the opinions of any person or entity other than the author.

Uniting Behind a Cloud-Native Edge

As the number of internet-connected devices reaches into the billions, we need cloud-native models that can facilitate edge and IoT applications. Jason Shepherd of the open source EdgeX Foundry explores this concept.

Editor’s Note: This is a guest post from an industry expert. The State of the Edge blog welcomes diverse opinions from industry practitioners, analysts, and researchers, highlighting thought leadership in all areas of edge computing and adjacent technologies. If you’d like to propose an article, please see our Submission Guidelines.

A few years back, the number of devices connected to the internet surpassed the number of people. In fact, it’s estimated that in 2019, there will be more connected “things” online than traditional end-user devices. Estimates vary widely on the total number of connected things over time, but no matter how you count, it’s going to be a lot. All of these things represent new actors on the internet and a huge catalyst for digital transformation.

The proliferation of connected things creates an inexorable shift to distributed computing, particularly at the edge of the network—often referred to as edge computing. For edge computing to be as robust as the cloud, we need a cloud-native ecosystem for building and deploying applications.

In this article, I will try to provide historical and technical context for edge computing, while also clearing up some of the confusing edge lingo that’s emerged. I’ll also touch upon how we must extend cloud-native practices to the edge, highlighting the importance of the EdgeX Foundry and Akraino projects within The Linux Foundation for facilitating an interoperable edge computing ecosystem.

The Pendulum Shifts to Edge

In the history of computing, the pendulum has faithfully swung every 10 to 15 years between centralized and distributed models. Given the sheer volume of networked devices going forward, it’s inevitable that we need distributed architectures because it’s simply not feasible to send all the collected data directly to the cloud.

There are three key technical requirements driving demand for edge computing:

  • Latency: It doesn’t matter how fast and reliable your network is — you just don’t deploy something like a car airbag from a cloud data center thousands of miles away;
  • Bandwidth: There’s an inherent cost associated with moving data, which is especially bad when transporting over cellular and even worse via satellite;
  • Security: Many legacy systems were never designed to be connected to broader networks, let alone the internet. Edge computing nodes close to field devices can perform functions such as root of trust, identity, encryption, segmentation and threat analytics for these as well as highly constrained devices that don’t have the horsepower to protect themselves. It’s important to be as close to the data source as possible so any issues are remedied before they proliferate and wreak havoc on broader networks.

Beyond the above technical reasons, the kicker for needing an increasing amount of edge computing is the total lifecycle cost of data. People who start with heavily cloud-centric solutions often quickly realize that chatty IoT devices hitting public cloud APIs can get super expensive. And on top of that, you then have to pay to get your own data back!

Exploring the Many Edges

So what is the “edge”? Fact is, there isn’t a single one.

  • To a telco, the edge is at the bottom of their cell towers or at their baseband units. This is the closest location to subscribers over which they have complete control, and to which they can move content and services in order to minimize latency for an optimal end-user experience and reduce overall bandwidth consumption throughout their core networks.
  • To an ISP or Content Delivery Network (CDN), the edge is their IT equipment in data centers on key internet hubs – they might call this the “cloud edge”. Same reasons.
  • Other edges include on-prem data centers, both the traditional kind and the ever-increasing proliferation of micro-modular data centers that help get more server-class compute closer to the producers and consumers of data.
  • Then come localized systems, including hyper-converged infrastructure and edge gateways sitting immediately upstream of sensors and control systems. A key differentiator for all of these on-prem edges is that they’re on the same LAN/PAN as the field devices themselves, so now we’re talking benefits for security and uptime for mission-critical applications.
  • And to an OT (Operational Technology) professional, the edge means the controllers and field devices (e.g. sensors and actuators) that gather data from the physical world and run their processes.

In effect, the location of edge computing is based on context, but all edge computing initiatives share the same goal of moving compute as close as both necessary and feasible to the users and devices needing it.

Fog vs. Cloud

The term “fog” is … foggy to a lot of people. Simply put, fog computing refers to the combination of all the edges and the networks in between: effectively everything from device to cloud. The fog and cloud are not incompatible. In fact, fog and cloud will work together.

The bottom line is that regardless of how we label things, we need scalable solutions for distributed computing resources to work together along with public, private and hybrid clouds while meeting the needs of OT and IT organizations in areas such as data ingestion, analytics, security and manageability.

IoT Needs a Cloud-Native Edge to Scale

Key to the concept of cloud-native is the utilization of modern devops, continuous delivery, loosely-coupled microservices and overall platform-independence. Important to understand is that the term cloud-native is more about how software is built and deployed than where it’s actually run.

It’s only logical that the same reasons cloud-native principles help companies develop and deploy massively scalable applications in the cloud also make them highly applicable across all the different edges. In fact, I would contend that these principles are necessary for advanced-class IoT.

EdgeX Foundry: Facilitating an Interoperable Cloud-Native Edge Ecosystem

Launched last year by The Linux Foundation and already backed by nearly 70 member organizations spanning 16 countries, the vendor-neutral EdgeX Foundry open source project aims to build an open framework for edge computing and facilitate an interoperable cloud-native edge ecosystem. It is not a standard; rather, it aims to be a de facto standard framework that brings together any mix of existing connectivity protocols with an ecosystem of value-added applications.

The EdgeX project is focused on doing just enough to drive industry alignment through common APIs governed by the project’s Technical Steering Committee without encroaching on where the real IoT money is – infrastructure, applications and services.

Nobody wins if you’re the hundredth person this week to write an application-level driver for the same device using the same “standard,” or to come up with foundational tools for security and management that end users can trust. A term I heard at a conference last week for this sort of work is “undifferentiated heavy lifting.”

You can read about the EdgeX community’s accomplishments in the first nine months after the April 2017 project launch, as well as key tenets and priorities for this year, in my post here. Eric Brown of Linux.com also did a great writeup on the recent “California” code release and what’s in store for the project’s “Delhi” release in October, when we’re also going bigger at IoT Solutions World Congress with the launch of developer kits and more Vertical Solution Working Groups in areas such as Buildings and Transportation.

For more info on the project, or to learn how to get involved, including in these domain-specific working groups, visit www.edgexfoundry.org or email info@edgexfoundry.org. We invite you to join the growing community as a project member, contributor or end user. Or better yet, all of the above!

Collaboration with the Akraino project

Arpit Joshipura, the GM of Networking and Orchestration at The Linux Foundation, often talks about the mission and scope for the Akraino project. I had the pleasure of attending the Akraino Summit and the energy in the room was fantastic. We talked about how the EdgeX Foundry and Akraino projects are highly complementary and how we can collaborate to ensure that each effort is valuable independently but curated to work great together as a full open source stack with interoperability APIs that address OT and IT needs across the many edges.

In particular, we identified the opportunity to increase context awareness between EdgeX application-level APIs, which provide secure and manageable interoperability between devices and applications, and Akraino APIs, which foster interoperability between underlying distributed edge infrastructure functions such as workload orchestration and networking. The possibilities here are huge: underlying infrastructure that can dynamically optimize itself based on context to serve the needs of any collection of interoperable EdgeX-compliant microservices (using the key EdgeX APIs, regardless of how proprietary the overall code is).

We also spoke about the potential to coordinate efforts between the Vertical Solution Working Groups in EdgeX and the domain-specific blueprints offered by Akraino. Further, we discussed bridging to other key edge efforts including testbed activity that’s spinning up with EdgeX in the Industrial Internet Consortium as well as broad, unifying resources such as the Open Glossary of Edge Computing.

Winning together

Net-net, we all win if we work together to drive an open and massively-scalable cloud-native edge ecosystem. Plus, by linking important open source efforts like EdgeX and Akraino with distributed ledger technologies over time, we can achieve what I believe is the holy grail of digital – monetizing data, resource-sharing and services through people you don’t even know!

Jason Shepherd is Chair of the EdgeX Foundry Governing Board and CTO for IoT and Edge Computing at Dell Technologies. To read more about the concepts in this article and beyond, check out his 5-part Tech Target blog series running through mid-October. Follow Jason and EdgeX Foundry on Twitter at @defshepherd and @EdgeXFoundry, respectively.

Opinions expressed in this article do not necessarily reflect the opinions of any person or entity other than the author.

The Infrastructure for Autonomous Vehicles

Autonomous vehicles promise to transform how we move about our world, as well as how we deliver and receive goods and services. However, they also create significant challenges for modern city infrastructures. 

Whether we’re talking about personal cars saving us from the daily commute, unmanned surveillance aircraft keeping us safe or aerial package delivery systems conjuring up the next frontier of e-commerce—we need to first upgrade our infrastructure in order to bring about this new autonomous world.

To create a robust platform for autonomous vehicles, we need to improve our infrastructure in three ways. First, we need to improve our wireless networks to provide near instantaneous connectivity for thousands of devices. Second, we need to add megawatts of data center capacity for edge computing. Third, we need large-scale electrical system upgrades—mostly in the form of networked charging stations—to keep all of these self-navigating machines operating and charged.

What do we Mean by Autonomous Vehicle?

The self-driving car has been the most visible protagonist in the autonomous vehicle story. Whether you’ve been excited about the cute Waymo cars roaming the streets of San Francisco or you’ve been sobered by the Uber accident in Tempe, most of us have been exposed to the idea of self-driving cars in one form or another. Since most of us drive to work, we can easily imagine a self-driving car easing our daily commute. But the self-driving car is just part of the autonomous vehicle story, and most likely a latter chapter at that.

Prior to self-driving cars arriving en masse, we’ll see smaller, lighter unmanned vehicles that don’t present the same life-safety issues. In downtown Berkeley, for example, tiny delivery vehicles share the sidewalk with pedestrians and shuttle food to Cal students without the need for human drivers.

Even more astonishing are the autonomous flight systems for self-driving drones that are being deployed today. Unlike cars, autonomous drones occupy the largely unobstructed low-altitude sky and are both less expensive and more versatile. They are already beginning to transform industries, removing a lot of the cost and time required to inspect construction sites and public works, for example, as a way to quickly and efficiently identify safety issues, quality concerns and operational improvements. Autonomous drones gather data and insights at a fraction of the usual time and cost, while providing a new level of real-time visual intelligence. Tomorrow, aerial package delivery systems will fundamentally transform how companies like Amazon and UPS operate. And in the future, Uber Elevate will revolutionize urban air transportation.

Wireless Networks

To operate safely and effectively, autonomous vehicles of all shapes and sizes need fast, always-on wireless network connectivity. The optimal networks to provide this connectivity are cellular networks, thanks to their ability to support client devices roaming at high speed across large areas, as many autonomous vehicles will do. 5G cellular networks aim to provide lower latencies than today’s 4G networks, which is crucial for autonomous vehicles to work at their maximum level of efficiency and safety.

Network connectivity is especially important for all types of autonomous vehicle, as it connects each vehicle to a larger pool of compute and data storage resources than it can carry locally. This is essential, providing autonomous vehicles with the power required to operate at high speeds, prevent safety issues and process, in real time, the large amounts of data they generate.

Beyond safe and effective operation, network connectivity for each autonomous vehicle will also be required to provide tracking. It is likely that when autonomous vehicles are let loose on the streets of a modern city, they will be tracked in terms of speed, location and direction at all times by a local regulatory agency as well as by the vehicle fleet operator. To do this, GPS and high-performance wireless network connectivity are essential.

Edge Data Centers

Data centers — specifically edge data centers — are key to scalable autonomous vehicles. These are data centers that are smaller than the centralized data centers used by Amazon and others today. They will be deployed in dozens of locations throughout a metropolitan area, such as at the base of cell towers — approximately every 15-20 km — placing them as close to the network edge, and to its users, as possible. These edge data centers will allow software applications to run in close proximity to the vehicles themselves, ensuring the software for these autonomous fleets has the minimum possible latency to the vehicles in the local area.

While autonomous vehicles will have specialized control processes that run on board to ensure continuous operation even during a network outage, they will also depend greatly on local edge data centers to “see” beyond the range of their sensors, receive complex decision support, and coordinate with other vehicles and traffic flows. This is vital for safety and efficiency.

Edge data centers will also enhance vehicle data collection and analysis. The amount of data generated by an autonomous vehicle each second is enormous, from LIDAR and HD video camera feeds to the many sensors on the vehicle’s wheels and transmission. To analyze this data as thoroughly as possible in real time, it will not only be processed on board the vehicle; the majority will also be transmitted to nearby edge data centers, where it can be processed more completely, with the results sent back to the vehicle as quickly as possible. By synthesizing data from many autonomous vehicles, the edge data center can extract new and valuable information, such as identifying patterns of traffic congestion or detecting potholes and debris in the road. This type of collaborative processing improves safety, enhances traffic coordination and lowers costs.

To reach their full potential, autonomous vehicles must operate at the same or a greater degree of safety than a human driver. By drawing on this collaborative data analysis in real time, a self-driving car can detect and respond to dangerous situations it hasn’t encountered before or cannot sense with its on-board systems, providing a high level of safety.

Collaborative processing in edge data centers will also enable real-time low-flying air traffic control, which is required to allow large numbers of pilot-less aircraft to operate within a space such as a city. Each of these aircraft will take off on a path defined by a predetermined mission, but must be able to react to urban micro-weather, and to many other potential hazards such as conflicting aircraft or debris. The edge data center, to which the aircraft is connected using a high-speed wireless network, is the ideal place to perform the complex real-time processing required to make this a reality.

Battery Charging Stations

Many autonomous vehicles will be powered by electricity. Whether they are electric cars, buses or small drones, easy, local access to an electric charging station is crucial. Not only must sufficient numbers of charging stations be available, they must also coordinate with the autonomous vehicles themselves. Consider, for example, that a self-driving car must also be self-refueling. When faced with the choice of which charging station to use, it must be able to know which stations are available, whether there is spare electrical capacity, whether a station is reserved, the cost of the fuel, and other data.
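
To illustrate the kind of decision data involved, here is a small sketch in which a vehicle ranks nearby stations. The fields and the scoring rule are invented for the example; a real fleet would also weigh range margin, queue predictions and grid signals.

```python
from dataclasses import dataclass

@dataclass
class ChargingStation:
    station_id: str
    available: bool       # is a charger free right now?
    reserved: bool        # held for another vehicle?
    spare_kw: float       # unreserved electrical capacity
    price_per_kwh: float  # current tariff
    distance_km: float

def pick_station(stations: list[ChargingStation]) -> ChargingStation:
    """Choose the cheapest-to-reach usable station (deliberately simplistic)."""
    usable = [s for s in stations
              if s.available and not s.reserved and s.spare_kw > 0]
    if not usable:
        raise RuntimeError("no usable charging station in range")
    # Trade tariff against detour distance with an arbitrary 0.1 weight.
    return min(usable, key=lambda s: s.price_per_kwh + 0.1 * s.distance_km)
```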

Autonomous aircraft will also need a sophisticated relationship with the electric charging network. Robotic drones will be programmed to perform multiple missions in a day without returning to home base, which means they must be able to offload the vast amounts of data they collect to an edge data center in real time. Even so, they must also periodically pause to recharge their batteries—even more frequently than cars. Therefore, the autonomous use of drone charging stations will be the most effective way to perform complex, long-duration missions of many kinds.

Don’t Forget the Infrastructure

It’s easy to forget about the supporting infrastructure when so much attention is cast onto the vehicles themselves, and on all of the new ways they stand to enhance our businesses and our personal lives. However, autonomous vehicles will not operate properly at scale without contemporaneous infrastructure upgrades. Even a modern city will need significant, strategic improvements to support thousands of autonomous vehicles. The infrastructure upgrades will touch many critical systems, from cellular network connectivity and local data center capacity to networks of electric vehicle charging stations, local weather stations and low-flying air traffic control systems.

Autonomous vehicles rely on the same infrastructure as human-operated vehicles do, but to compensate for their lack of direct human control, they must also be supported by additional layers of infrastructure. Primary among these are high-performance cellular network connectivity and edge data centers, allowing an autonomous vehicle to communicate instantly with systems that can process data, collaboratively, across an entire region. Electric charging stations which require no human interaction are also essential. With these three elements, the infrastructure of a modern city can be enhanced to support autonomous vehicles.

Matt Trifiro is the CMO of Vapor IO and Co-Chair of State of the Edge. Most recently, he led the announcement of Vapor IO’s series C financing and the company’s commercial Kinetic Edge™ offering. Alex Marcham is a networking expert, one of the primary authors of the State of the Edge report and a principal contributor to the Open Glossary of Edge Computing. He maintains the Network Architecture 2020 website and just published his first book.

Opinions expressed in this article do not necessarily reflect the opinions of any persons or entities other than the authors.

Infrastructure Edge: Beachfront Property for the Mobile Economy

Mobile operators and other owners of wireless infrastructure command the most valuable “beachfront property” in the race to develop a global edge cloud. Joseph Noronha of Detecon takes us on a journey along the coastline to explore this idea.

Editor’s Note: This is a guest post from an industry expert. The State of the Edge blog welcomes diverse opinions from industry practitioners, analysts, and researchers, highlighting thought leadership in all areas of edge computing and adjacent technologies. If you’d like to propose an article, please see our Submission Guidelines.

When I began looking into edge computing just over 24 months ago, weeks would go by with hardly a whimper on the topic, apart from sporadic briefs about local on-premises deployments. Back then, there was no State of the Edge Report and certainly no Open Glossary of Edge Computing. Today, an hour barely passes before my RSS feed buzzes with the “next big announcement” around edge. Edge computing has clearly arrived. When Gartner releases their 2018 Gartner Hype Cycle later this month, I expect edge computing to be at the steepest point in the hype cycle.

Coming from a mobile operator heritage, I have developed a unique perspective on edge computing, and I would like to double-click on one particular aspect of this phenomenon: the infrastructure edge and its implications for the broader ecosystem.

The Centralized Data Center and the Wireless Edge

So many of today’s discussions about edge computing ascribe magical qualities to the cloud, suggesting that it’s amorphous, ubiquitous and everywhere. But this is a misconception. Ninety percent of what we think of as cloud is concentrated in a small handful of centralized data centers, often thousands of miles and dozens of network hops away. When experts talk about connecting edge devices to the cloud, it’s common to oversimplify and emphasize the two endpoints: the device edge and the centralized data center, skipping over the critical infrastructure that connects these two extremes—namely, the cell towers, RF radios, routers, interconnection points, network hops, fiber backbones, and other critical communications systems that liaise between edge devices and the central cloud.

In the wireless world, this is not a single point; rather, it is distributed among the cell towers, DAS hubs, central offices and fiber routes that make up the infrastructure side of the last mile. This is the wireless edge, with assets currently owned and/or operated by network operators and, in some cases, tower companies.

The Edge Computing Land Grab

The wireless edge will play a profound and essential role in connecting devices to the cloud. Let me use an analogy of a coastline to illustrate my point.

Imagine a coastline stretching from the ocean to the hills. The intertidal zone, where the waves lap upon the shore, is like the device edge: full of exciting activity and a robust ecosystem, but too ephemeral and subject to change for building a permanent structure. Many large players, including Microsoft, Google, Amazon, and Apple, are vying to win this prized spot closest to the water’s edge (and the end user) with on-premises gateways and devices. This is the domain of AWS Greengrass and Microsoft IoT Edge. It’s also the battleground for consumers, with products like Alexa, Android, and iOS devices. In this area of the beach, the battle is primarily between the internet giants.

On the other side of the coastline, opposite the water, you have the ridgeline and cliffs, from which you have an eagle’s-eye view of the entire surroundings. This “inland” side of the coastline is the domain of regional data centers, such as those owned by Equinix and Digital Realty. These data centers provide an important aggregation point for connecting back to the centralized cloud and, in fact, most of the major cloud providers have equipment in these colocation facilities.

And in the middle — yes, on the beach itself — lies the infrastructure edge, possibly the ideal location for a beachfront property. This space is ripe for development. It has never been extensively monetized, yet one would be foolhardy to believe that it has no value.

In the past, the wireless operators who tend this premier beachfront space haven’t been successful in building platforms that developers want to use. Developers have always desired global reach along with a unified, developer-friendly experience, both of which are offered by the large cloud providers. Operators, in contrast, have largely failed on both fronts—they are primarily national, maybe regional, but not global, and their area of expertise is in complex architectures rather than ease of use.

This does not imply that operators are sitting idle here. On the contrary, every major wireless operator is actively re-engineering its network to roll out Network Function Virtualization (NFV) and Software Defined Networking (SDN) along the path to 5G. These software-driven network enhancements will demand large amounts of compute capacity at the edge, which will often mean micro data centers at the base of cell towers or in local antenna hubs. However, these are primarily inward-looking use cases, driven more from a cost-optimization standpoint than a revenue-generating one. In our beach example, it is more akin to building a hotel’s call center on the beachfront than opening the property up to guests. It may satisfy internal needs, but it does not generate top-line growth.

Developing the Beachfront

Operators are not oblivious to the opportunities that may emerge from integrating edge computing into their networks; however, there is a great lack of clarity about how to go about doing this. While powerful standards are emerging from the telco world, one of the most notable being Multi-access Edge Computing (MEC), which provides API access to the RAN, there is still no obvious mechanism for stitching these together into a global platform that offers a developer-centric user experience.

All is not lost for the operator; there are a few firms, such as Vapor IO and MobiledgeX, with close ties to the infrastructure and operator communities that are tackling the problems of deploying shared compute infrastructure and building a global platform for developers, respectively. Success is predicated on operators joining forces, rather than going it alone or adopting divergent and incompatible approaches.

In the end, just as a developed shoreline caters to the needs of visitors and vacationers, every part of the edge ecosystem will rightly focus on attracting today’s developer with tools and amenities that provide universal reach and ease of use. Operators have a lot to lose by not making the right bets on programmable infrastructure at the edge that developers clamor to use. Hesitate, and they may very well find themselves eroded and sidelined by other players, including the major cloud providers, in what is looking to be one of the more exciting evolutions to come out of the cloud and edge computing space.

Joseph Noronha is a Director at Detecon Inc., Deutsche Telekom’s management consulting division, where he leads its practice in Emerging Technologies and Infrastructure. He has extensive “on the ground” experience with infrastructure players around the world, spanning the Americas, Europe, the Middle East, Africa and Asia. His interests lie in next-generation connectivity, IoT, XaaS and, more recently, edge computing – from product conceptualization and ecosystem building to driving and managing the commercial deployment of these services.

Opinions expressed in this article do not necessarily reflect the opinions of any person or entity other than the author.

The Inevitable Obviousness of the Wireless Edge Cloud

Peter Christy, a former 451 analyst, asks the question: Is a wireless edge cloud a bold new wave of computing, or just the obvious?

Editor’s Note: This is a guest post from an industry expert. The State of the Edge blog welcomes diverse opinions from industry practitioners, analysts, and researchers, highlighting thought leadership in all areas of edge computing and adjacent technologies. If you’d like to propose an article, please see our Submission Guidelines.

Thirty-five years ago, if a science-fiction writer extrapolated the future from an IBM PC connected to a timesharing system by a 1200 baud modem, he or she might have envisioned today’s wireless Internet. Armed with a rudimentary understanding of Moore’s “Law” (2X improvements every two years), it wouldn’t be that far-fetched to envision a powerful, wireless, battery-powered handheld computer intimately attached to a rich set of remote (cloud) resources. It’s even less astounding that we live in that world today, considering technology has improved by a factor of 2^27 (roughly 100 million times) in that time period. For an imaginative free spirit, today’s smartphone would have been pretty obvious. It’s only complex when you know the history.

A Convoluted Path to the Internet

The precursor to the modern Internet was born in 1969 as the ARPANET, a Department of Defense Computer Science research project that connected three research centers in California and one in Utah via 56Kb “backbone” links. The engineers who designed that network were solving for a military problem—building a telephony network that could survive a nuclear war—but they ended up creating the understructure for today’s internet, though it would take another 20 years. Fiber optic communication took a decade to arrive. The IBM PC, the ancestor of the smartphone, didn’t show up until 1981, and the Mac in 1984. The World Wide Web didn’t come until 1990, over two decades after those four research centers were connected. The iPhone — the personal computer that we always wanted — wasn’t announced until nearly 40 years after the ARPANET.

And then came the wireless internet, for which the iPhone was the turning point. Demand created by iPhone users drove the buildout of the 4G/LTE network, and that only in the last decade, 45 years after the Internet. This is the convoluted and time-consuming history that gave us today’s ubiquitous, high-bandwidth, cost-effective wireless Internet.

The path to today’s internet might be labyrinthine, but wasn’t it obvious this is what we wanted all along?

The Emergence of Cloud Computing

Timesharing systems—multi-user systems that provided computing services without each user having to buy and operate a computer—first showed up in 1964 (the Dartmouth Time Sharing System), five years before the Internet. The time sharing computers occupied entire rooms. They were expensive, cumbersome, and few and far between, so many users shared them from remote locations, connecting with “dumb” terminals over voice communication links with modems.

As computers got more powerful and cheaper, we momentarily stopped sharing them as we all got our own “personal” computer that sat on our desk. Many of the early PCs (the Apple II, IBM PC) were often not even connected to a network—files and data were shared on floppy disks. Even when there was a “network,” it was typically to support file sharing. Sophisticated businesses would implement centralized storage on a “server,” and applications that were shared or needed bigger systems started appearing on those servers as well.

These servers, as they were called, became increasingly difficult to operate, and even small businesses had to start hiring IT experts to maintain even the simplest of systems. As computers got cheaper and cheaper, the complexity and cost of running them grew, and the desirability of using a managed computing service increased. As companies became comfortable “outsourcing” their servers to third parties, the door to the cloud was opened.

VMware introduced robust virtualization in 2001, letting disparate software workloads share the same hardware and making these new centralized servers look a lot like the old time sharing systems, only running modern applications. Virtualization became the definitive way to share common infrastructure while maintaining security between clients, which paved the way for the massive shift from on-premises servers to what we now call cloud computing.

The seminal event (the “iPhone” of cloud computing) was Amazon’s unveiling of Amazon Web Services (AWS) in 2006. AWS offered virtual machines as an on-demand, pay-as-you-go service. All of a sudden, the distinction between timesharing and having your own server essentially disappeared. Anything you could do on your own server you could do on AWS, without buying and operating the computer.

The Obvious Arrival of the Wireless Edge

Today, the number of mobile devices exceeds the population of the planet. With the advent of 5G mobile services and the accelerating demand for low-latency clouds, we’re seeing a next-generation wireless edge cloud emerge.

Operational automation became the final missing link required to make edge cloud computing possible. The hyperscale cloud providers all realized they had to reduce the human element in their operations. First, the only way to run massively-scaled systems at high availability is to eliminate, or at least mitigate, the possibility of human error. Second, humans managing hundreds of thousands of servers in a data center would not only be unwieldy, but slow and error-prone. The major cloud service providers all adopted what could be called “NoOps” strategies (in contrast to DevOps; Google’s Site Reliability Engineering offers a documented example). Edge cloud computing, comprised of thousands of small data centers housing resources in unmanned locations, requires automated deployment and operation, which will evolve naturally out of the large-scale automation already developed.

The goal of edge computing is to maximize the performance of “cloud” (on-demand, managed services) by locating key resources so they can be accessed via the Internet with minimum latency and jitter and maximum bandwidth. In other words, to provide services that are as close as possible to what you could do with a local server, without having to buy or operate that server. As was the case with the wireless Internet, it took a lot of hard work and serial invention to get to where we are today with edge cloud computing. But as with networking, the answer is obvious — it’s what you would want and expect if you didn’t know how hard it was to create.

As cloud providers and companies like MobiledgeX provide managed services for placing workloads out at the edge of the wireless network, a wireless edge cloud becomes the natural outcome. The wireless edge cloud will bring all the conveniences of cloud computing to the edge of the network, enabling the next-generation of wireless applications, including mobile AR/VR, autonomous vehicles and large-scale IoT.

Obvious, right?

Peter Christy is an independent industry analyst and marketing consultant. Peter was a Research Director at 451 Research, where he ran the networking service, and before that a founder and partner at Internet Research Group. Peter was one of the first analysts to cover content delivery networks when they emerged, and he has tracked and covered network and application acceleration technology and services since. Recently he has worked with MobiledgeX, a Deutsche Telekom-funded, Silicon Valley-based startup that is building an edge platform. His first post on the State of the Edge blog was Edge Platforms.

Opinions expressed in this article do not necessarily reflect the opinions of any person or entity other than the author.

Are we Smart Enough to Build the Intelligent Edge?

Artificial intelligence (AI) has been advancing at phenomenal speeds on many fronts, but it’s also beset by challenges in a few key areas. Can edge computing help?

Ed Nelson, director of the AI Hardware Summit and Edge AI Summit, helps us understand a few of the challenges facing artificial intelligence and then explains how edge computing can help resolve them.

Editor’s Note: This is a guest post from an industry expert. The State of the Edge blog welcomes diverse opinions from industry practitioners, analysts, and researchers, highlighting thought leadership in all areas of edge computing and adjacent technologies. If you’d like to propose an article, please see our Submission Guidelines.

AI has Outgrown the Traditional Cloud Paradigm

In May, OpenAI reported that the amount of compute used in the largest deep learning training runs is doubling roughly every 3.5 months (a rate that compounds to roughly a tenfold increase per year). As the semiconductor industry wrangles with the challenges of moving to 7 nanometer nodes, and Moore’s Law looks to be all but at its logical and physical conclusion, the requirements for computational power continue to increase exponentially. In the main, the development of computationally-intensive and sophisticated algorithms has been outpacing the advancements of hardware—and this trend shows no signs of abating.

In the data center, where the majority of computational resources are located, and where all but a few machine learning models are trained, AI’s insatiable thirst for computation brings its own unique set of logistical challenges. How to maximize throughput while minimizing power consumption, how to disperse heat, how to transfer data between processor and memory, and how to reduce latency, to name a few. In the last 24 months, many of the largest global cloud providers have announced custom-built processing units for AI training and inference, as the semiconductor industry gets a much-needed injection of innovation and growth from this burgeoning market. On top of this, at least 45 new AI chip startups have appeared across the globe in an effort to be the first to reap the rewards of increasing demands for AI.

Issues of data transfer further compound the challenges of inadequate hardware. Use cases for machine learning are emerging that cannot afford to be constrained by power consumption, bandwidth, latency, connectivity and security issues. The self-driving car is a use case that reflects all of these concerns. An autonomous vehicle needs to make life-or-death decisions in a time-critical manner, regardless of whether cloud connectivity exists, and the data contained within the car needs to be totally secure. Additionally, inferencing done on board the car needs to be carried out at extremely low power so that the majority of the car’s energy can be used for its primary purpose: getting its passengers from A to B. Currently, with computational resources located primarily in centrally-located cloud data centers, and latency and connectivity issues far from resolved, the conditions simply do not exist wherein an (almost) totally reliable, safe and energy-efficient autonomous vehicle can operate.

The Edge can Help Resolve the Challenge

All is not lost. The world of edge computing offers compelling solutions to these challenges. Chip companies, cloud providers, edge infrastructure companies, IoT-invested enterprises and AI solutions developers are increasingly focused on delivering the “Intelligent Edge”, a computing paradigm that will unleash a wave of new markets and business models and make AI truly ubiquitous.

The Intelligent Edge is not a particularly revolutionary term, and it is defined in a variety of ways by a variety of people and institutions. In light of the recent State of the Edge report, I will posit my own summary of what this ecosystem may look like.

The Intelligent Edge will take the form of a new decentralized internet paradigm, wherein computational resources, and thus AI workloads, are distributed more evenly between the centralized cloud and the edge of the network. Machine learning-enabled devices at the device edge will handle low-level AI tasks, supported by micro data centers and edge computing nodes positioned at the infrastructure edge, which will be geographically and logically close to devices and capable of handling much larger data sets and much more complex workloads. Thus, complex AI workloads that cannot be processed on the device will be handed off to cloud resources at the edge. Workloads of a less urgent nature, or of larger scope, will be split between the cloud resources at the infrastructure edge and the much larger resources of the centralized cloud.

By deploying micro data centers and edge computing nodes with the latest AI processing capabilities, we shorten the distance between the collection of data and its cloud processing. Rather than devices shunting all of their data to the centralized cloud for processing, and the cloud shunting it back (at great cost in time, money and security), the Intelligent Edge will allow certain time-critical and security-sensitive AI applications to operate either entirely on a device or in conjunction with localized data centers, vastly reducing latency, bandwidth requirements, power consumption and cost, while improving security and privacy.

How might that work?

In the case of our autonomous car, the Intelligent Edge would enable a single vehicle to identify a pothole in a road in Boston, for example, and take the following actions (sketched in code after the list):

1. On-Device (Device Edge): Identify the pothole via AI inference and make the necessary adjustments to avoid it.

2. Local Micro Data Center (Infrastructure Edge, Edge Cloud): Communicate to a local data center the location of the pothole, its specifications and the time it was spotted, so that cars in the Boston area may be alerted to its presence. If any more complex decision-making needs to be done and communicated to the car in question, or to several in the region, it can be done here.

3. Centralized Cloud: Communicate metadata to the Cloud that may be stored in a database of national significance or utilized in future training scenarios. Any decision-making that might need to be done that includes huge numbers of parameters (our pothole being one) and affects thousands of cars nationwide could be done here.
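
Here is a schematic sketch of that three-tier split in code. The hooks into the vehicle, the edge broker and the cloud queue are placeholders, and the payloads are illustrative.

```python
import json

# Placeholder hooks; a real stack would wire these to the vehicle's control
# loop, an edge message broker and a cloud upload queue, respectively.
def adjust_trajectory(position: dict) -> None:
    print(f"steering around {position}")

def publish(topic: str, payload: str) -> None:
    print(f"publish {topic}: {payload}")

def enqueue_for_cloud(record: dict) -> None:
    print(f"queued for cloud: {record}")

def handle_pothole(detection: dict) -> None:
    """Route one detection through the three tiers described above."""
    # 1. Device edge: steer around the hazard immediately; this path
    #    cannot afford any network round-trip.
    adjust_trajectory(detection["position"])

    # 2. Infrastructure edge: share location, size and time so nearby cars
    #    (and regional decision logic) learn of it within milliseconds.
    publish("edge/boston/hazards", json.dumps({
        "type": "pothole",
        "location": detection["position"],
        "size_cm": detection["size_cm"],
        "seen_at": detection["seen_at"],
    }))

    # 3. Centralized cloud: keep only metadata for national databases and
    #    future training runs; nothing on this path is latency-sensitive.
    enqueue_for_cloud({"type": "pothole", "location": detection["position"]})

handle_pothole({"position": {"lat": 42.36, "lon": -71.06},
                "size_cm": 30, "seen_at": "2018-08-01T09:15:00Z"})
```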

Let’s build the Intelligent Edge!

Many people are already building the Intelligent Edge. Advancements at the device edge predominantly focus on hardware and ever-smaller intelligent processing units that run at very low power. Software developers are migrating existing AI workloads towards the edge, and developing edge-native AI applications optimized for edge environments.

At the infrastructure edge, several companies are rolling out massive international micro data center deployments, many working with the telcos to distribute edge computing nodes throughout the world. As telcos upgrade their networks to support 5G, they will also be in a position to deliver the infrastructure needed to realize the Intelligent Edge. Between the edge devices, the infrastructure edge and the centralized cloud, there is a vibrant ecosystem of companies focused on connecting, securing and optimizing the edge.

Bringing forth this new internet infrastructure will require equal attention to both the device edge and the infrastructure edge, which is a significant and difficult task. But a number of companies have taken up this task and have identified the benefits of building the edge and making it intelligent. The pursuit of this goal represents nothing less than a wave of opportunity: firstly, to reduce the pressure on today’s cloud and move away from centralized computing; secondly, to solve many of the issues that we face in artificial intelligence R&D; and finally, to unleash a new generation of services, applications and business models enabled by AI at the edge of the network.

Ed Nelson is a conference producer at Kisaco Research, a London-based commercial events company that executes industry-leading conferences in the technology, pharmaceutical and consumer lifestyle industries, among others. He has held several roles spanning from software design analysis to reservist military service. Ed heads up KR’s technology portfolio, which has historically covered Robotic Process Automation & Digital Transformation, and now includes AI Hardware and Edge AI. He holds a degree in History from Newcastle University, and a postgraduate degree from the University of Leeds.

Please consider attending one or more of Ed’s upcoming conferences, including the AI Hardware Summit, September 18-19, 2018, Computer History Museum, Mountain View, CA and the Edge AI Summit, December 11, 2018, San Francisco, CA.

Opinions expressed in this article do not necessarily reflect the opinions of any person or entity other than the author.

Edge: The End of Cloud as we Know It?

Many have predicted that edge computing will completely replace cloud. Is this even possible? Reasonable? Let’s find out.

Peter Levine, a well-regarded thought leader and partner at Andreessen Horowitz, gave a presentation in 2017 titled The End of Cloud Computing.

In this talk, Peter puts forth the observation that we’re flooding our world with intelligent devices, from the latest generation smartphones to ubiquitous sensors and autonomous cars. He argues that in order to support these devices, more and more of our workloads will need to be run at the edge. By shifting the bulk of compute to the edge, he says, we destroy what we mean today by cloud computing.

Peter’s exaggerations make for great headlines, but they also muddle the serious conversation. Pitting cloud against edge is a false comparison. The cloud will certainly change — on that point Peter is spot on — but it won’t be cloud versus edge. It will be cloud and edge.

The cloud as we know it today will expand to the edge of the network. The large centralized cloud data centers, such as those owned by Microsoft and Google, will be augmented with thousands of micro data centers at the edge of the last mile network. These micro data centers will be placed as close as possible to the devices and people they are serving, such as at the base of cell towers and on the roofs of buildings.

By embracing these micro data centers and treating them as highly-distributed regions and availability zones, a new paradigm of cloud will come about. Increasingly, applications won’t just be “cloud native,” they will be “edge-native.” Edge computing won’t destroy the cloud; edge and cloud will merge.

The Reports of My Death Have Been Greatly Exaggerated

Edge computing, like any emerging technology, needs commercially-viable use cases to justify its capital outlay. As the dust settles in accounting, many of the most celebrated applications of edge computing, including autonomous vehicles and augmented reality, seem futuristic and potentially far-fetched. Whether or not you believe these technologies to be imminent or fanciful, you have to ask a more basic question: Are there practical applications of edge computing today that will drive the initial capital expenditures?

Indeed, the answer is: Yes.

There are powerful near-term uses of edge computing that are, by themselves, capable of fueling the rollout of edge infrastructure. These are edge-enhanced applications such as:

Content Delivery Networks (CDNs): 20 years ago, the founders of Akamai invented what can be seen as an early form of edge computing. They realized, back then, that the speed of light is too slow and multiple hops across the network are too unreliable. By placing content and caching servers out in the field, near the edge, they could greatly reduce network congestion and significantly speed up the delivery of web content, including streaming media like Netflix. Today, the major CDNs operate over a million edge servers worldwide and are keeping pace with delivery demands by adding more servers and extending their reach to the furthest edge locations.

Telco Network Function Virtualization (NFV): As the major cellular operators rush to bring 5th Generation (5G) cellular technology to market, they will be moving their network functions (almost all of the network’s capabilities) off of proprietary hardware and into virtualized software functions running on white box servers. These servers will need to be housed in micro data centers at the edge, near the towers and baseband units, as these network functions can only tolerate a few milliseconds of latency.

Internet of Things (IoT): The IoT is real. Organizations worldwide are deploying billions of sensors into the field—into factories, onto cars, at intersections, on top of buildings, and just about everywhere else you might imagine. Each of these sensors generates data, which in aggregate will soon approach exabytes each day. To avoid the significant cost of shipping all of that data back to a central location, and to provide real-time analysis and responsiveness, many organizations will employ edge computing. Data analyzed at the edge can be responded to in milliseconds, data can be stored locally if it will be used locally, and algorithms can sift through the mountains of local data to extract the most important bits to ship to a centralized storage warehouse or back-end processes, as sketched after this list.
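
To make the IoT pattern above concrete, here is a minimal sketch, in Python, of the filter-at-the-edge loop just described: readings are handled locally in milliseconds, stored locally, and only anomalies plus a compact summary are shipped upstream. All names, thresholds, and the anomaly rule are hypothetical illustrations, not any vendor’s actual pipeline.

```python
import statistics
from collections import deque

WINDOW = 1000        # keep only the most recent readings at the edge
Z_THRESHOLD = 3.0    # ship readings more than 3 sigma from the local mean

recent = deque(maxlen=WINDOW)  # local store: data used locally stays local
anomalies = []                 # the "most important bits" for the core cloud

def handle_reading(value: float) -> None:
    """Process one sensor reading entirely at the edge."""
    if len(recent) >= 2:
        mean = statistics.fmean(recent)
        spread = statistics.stdev(recent) or 1.0  # avoid divide-by-zero
        if abs(value - mean) / spread > Z_THRESHOLD:
            actuate_locally(value)   # millisecond response, no round trip
            anomalies.append(value)  # flag for central storage
    recent.append(value)

def actuate_locally(value: float) -> None:
    # Stand-in for a local, low-latency response (close a valve, raise an alarm).
    print(f"local response to anomalous reading: {value}")

def flush_upstream() -> dict:
    """Periodically ship a compact summary instead of the raw firehose."""
    summary = {
        "count": len(recent),
        "mean": statistics.fmean(recent) if recent else None,
        "anomalies": anomalies[:],
    }
    anomalies.clear()
    return summary  # hand this to whatever transport reaches the core cloud
```

The point of the sketch is the ratio: thousands of readings stay at the edge, while only a handful of anomalies and one summary dictionary cross the wide-area network.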

CDN, NFV, and IoT represent billions of dollars of value to the economy and are, by themselves, creating the economic incentives for rapid deployments of edge data centers in every major metropolitan region. As this edge infrastructure comes online, the large cloud providers will widen their service portfolios to reach from the core cloud all the way to the edge. Soon you will be able to purchase edge VMs, run Kubernetes clusters at the edge, and employ AI toolchains that include edge processing—all from the dashboard of your favorite cloud provider.

Something Wicked This Way Comes

Edge computing will unleash a tsunami of applications as tools improve and offerings become more sophisticated and less costly. At the outset, edge computing may often be more expensive than centralized cloud computing for a specific job, as the major cloud providers begin offering it in the form of premium-priced products.

As early use cases offset the cost of deployment and competition increases, prices will fall. As with most new technologies, increased use will drive edge computing down the cost curve such that, very soon, the marginal cost of retrofitting an existing application to embrace edge computing will become minimal. The world will quickly flood with edge-enhanced applications, and we will begin to see the emergence of new and transformative edge-native applications.

The proliferation of tools, best practices, and general availability for cloud-based edge computing will unleash waves of developer innovation, extending all the way from gaming and other entertainment experiences to mission- or life-critical applications such as remote surgery.

Matt Trifiro is CMO of Vapor IO and Co-Chair of the State of the Edge Project. He is also the Co-Creator and a principal contributor to the Open Glossary of Edge Computing. Please follow him on Twitter at @mtrifiro.

Opinions expressed in this article do not necessarily reflect the opinions of any person or entity other than the author.

The State of the Edge blog welcomes diverse opinions from industry practitioners, analysts, and researchers, highlighting thought leadership in all areas of edge computing and adjacent technologies. If you’d like to propose an article, please see our Submission Guidelines.


Edge Platforms


Peter Christy, former analyst at 451 Research, helps us find the edge and understand the platforms that will make it more accessible.

Editor’s Note: This is a guest post from an industry expert. The State of the Edge blog welcomes diverse opinions from industry practitioners, analysts, and researchers, highlighting thought leadership in all areas of edge computing and adjacent technologies. If you’d like to propose an article, please see our Submission Guidelines.

As an early CDN analyst, I’ve studied “edge” computing for nearly twenty years. It’s not a new topic, but it has become more visible today with the emergence of IoT, machine learning, and other applications that benefit from services near the device. For those newer to the topic, an obvious question is: where exactly is the edge? I wish I could give a simple answer and just point at the location, but I can’t. It’s more complicated (and more interesting) than that. And it’s why I believe that platforms that deploy and manage code at the edge will play an important role.

Where Exactly Is the Edge? It Depends…

From the perspective of the user of an application (person or device), the edge of a network refers to the parts of the network “nearest” to you, as measured by access performance through the network. Whether being near the edge makes a difference for your application depends both on the network demands of that application and on the performance of the network. For most applications that run entirely within a data center, the internal network is fast enough, and everything in the data center is adequately “nearby” (of course, there are exceptions, such as high-performance trading and high-performance grid computing, where location within the data center matters). The Internet, however, is an entirely different matter, because Internet performance is far more problematic than LAN performance within a data center.
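
To illustrate “nearest as measured by access performance,” here is a minimal Python sketch that probes a few candidate service locations and treats the one with the lowest measured round-trip time as this client’s edge. The hostnames are hypothetical placeholders, and a TCP connect is used only as a rough stand-in for a real measurement.

```python
import socket
import time

# Hypothetical candidate locations: a core cloud region and two edge sites.
CANDIDATES = {
    "core-cloud.example.com": 443,
    "edge-metro.example.com": 443,
    "edge-tower.example.com": 443,
}

def rtt_ms(host: str, port: int, timeout: float = 1.0) -> float:
    """Time one TCP connect as a rough proxy for access performance."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000.0
    except OSError:
        return float("inf")  # unreachable: effectively infinitely far away

def nearest_site() -> str:
    """'Nearest' is defined by measured performance, not geography."""
    return min(CANDIDATES, key=lambda host: rtt_ms(host, CANDIDATES[host]))

if __name__ == "__main__":
    print(f"this client's edge is {nearest_site()}")
```

Note that two users of the same application can get different answers, which is exactly why there is no single place to point at and call “the edge.”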

There are some edge applications where the location is clear. If you want to do automatic braking for a car, the application has to run in the car. But what about other applications? Many cloud applications will benefit from services running closer to the edge, but in which exact location should they run? It isn’t obvious, because the choice is necessarily complicated. We can easily envision scenarios where we want to store data and perform computation nearer to a connected device than in one of today’s large cloud data centers (e.g., IoT, machine learning and AI, autonomous systems, and augmented reality systems), and hence where we want to be closer to the edge.

Picking a Location is Like Picking a Hotel in LA — or Even More Difficult

As a way to understand the complexity of picking an edge location for any given workload, consider the analogy of visiting Los Angeles on a trip that combines business and pleasure, and asking an LA friend where you should stay. Your friend couldn’t possibly give you a meaningful answer without asking a few more questions: How are you arriving? Where and when are your meetings? Do you have particular restaurants, museums or performances you want to include? Like the Internet, the Los Angeles area offers an amazingly rich set of resources, and when there isn’t any traffic, all are reasonably accessible. But also like the Internet, the traffic in LA is often anything but perfect, and at the worst times even short trips can take what seems like forever. Your friend also couldn’t give you a good answer without asking about prices and priorities: If you can’t do it all, what is most important? Is the cost of the hotel an issue (is your expense account unlimited, or are you on a government per diem)? Picking the right hotel in LA is interesting and complicated.

Now imagine picking the ideal location for your workload at the edge, in real time, under continuously changing conditions. It’s even harder than picking a hotel, but the issues are quite similar: How valuable is it to execute closer to the attached device (what is the tangible value)? What other applications or services do you need to connect to? How much are you willing to pay if edge computation is more expensive?
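
One way to picture that real-time trade-off is as a weighted score over candidate sites. The Python sketch below is purely illustrative: the sites, latencies, prices, and weights are invented, and the weights stand in for the answers to the three questions above.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    device_latency_ms: float  # measured latency to the attached device
    peer_latency_ms: float    # latency to other services we must reach
    price_per_hour: float     # edge capacity often carries a premium

# Illustrative weights: how valuable is proximity to the device, how
# important are peer services, and how price-sensitive is this workload?
W_DEVICE, W_PEERS, W_PRICE = 1.0, 0.5, 20.0

def score(site: Site) -> float:
    """Lower is better; recompute as conditions change."""
    return (W_DEVICE * site.device_latency_ms
            + W_PEERS * site.peer_latency_ms
            + W_PRICE * site.price_per_hour)

candidates = [
    Site("core-region", device_latency_ms=45.0, peer_latency_ms=2.0, price_per_hour=0.10),
    Site("metro-edge", device_latency_ms=12.0, peer_latency_ms=18.0, price_per_hour=0.25),
    Site("tower-edge", device_latency_ms=3.0, peer_latency_ms=30.0, price_per_hour=0.60),
]

best = min(candidates, key=score)
print(f"place the workload at {best.name}")
```

With these particular numbers the metro edge wins: the tower site is closest to the device but too far from peer services and too expensive, while the core region is cheap but too far from the device. Change the weights and the answer changes, which is precisely the hotel-in-LA problem.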

Edge Platforms are an Answer

A critical part of effective application development is focusing your effort where it counts the most, for example, where it provides the most business value or differentiation. Operating systems and cloud platforms are designed to handle all the other tasks, and it makes sense that edge platforms will be a key enabler for edge computing as well.


Watch Jason Hoffman, CEO of MobiledgeX, discuss developer-facing services for edge computing.

By and large, edge platforms will complement, and be used in conjunction with, other platforms (e.g., the existing cloud platforms). The edge will be exploited by moving specific application components onto an edge platform or by embedding edge services in an existing application. Some applications will run entirely on the edge as well.

In all cases, an edge platform will discover and manage available edge resources, provide services to deploy and manage customer code running at the edge, provide integration services with other platforms, and presumably provide new services based on new capabilities at the edge, such as integration with the cellular infrastructure. Technology costs have come down far more rapidly than programming costs, so platforms that simplify application development play a key role in ensuring we continue to benefit from cheaper technology by reducing application development costs. It would be very surprising if the same weren’t true for edge computing.
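
To make those responsibilities concrete, here is a minimal sketch of what such a platform’s surface area might look like, written as a Python protocol. Every name is hypothetical, since the article describes roles rather than any specific product’s API.

```python
from typing import Protocol

class EdgePlatform(Protocol):
    """Hypothetical shape of an edge platform, mirroring the four
    responsibilities described above. All names are illustrative."""

    def discover_sites(self) -> list[str]:
        """Discover and manage the edge resources currently available."""
        ...

    def deploy(self, image: str, site: str) -> str:
        """Deploy and manage customer code at a chosen edge site."""
        ...

    def link_cloud(self, provider: str, region: str) -> None:
        """Integrate with other platforms, e.g. an existing cloud."""
        ...

    def network_info(self, site: str) -> dict:
        """Expose new edge-specific capabilities, such as cellular
        network state at the site."""
        ...
```

An application developer would program against an interface like this and let the platform handle placement, lifecycle, and integration, which is exactly the division of labor that operating systems and cloud platforms established before it.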

Peter Christy is an independent industry analyst and marketing consultant. Peter was Research Director at 451 Research, where he earlier ran the networking service, and before that a founder and partner at Internet Research Group. Peter was one of the first analysts to cover content delivery networks when they emerged, and he has tracked and covered network and application acceleration technology and services since. Recently he has worked with MobiledgeX, a Deutsche Telekom-funded, Silicon Valley-based startup that is building an edge platform.

Opinions expressed in this article do not necessarily reflect the opinions of any person or entity other than the author.



Start Your Engines!


About a year ago, Matt Trifiro and I were enjoying a few beers at a brewery in the Mission District during one of my trips to the Bay Area from Vermont. We were talking all things Edge and having a fantastic time…and yet it became clear that we actually had pretty different ideas about what edge computing was, how it might work, and who the players were.

This was surprising, as we were both knee-deep in the edge computing space, and partners in many areas as well. If we weren’t on the same page, how would those who didn’t live and breathe it each day find context in all the hype? With our CMO hats on, we dreamed up the idea of a “State of the Edge” report. We walked out of the bar ready to put pen to paper, and then promptly got super busy.

Nine months later, at the urging of Yves Boudreau (Ericsson UDN), Jeff Chu (Arm), and Haseeb Budhani (Rafay Systems), we finally took the plunge: we hired Jim Davis, Phil Shih, and Alex Marcham and got to work. Our goal seemed simple, but it quickly became obvious that a typical white paper wasn’t going to cut it — we needed to add true value to the ecosystem. While Jim interviewed dozens of CTOs, CIOs, end users, and industry leaders, our working group expanded its scope to include an exhaustive Open Glossary of Edge Computing and the first version of an Edge Computing Landscape map.

Why Does it Matter?

There is no shortage of material on edge computing these days. In fact, “edge washing” is practically a mainstream marketing technique by now! What we hoped to accomplish with this project was inspired by watching the cloud native community over the last five years: bring together the players and technologies around a common understanding, agree upon the language, embrace input from the community, and create a movement.

It may seem audacious to compare the buzz and froth of edge computing to the tidal wave that cloud native has become. And yet, if you live in the world of infrastructure, you can feel the massive change coming. Technologies like 5G and the rise of new distributed computing architectures are but two of the many factors pushing for a reinvention. At the very least, there will be substantial change in how we deploy and manage infrastructure, but more exciting are all the untold use cases that will be created if we (the edge computing ecosystem) do our jobs right.

In other words, start your engines.