Geoffrey Moore’s landmark book Crossing the Chasm offers insight into how wireless operators are being challenged to make edge computing mainstream. Read on to understand the gap between what will satisfy innovators and early adopters and what is required to be adopted by the mainstream.
Editor’s Note: This is a guest post from an industry expert. The State of the Edge blog welcomes diverse opinions from industry practitioners, analysts, and researchers, highlighting thought leadership in all areas of edge computing and adjacent technologies. If you’d like to propose an article, please see our Submission Guidelines.
In 1991, Geoffrey Moore introduced the world to Crossing the Chasm, one of the most influential business books (and corresponding ideas) of that decade. In this book, Moore convincingly argues that all new technologies proceed through a predictable technology adoption life cycle, starting with innovators and early adopters and ultimately reaching early majority, late majority and laggards. Moore’s primary contribution, and the focus of his book, is the recognition that most new technologies hit a stall point as they transition from serving innovators and early adopters and seek to expand their solution to also serve the early majority.
Judging from the hype around edge computing, one might conclude that this is an exceptional technology, effortlessly leaping across the chasm and quickly becoming mainstream. Don’t be fooled: it takes more than hype for a technology to cross the chasm. If we’re not careful, we’ll overlook some of the key obstacles to wide-scale adoption of edge computing in the belief that they will somehow iron themselves out.
As pointed out in Infrastructure Edge: Beachfront Property for the Mobile Economy, wireless operators have a unique opportunity to leverage their proximity to the last mile network and profit from the explosion of edge services. However, operators also have a reputation for making lofty promises that are rarely delivered. No wonder that, apart from a few forward-leaning operators (AT&T and Deutsche Telekom come to mind), most are sitting on the precipice of the edge, uncertain of how best to proceed. The industry must face, head on, the key barriers keeping the edge from going mainstream, acknowledge the challenges ahead, and begin advocating for solutions. In particular, we see two essential problems which must be solved:
- Developers need a uniform infrastructure to deliver a seamless experience without a lot of bespoke coding and high-complexity operations.
- Infrastructure owners—and the entire edge computing industry—need to develop efficient unit economics to drive edge computing down the cost curve at scale.
The rest of this article will present these two barriers in detail, as well as offer some ideas for how they may be surmounted.
Infrastructure that can deliver a seamless experience
Today’s developers leverage cloud infrastructure by simply going to one of the major providers (Amazon, Google, Microsoft) and selecting a configuration; a few clicks later, they are ready to begin pushing code. The developer can be assured that the service will be available and familiar because, irrespective of the region, the major public cloud providers own and operate an extensive infrastructure that has been engineered for conformity. The developer simply needs to focus on developing their application and getting it to market, resting easy that wherever they have access to the provider of their choice, the application will just work!
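To make that concrete, here is a minimal sketch of the kind of provisioning flow we are describing, using the AWS SDK for Python (boto3) purely as an example; the image ID and instance type below are placeholders, not a recommendation. The point is that the same few calls behave identically in every region the provider serves.

```python
import boto3

# Same API, same behavior, regardless of which region is chosen.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a small VM. The AMI ID is a placeholder and the instance type
# is only an example -- a few lines like these are all it takes before
# the developer can start pushing code.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}; the application will run the same way in any region.")
```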
Now think about this in the context of the infrastructure edge, with thousands of micro data centers located at the base of cell towers and in wireless aggregation hubs. The most likely outcome is a vast, distributed compute infrastructure owned not by a single entity (e.g., Amazon or Microsoft) but by several smaller national or regional operators.
We see some promising initiatives, such as Akraino and ETSI MEC, that hope to present open source APIs that expedite the development of edge applications. But many of these initiatives are backed by their own vested interest groups, and there is a danger that the proliferation of such groups may result in the fragmentation of the ecosystem at a time when just the opposite is needed. This view is not isolated, with folks such as Axel Clauberg sounding similar warnings in recent months.
While these software-driven efforts show promise, they do not address the underlying structural challenges. For example, one operator may have a three-year-old, CPU-heavy edge infrastructure while another has a state-of-the-art GPU configuration. Even if we can abstract away the underlying software stack variations, how can a developer be sure of rendering the same experience to their end users on top of such heterogeneous computing assets on a global basis?
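To illustrate the burden this places on developers, here is a hypothetical sketch of the capability-detection branching an application is forced into when edge sites differ in hardware. The site descriptor, thresholds and serving strategies are all assumptions for illustration, not any operator’s real inventory.

```python
from dataclasses import dataclass


@dataclass
class EdgeSite:
    """Hypothetical descriptor of one operator's edge site."""
    name: str
    has_gpu: bool
    cpu_generation: int  # year the CPU platform was deployed (assumed field)


def plan_inference(site: EdgeSite) -> str:
    """Pick a serving strategy for heterogeneous edge hardware.

    The branching below is exactly the per-operator special-casing
    that a uniform edge infrastructure is supposed to remove.
    """
    if site.has_gpu:
        return "full model at fp16, real-time, served at the edge"
    if site.cpu_generation >= 2021:
        return "distilled CPU model, reduced quality, still at the edge"
    # Older CPU-heavy site: fall back to the central cloud and give up
    # the latency benefit the edge was supposed to provide.
    return "fall back to central cloud"


for site in [
    EdgeSite("operator-A-tower-12", has_gpu=True, cpu_generation=2023),
    EdgeSite("operator-B-hub-07", has_gpu=False, cpu_generation=2016),
]:
    print(f"{site.name}: {plan_inference(site)}")
```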
Solving for a Seamless Infrastructure
Delivering a seamless infrastructure is not something that can be easily solved by operators alone. Most multi-operator initiatives have fizzled out (remember Joyn/RCS?) or have been too slow to be effective in a fast-evolving environment. Solving for seamless infrastructure may require thinking outside the “operator box,” contemplating new business practices, partnerships and models. Here are two ideas:
Engage with the existing cloud providers
Partnering with the large cloud providers may not be appetizing for many, given that operators have long obsessed over owning their control points—but partnering with the web giants is indeed a viable option, especially for the smaller players. Engaging with cloud providers could be direct (e.g., deploying your own data centers and standing up an Azure Stack-type solution in partnership with Microsoft) or via third-party firms such as Vapor IO, which is deploying carrier-neutral data centers that will host equipment from all the major cloud providers. There is money to be made in partnership with cloud providers, though one does give up some level of control.
Engage via a neutral entity
An increasingly viable option is for a neutral entity, one that understands developer concerns and can drive a uniform approach, to step in and lead this discussion. A variety of players could fulfill this need. A good example is the operator-founded MobiledgeX, which aims to provide a prescriptive design along with a vendor ecosystem that can deliver solutions based on the types of end applications the operator is open to supporting. Yet another option is to align with players such as Intel and Nvidia, or with large system integrators, as these are all companies that can drive reference designs and implementations.
Driving efficient unit economics
While it is one thing to be able to offer the infrastructure edge, it is another thing to offer it at a compelling price. Looking at current use cases, we see a few which are critically dependent upon the edge for functionality—these applications simply will not function without edge infrastructure. However, a large number of use cases can benefit from edge infrastructure but are not dependent upon it. For the former, the sky’s the limit in terms of pricing—the application simply will not work without edge deployments. For the remaining use cases, it comes down to whether it makes economic sense to enhance the experience with edge infrastructure.
The rapid pace at which compute and storage components improve puts a great deal of pressure on infrastructure owners to continuously upgrade their equipment, further complicating the delivery of low cost unit economics. For example, the performance of Nvidia GPUs has nearly doubled every year since 2015.
Application developers quickly find uses for the increased horsepower. The cloud providers are well aware of this and wield significant technical and financial muscle to ensure that they have the right infrastructure available to support this trend. This is relatively virgin territory for operators, who have experience building out and maintaining infrastructure over a 5-7 year depreciation period (15-20 years for civil infrastructure) – not infrastructure that potentially needs replacing every 2 to 3 years.
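A rough, back-of-the-envelope sketch makes the point. Every figure below is an illustrative assumption rather than industry data, but it shows how the refresh cycle alone reshapes the cost baked into every compute hour sold.

```python
# Back-of-the-envelope view of why refresh cycles matter for unit economics.
# All numbers are illustrative assumptions, not industry benchmarks.

CAPEX_PER_SERVER = 20_000              # assumed cost of one GPU-class edge server, USD
UTILIZED_HOURS_PER_YEAR = 8760 * 0.30  # assume 30% average utilization of the site


def cost_per_utilized_hour(capex: float, refresh_years: float) -> float:
    """Annualize hardware capex over its refresh cycle (straight-line),
    then spread it across the hours actually sold."""
    annual_capex = capex / refresh_years
    return annual_capex / UTILIZED_HOURS_PER_YEAR


for refresh_years in (7, 3, 2):
    cost = cost_per_utilized_hour(CAPEX_PER_SERVER, refresh_years)
    print(f"{refresh_years}-year refresh: ~${cost:.2f} of hardware cost per utilized hour")

# Under these assumptions, moving from a 7-year to a 2-3 year refresh cycle
# roughly doubles to more than triples the hardware cost in every hour sold,
# before power, space, backhaul, or operations are even counted.
```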
Another area where cloud providers have a leg up on operators is in the operation of this infrastructure. Cloud providers have developed deep expertise in designing highly automated, zero-touch systems. All of these factors combine to allow cloud providers to offer computing power at scale and with compelling unit economics. Operators, in contrast, have no track record of being cost-effective cloud providers and depend a great deal upon vendors (many of them with legacy telecom mindsets themselves). You can see some of these challenges as operators struggle to deploy their own internal clouds to support NFV and SDN.
Put two and two together and you may end up with an operator who offers outdated infrastructure at a premium price… You get the picture.
Solving for Efficient Unit Economics
There is unfortunately no easy shortcut to unit cost efficiency. Operators need to take a page from the cloud provider playbook to accelerate the deployment of edge infrastructure, including adding experts from the cloud world to manage their infrastructure. An alternative is to instead partner with existing cloud providers, adopting risk-sharing business practices and new business models (e.g., revenue share) to align incentives among all parties. Furthermore, operators should consider subsidizing costs at the outset rather than demanding large premium profits from day one. This will allow developers to experiment with edge computing at price points comparable to existing public cloud services.
Conclusion
Unless we can, individually or collectively, solve for the infrastructure and economic challenges presented above, edge computing may have a difficult time crossing the chasm—or may fall into it!
We do need to convince the developer community of the myriad benefits that the infrastructure edge has to offer. While there are efforts to provide developer-friendly APIs, there is more heavy lifting to be done in terms of offering uniform infrastructure assets at attractive prices. Who knows, these challenges may give rise to the next wave of startups aiming to solve this very problem.
Joseph Noronha is a Director at Detecon Inc., Deutsche Telekom’s management consulting division, where he leads the practice in Emerging Technologies and Infrastructure. He has extensive “on the ground” experience with infrastructure players around the world, spanning the Americas, Europe, the Middle East, Africa and Asia. His interests lie in and around Next Generation Connectivity, IoT, XaaS and, more recently, Edge computing – from product conceptualization and ecosystem building to driving and managing the commercial deployment of these services.
Vishal Gupta is a Senior Executive with strong global expertise in establishing product and business lines, with a focus on introducing innovative technologies and products. His background encompasses both Mobile and Cloud technologies addressing the Edge Compute / 5G / Converged arena. His most recent role was Vice President, Sales and Business Development at Qualcomm Datacenter Technologies, Inc.
Opinions expressed in this article do not necessarily reflect the opinions of any persons or entities other than the authors.