A Tug of War at the Edge


By Joseph Noronha

Director and COO, Detecon Inc.

Operators around the world are falling over themselves to claim first place in the 5G race, with a string of announcements and rollouts across Asia, Europe and North America. For many players, a parallel track is the “cloudification” of their own networks, catalyzed by deploying compute in their central offices (COs) as a start to running their own virtualized network functions. With an eye to developments around newer latency-critical applications, some of them (e.g. Verizon, Deutsche Telekom) have begun to roll out infrastructure dedicated to serving third-party applications. Fully cognizant that operators themselves have a non-existent track record of working with developers, some have even founded secondary companies (e.g. envrmnt, MobiledgeX) to attract this hitherto elusive community.

While this go-it-alone strategy may seem sound at first (“We build infra and work with another firm that offers services”), certain elements merit a rethink of this approach:

  • Developers: The companies that build applications want to reach as large an audience as possible, which means being able to deploy the same applications globally, across many operators. They also want to use the tools and platforms they are already familiar with. This requires not only a developer-friendly interface, but also standardized APIs and infrastructure. Operators, by their very nature, are regional at best, both in geographical coverage and in mindset; in their urge to stand out, commonality of infrastructure is not high on their priority list. The potential result is that infrastructure standards, when developed, are reduced to the least common denominator (suffering the fate of Joyn) or remain non-standardized across the world.

  • Cost: The second and equally important element is cost, specifically the price at which a compute cycle can be made available to a developer. Having failed to compete with the cloud platforms of Amazon, Microsoft and Google, operators such as Verizon and AT&T have sold off their large-scale datacenter assets and exited the business. On the other hand, players like Amazon Web Services have developed massive economies of scale, evidenced by their ability to lower prices over 70 times during the past decade—something hard to imagine an operator doing of its own volition. While operators are experts in managing telco assets, they have not been successful in building and operating a distributed cloud infrastructure at lowest-unit-cost economics. To do so would require them to compete with the purchasing power of a hyperscale provider, while also adapting to the maintenance and replacement of equipment with a shorter lifecycle (about 3 years for cloud servers, compared to the typical 5-7 years for telco equipment). Operators would also have to build a highly efficient cloud operating model. On their own, an operator-created cloud will likely have high costs; and if costs remain high, developer interest will remain scarce except for the most demanding applications that cannot function without the edge, which in turn would dramatically reduce the overall addressable market.

  • Mindset: The third element is mindset, specifically that of sharing infrastructure, especially with third parties. While operators have computing infrastructure in their networks, they have traditionally utilized it only for their own internal operations, as part of their overall virtualization strategy. Deploying servers to run virtualized network functions has been driven by cost optimization, not by a desire to enable new third-party services. While there are valid arguments against sharing—cybersecurity and network reliability being two of the most salient—the tradeoff is often underutilized idle resources. However, if there were a way to safely and securely access untapped capacity—if it were possible to operate a multi-tenant environment with the operator as an anchor tenant—it would turn a hitherto cost element into a revenue-generating engine.

Cloud providers have already confronted and dealt with many of these very same issues operators face. For example, the cloud providers:

  • Have fostered a rich developer ecosystem. They have access to and work with a large population of developers. They offer standardized infrastructure that’s accessible to developers around the globe, bridging national and regional constraints. While an operator could offer access to a country, a cloud provider could provide access to the world, across operators.
  • Benefit from massive economies of scale and are familiar with managing a large-distributed infrastructure in a highly automated manner. This ensures not only high availability, but also excellent unit-economics.
  • Are able to maximize asset utilization (an important consideration in a relatively constrained environment such as the infrastructure edge), enabling resource reservation down to a second.

While operators and cloud providers have contended with each other in the past, combining the strengths of these “frenemies” offers up an interesting proposition. It would incentivize the cloud players to invest towards making a uniform global edge infrastructure accessible to a legion of developers at an attractive price point. 

We are seeing glimpses of this happening. Recently, for example, AT&T announced partnerships with both Microsoft and IBM that could easily extend to include collaboration on an edge cloud. Another example is TIM teaming up with Google. Such collaborations would open up the “beachfront property,” accelerating the development of new applications that depend on and can benefit from the new “edge” in the computing continuum.

Detecon Inc. is a knowledge and consulting center that focuses on digital innovation trends originating from Silicon Valley. As part of the German-based Detecon Group, Detecon USA has a dual mission – serving as Detecon’s innovation spearhead and managing its Americas operations.


Crossing the Edge Chasm: Two Essential Problems Wireless Operators Need to Solve Before the Edge Goes Mainstream


Geoffrey Moore’s landmark book Crossing the Chasm offers insight into how wireless operators are being challenged to make edge computing mainstream. Read on to understand the gap between what will satisfy innovators and early adopters and what is required to be adopted by the mainstream.

Editor’s Note: This is a guest post from an industry expert. The State of the Edge blog welcomes diverse opinions from industry practitioners, analysts, and researchers, highlighting thought leadership in all areas of edge computing and adjacent technologies. If you’d like to propose an article, please see our Submission Guidelines.

In 1991, Geoffrey Moore introduced the world to Crossing the Chasm, one of the most influential business books (and corresponding ideas) of that decade. In this book, Moore convincingly argues that all new technologies proceed through a predictable technology adoption life cycle, starting with innovators and early adopters and ultimately reaching early majority, late majority and laggards. Moore’s primary contribution, and the focus of his book, is the recognition that most new technologies hit a stall point as they transition from serving innovators and early adopters and seek to expand their solution to also serve the early majority.

There is a large and difficult-to-cross “chasm” that slows and often stalls technology adoption. This is the gap between what will satisfy innovators and early adopters and what is required to be adopted by the mainstream.

Judging from the hype around edge computing, one might conclude that this is an exceptional technology, effortlessly leaping across the chasm and quickly becoming mainstream. Don’t be fooled: it takes more than hype for a technology to cross the chasm. If we’re not careful, we’ll overlook some of the key obstacles to wide scale adoption of edge computing in the belief that they will somehow iron themselves out.

As pointed out in Infrastructure Edge: Beachfront Property for the Mobile Economy, wireless operators have a unique opportunity to leverage their proximity to the last mile network and profit from the explosion of edge services. However, operators also have a reputation for making lofty promises that are rarely delivered on. No wonder that, apart from a few forward-leaning operators (AT&T and Deutsche Telekom come to mind), most are sitting on the precipice of the edge, uncertain of how best to proceed. The industry must face, head on, the key barriers keeping the edge from going mainstream, acknowledge the challenges ahead, and begin advocating for solutions. In particular, we see two essential problems which must be solved:

  • Developers need a uniform infrastructure to deliver a seamless experience without a lot of bespoke coding and high-complexity operations.
  • Infrastructure owners—and the entire edge computing industry—need to develop efficient unit economics to drive edge computing down the cost curve at scale.

The rest of this article will present these two barriers in detail, as well as offer some ideas for how they may be surmounted.

Infrastructure that can deliver a seamless experience

Today’s developers leverage cloud infrastructure by simply going to one of the main providers (Amazon, Google, Microsoft), selecting a configuration and, a few clicks later, they’re ready to begin pushing code. The developer can be assured that the service will be available and familiar because, irrespective of the region, the major public cloud providers own and operate an extensive infrastructure that has been engineered for conformity. The developer simply needs to focus on developing their application and getting it to the market, resting easy that wherever they have access to the provider of their choice, the application will just work!

Now think about this in the context of the infrastructure edge, with thousands of micro data centers located at the base of cell towers and in wireless aggregation hubs. The most likely outcome will consist of a vast, distributed compute infrastructure, owned not by one single entity (e.g. Amazon or Microsoft) but by several smaller national or regional operators.

We see some promising initiatives, such as Akraino and ETSI MEC, that hope to present open-source APIs that expedite the development of edge applications. But many of these initiatives are backed by their own vested interest groups, and there is a danger that the proliferation of such groups may result in the fragmentation of the ecosystem at a time when just the opposite is needed. This view is not isolated, with folks such as Axel Clauberg sounding similar warnings in recent months.

While these software-driven efforts show promise, they do not address the underlying structural challenges. For example, you may have one operator with a 3-year-old CPU-heavy edge infrastructure and another operator with a state-of-the-art GPU configuration. While we might be able to abstract away the underlying software stack variations, how can a developer be sure of rendering the same experience to their end users on top of such heterogeneous computing assets on a global basis?

Solving for a Seamless Infrastructure

Delivering a seamless infrastructure is not something that can be easily solved by operators alone. Most multi-operator initiatives have fizzled out (remember Joyn/RCS?) or have been too slow to be effective in a fast-evolving environment. Solving for seamless infrastructure may require thinking outside the “operator box,” contemplating new business practices, partnerships and models. Here are two ideas:

Engage with the existing cloud providers

Partnering with the large cloud providers may not be appetizing for many, given that operators have long obsessed about owning their control points—but partnering with the web giants is indeed a viable option, especially for the smaller players. Engaging with cloud providers could be direct (e.g., deploying your own data centers and standing up an Azure Stack-type solution in partnership with Microsoft) or via third-party firms such as Vapor IO, which is deploying carrier-neutral data centers that will host equipment from all the major cloud providers. There is money to be made in partnership with cloud providers, albeit at the cost of giving up some level of control.

Engage via a neutral entity

An increasingly viable option is for a neutral entity to step in and drive this discussion: one that understands developer concerns and has the ability to drive a uniform approach. A variety of players could fulfill this need. A good example is the operator-founded MobiledgeX, which aims to provide a prescriptive design along with a vendor ecosystem that can deliver solutions based upon the types of end applications the operator is open to supporting. Yet another option is to align with players such as Intel and Nvidia, or with large system integrators, as these are all companies that can drive reference designs and implementations.

Driving efficient unit economics

While it is one thing to be able to offer infrastructure edge, it is another thing to be able to offer it at a compelling price. Looking at current use cases, we see a few which are critically dependent upon edge for functionality—these applications simply will not function without edge infrastructure. However, a large number of use cases can benefit from edge infrastructure but are not dependent upon it. For the former, the sky’s the limit in terms of pricing—the application simply will not work without edge deployments. For the remaining use cases, it comes down to whether it makes economic sense to enhance the experience with edge infrastructure.

The rapid pace at which compute and storage components improve puts a great deal of pressure on infrastructure owners to continuously upgrade their equipment, further complicating the delivery of low cost unit economics. For example, the performance of Nvidia GPUs has nearly doubled every year since 2015.

Source: https://wccftech.com/nvidia-pascal-gpu-gtc-2015/

Application developers quickly find uses for increased horsepower. The cloud providers are well aware of this and wield significant technical and financial muscle to ensure that they have the right infrastructure available to support this trend. This is relatively virgin territory for operators, who have experience in building out and maintaining infrastructure over a 5-7 year depreciation period (15-20 years for civil infrastructure) – not something that potentially needs replacing every 2 to 3 years.

Another area where cloud providers have a leg up on operators is in the operation of this infrastructure. Cloud providers have developed deep expertise in designing highly automated zero-touch systems. All of these factors combine to allow cloud providers to offer computing power at scale and with compelling unit economics. Operators, in contrast, have no track record of being cost-effective cloud providers and depend a great deal upon vendors (many of them with legacy telecom mindsets themselves). You can see some of these challenges as operators struggle to deploy their own internal clouds to support NFV and SDN.

Put two and two together and you may end up with an operator who offers outdated infrastructure at a premium price… You get the picture.

Solving for Efficient Unit Economics

There is unfortunately no easy shortcut to unit cost efficiency. Operators need to take a page from the cloud provider playbook to accelerate the deployment of edge infrastructure, including adding experts from the cloud world to manage their infrastructure. An alternative is to instead partner with existing cloud providers, adopting risk-sharing business practices and new business models (e.g., revenue share) to align incentives among all parties. Furthermore, operators should consider subsidizing costs at the outset rather than demanding large premium profits from day one. This will allow developers to experiment with edge computing at price points comparable to existing public cloud services.

Conclusion

Unless we can individually or collectively solve the infrastructure and economic challenges presented above, edge computing may have a difficult time crossing the chasm—or may fall into it!

We do need to convince the developer community of the myriad benefits that the infrastructure edge has to offer. While there are efforts to provide developer-friendly APIs, there is more heavy lifting to be done in terms of offering uniform infrastructure assets at attractive prices. Who knows, these challenges may facilitate the next wave of startups aiming to solve this very problem.

Joseph Noronha is a Director at Detecon Inc., Deutsche Telekom’s management consulting division, leading their practice in Emerging Technologies and Infrastructure. He has extensive “on the ground” experience with infrastructure players around the world, spanning the Americas, Europe, Middle East, Africa and Asia. His interests lie in next-generation connectivity, IoT, XaaS and, more recently, edge computing – from product conceptualization and ecosystem building to driving and managing the commercial deployment of these services.

Vishal Gupta is a senior executive with strong global expertise in establishing product and business lines, with a focus on introducing innovative technologies and products. His background encompasses both mobile and cloud technologies addressing the edge compute / 5G / converged arena. His latest role was Vice President, Sales and Business Development at Qualcomm Datacenter Technologies, Inc.

Opinions expressed in this article do not necessarily reflect the opinions of any persons or entities other than the authors.


Four Killer Services for the Wireless Edge


As telco operators look to harden their business models around edge investment, the conversation always comes back to use cases. Joseph Noronha of Detecon serves up what he sees as four killer services for the wireless edge.

Editor’s Note: This is a guest post from an industry expert. The State of the Edge blog welcomes diverse opinions from industry practitioners, analysts, and researchers, highlighting thought leadership in all areas of edge computing and adjacent technologies. If you’d like to propose an article, please see our Submission Guidelines.

In my previous post, I argued that the infrastructure edge, owned and operated by the telecom carriers, has become the critical bridge between the wired and wireless worlds. Telco operators have unique assets and are among the best positioned to catalyze the next generation of internet apps by deploying what I call “killer services.” These killer services could create new revenue streams for telco operators and accelerate the work of edge-native developers.

When I discuss edge computing among my peers, it quickly comes down to the “use case” question. Every operator would like to see concrete and validated use cases before doubling down on their commitment to edge computing. While most operators today see the inevitability of edge computing, they don’t fully agree on whether it is an extension of business as usual or an opportunity to create a whole new market. Instead of engaging in this debate, I propose to lay out what I see as four killer service categories for the wireless edge, which are:

  • Network services
  • Data compression/condensation
  • Compute offload
  • Artificial Intelligence

I’ll tackle each of these in turn.

Network Services

What is It?

Telco networks are becoming increasingly programmable, and operators now have the tools to expose network assets in a manner that application developers can easily consume. This is more of an evolutionary than a revolutionary case, but one of the most powerful. Think of the APIs that companies like Twilio offer. At their core, these APIs take straightforward telco capabilities such as SMS and voice calls and encapsulate them in a developer-friendly offering. At the wireless edge, there are new types of capabilities that are mostly mobile, location-specific and temporal, such as network congestion or traffic type within a cell sector. How could this information be monetized? It is not that operators have not tried. They have, and a good example is Norway’s Telenor. However, for all the power of the Telenor APIs, they never became a widespread solution. We still do not have a large-scale, multi-country way to programmatically access wireless network services, which hinders adoption by developers. Another barrier is usability; I daresay that, apart from a few exceptions such as Telenor, many developers would rather do without than jump through the hoops required to understand how to gain access to and use these assets.
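
To make the idea concrete, here is a purely illustrative sketch of what a developer-friendly wrapper over cell-sector conditions might look like. The endpoint shape, field names and values below are invented for this example; no operator exposes exactly this API.

```python
import json

# Hypothetical payload from an imaginary network-conditions API.
# Real operator APIs differ; this only sketches the developer-facing shape.
SAMPLE_RESPONSE = json.dumps({
    "cell_sector": "US-CA-0417-B",
    "congestion_pct": 62,
    "dominant_traffic": "video",
})

def congestion_for_sector(raw_json: str) -> int:
    """Extract the one figure a developer would actually consume."""
    return json.loads(raw_json)["congestion_pct"]

def should_reduce_bitrate(raw_json: str, threshold: int = 60) -> bool:
    """A client app might throttle video quality when the sector is congested."""
    return congestion_for_sector(raw_json) > threshold

print(should_reduce_bitrate(SAMPLE_RESPONSE))  # True: 62 > 60
```

The point is the packaging, not the plumbing: the raw, temporal network data stays with the operator, while the developer sees a simple yes/no signal they can act on.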

Why is it a Killer Service?

The infrastructure edge, at its essence, is all about location and things that are locally relevant. Use case scenarios such as anti-spoofing, location collaboration, and network utilization have limited benefit if handled by the central cloud; the edge serves as a quicker, simpler way of offering these services for developers, which can unlock new services for customers and new revenue streams for operators.

Data Compression/Condensation

What is It?

Compressing and condensing data close to the network edge addresses two challenges faced by today’s wireless networks:

  • Right now, traffic is asymmetric: roughly 90-95% of it flows downstream, with limited upstream capacity. As enterprises and cities deploy IoT sensors that generate terabytes of data and consumers start looking to upload 8K video, it is anyone’s guess what long-term impact that will have on the network.
  • Shipping data to a centralized facility is not free. As an end consumer, it may seem free, but when you’re transmitting zettabytes of data the costs add up quickly.


This raises the question: is all this data relevant and essential enough to be handled in the central cloud? Would it be more prudent and economical to condense it close to the source in order to manage the network load?
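
As a toy sketch of the idea (the window size and the choice of summary statistics are mine, not anything an operator prescribes), condensing a sensor stream at the edge before shipping it upstream might look like:

```python
from statistics import mean

def condense(readings, window=5):
    """Condense raw sensor readings into per-window summaries (min/mean/max),
    so only the summaries -- not every sample -- travel upstream."""
    summaries = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        summaries.append({"min": min(chunk), "mean": mean(chunk), "max": max(chunk)})
    return summaries

# Ten temperature samples condensed into two upstream records.
raw = [21.0, 21.2, 20.9, 21.1, 21.0, 24.8, 25.1, 24.9, 25.0, 25.2]
condensed = condense(raw)
print(len(raw), "->", len(condensed))  # 10 -> 2
```

Even this crude 5:1 reduction preserves what a central anomaly detector typically needs (the range and trend) while cutting upstream volume substantially.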

Why is it a Killer Service?

While 5G promises significant increases in data-carrying capacity, simply shunting data around requires additional spend — and either the operator pays for this, or the developer of the service that uses this bandwidth does, or it is passed on to the end consumer in the form of higher prices. If there is a way to better handle this stream, it could provide manifold benefits to all three parties in this equation.

Compute offload

What is It?

Compute offload refers to either moving compute off the device or bringing the central cloud closer to the user. This is not necessarily about low latency; adding latency and strict SLAs to the equation brings a host of other challenges. It is more about saving battery life, form factor and cost on mobile devices, while offering users significantly greater computing power at their fingertips. Mobile operators can offer cloud-like services at the edge and charge for them in ways similar to centralized cloud services. You would not need it to run all your applications, but on the occasions when you do, you can tap into the resources at the wireless edge. In turn, the wireless edge can run at high utilization by serving a large number of users: a win-win scenario. With specialized computing capabilities at the edge, such as those offered by GPUs, end users may get longer lives out of their smartphones. If latest-generation phone capabilities are delivered via edge computing, then consumers may not need to upgrade their phones every 6 to 18 months to get the latest capabilities.
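
To make the tradeoff concrete, here is a toy offload heuristic. Every number in it (device and edge throughput, round-trip time, battery threshold) is invented for illustration; a real policy would be far more nuanced.

```python
def choose_execution_site(battery_pct, workload_gflops,
                          device_gflops=50, edge_gflops=2000, edge_rtt_ms=15):
    """Crude offload decision (illustrative thresholds, not a real policy):
    offload when the device is low on battery, or when local execution
    would take longer than the edge round trip plus edge compute time."""
    local_ms = workload_gflops / device_gflops * 1000
    edge_ms = edge_rtt_ms + workload_gflops / edge_gflops * 1000
    if battery_pct < 20:
        return "edge"  # preserve battery regardless of speed
    return "edge" if edge_ms < local_ms else "local"

print(choose_execution_site(80, 0.5))  # small job, healthy battery -> local
print(choose_execution_site(80, 500))  # heavy job -> edge
print(choose_execution_site(10, 0.5))  # low battery -> edge
```

Note how latency appears only as a cost term, not a requirement: the heuristic offloads to save battery and to borrow horsepower, which is exactly the framing above.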

Why is it a Killer Service?

Simply because applications are continually demanding more resources, and device vendors are continuously having to provide better chipsets, does not mean that this model will continue to scale. There are two issues here. First, device release cycles are still much slower than app release timelines, which means even the highest-end phones can quickly get out of date. Second, it is not clear that today’s device economics will continue to work out (I may stand corrected here if, within two years, people get used to paying $2,000 for an iPhone, but I suspect there is a threshold).

Artificial Intelligence

What is It?

Artificial Intelligence at the edge is really an extension of compute offload. It refers to operators hosting AI and machine learning microservices on the edge. These would likely be workloads that are too computationally intensive to be run on the end devices (especially sensor types) and the wireless edge could serve as a natural host.

Why is it a Killer Service?

With all the hype around AI, it is easy to miss the fact that we are just at the initial stages of discovering its true impact. By most estimates, AI will become increasingly commonplace over the next decade. The proliferation of microservices and the rise of serverless computing make it practical to host AI-related services in an edge environment, where they can be called upon securely, tasked to execute instructions and then release their compute resources when complete, all in a seamless fashion. AI at the edge could spawn an entire ecosystem of third-party microservices, built by companies that provide key enabling services rather than complete end-user applications. A rich ecosystem of services would likely beget a marketplace focused on offering AI capabilities, similar to those of Microsoft and Algorithmia. Developers would have access to these services, which would be verified to work with edge infrastructure and available on a pay-as-needed basis—all factors further reducing the barrier to developing the next generation of pervasive and immersive applications for man or machine.
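
The acquire-execute-release pattern described above can be sketched as follows. Everything here is a stand-in: the “lease” mimics what a real platform would do when allocating a GPU slice or container, and the one-line classifier substitutes for an actual ML model.

```python
from contextlib import contextmanager

@contextmanager
def edge_compute_lease():
    """Stand-in for reserving edge resources for the life of one request;
    a real serverless platform would allocate a container or GPU slice here."""
    resources = {"allocated": True}
    try:
        yield resources
    finally:
        resources["allocated"] = False  # released the moment the call completes

def classify(reading: float) -> str:
    """Toy 'model': a real deployment would invoke an actual ML model."""
    return "anomaly" if reading > 30.0 else "normal"

def handle_request(reading: float) -> str:
    """Stateless, serverless-style handler: acquire, infer, release."""
    with edge_compute_lease():
        return classify(reading)

print(handle_request(42.0))  # anomaly
print(handle_request(21.5))  # normal
```

Because each call holds resources only for its own duration, many tenants’ microservices can share the same constrained edge site, which is what makes the marketplace model plausible.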

Summary

The next time you want to think about what to do with the infrastructure edge, consider these four killer services. Based upon where you are in your edge strategy and deployment, they could justify a business investment and help accelerate the large-scale rollout of edge computing.

Joseph Noronha is a Director at Detecon Inc., Deutsche Telekom’s management consulting division, leading their practice in Emerging Technologies and Infrastructure. He has extensive “on the ground” experience with infrastructure players around the world, spanning the Americas, Europe, Middle East, Africa and Asia. His interests lie in next-generation connectivity, IoT, XaaS and, more recently, edge computing – from product conceptualization and ecosystem building to driving and managing the commercial deployment of these services.

Opinions expressed in this article do not necessarily reflect the opinions of any person or entity other than the author.


Infrastructure Edge: Beachfront Property for the Mobile Economy


Mobile operators and other owners of wireless infrastructure command the most valuable “beachfront property” in the race to develop a global edge cloud. Joseph Noronha of Detecon takes us on a journey along the coastline to explore this idea.

Editor’s Note: This is a guest post from an industry expert. The State of the Edge blog welcomes diverse opinions from industry practitioners, analysts, and researchers, highlighting thought leadership in all areas of edge computing and adjacent technologies. If you’d like to propose an article, please see our Submission Guidelines.

When I began looking into edge computing just over 24 months ago, weeks would go by with hardly a whimper on the topic, apart from sporadic briefs about local on-premises deployments. Back then, there was no State of the Edge Report and certainly no Open Glossary of Edge Computing. Today, an hour barely passes before my RSS feed buzzes with the “next big announcement” around edge. Edge computing has clearly arrived. When Gartner releases their 2018 Gartner Hype Cycle later this month, I expect edge computing to be at the steepest point in the hype cycle.

Coming from a mobile operator heritage, I have developed a unique perspective on edge computing, and would like to double-click on one particular aspect of this phenomenon, the infrastructure edge and its implication for the broader ecosystem.

The Centralized Data Center and the Wireless Edge

So many of today’s discussions about edge computing ascribe magical qualities to the cloud, suggesting that it’s amorphous, ubiquitous and everywhere. But this is a misconception. Ninety percent of what we think of as cloud is concentrated in a small handful of centralized data centers, often thousands of miles and dozens of network hops away. When experts talk about connecting edge devices to the cloud, it’s common to oversimplify and emphasize the two endpoints: the device edge and the centralized data center, skipping over the critical infrastructure that connects these two extremes—namely, the cell towers, RF radios, routers, interconnection points, network hops, fiber backbones, and other critical communications systems that liaise between edge devices and the central cloud.

In the wireless world, this is not a single point; rather, it is distributed among the cell towers, DAS hubs, central offices and fiber routes that make up the infrastructure side of the last mile. This is the wireless edge, with assets currently owned and/or operated by network operators and, in some cases, tower companies.

The Edge Computing Land Grab

The wireless edge will play a profound and essential role in connecting devices to the cloud. Let me use an analogy of a coastline to illustrate my point.

Imagine a coastline stretching from the ocean to the hills. The intertidal zone, where the waves lap upon the shore, is like the device edge: full of exciting activity and a robust ecosystem, but too ephemeral and subject to change for building a permanent structure. Many large players, including Microsoft, Google, Amazon, and Apple, are vying to win this prized spot closest to the water’s edge (and the end user) with on-premises gateways and devices. This is the domain of AWS Greengrass and Microsoft IoT Edge. It’s also the battleground for consumers, with products like Alexa, Android, and iOS devices. In this area of the beach, the battle is primarily between the internet giants.

On the other side of the coastline, opposite the water, you have the ridgeline and cliffs, from which you have an eagle’s-eye view of the entire surroundings. This “inland” side of the coastline is the domain of regional data centers, such as those owned by Equinix and Digital Realty. These data centers provide an important aggregation point for connecting back to the centralized cloud and, in fact, most of the major cloud providers have equipment in these colocation facilities.

And in the middle — yes, on the beach itself — lies the infrastructure edge, possibly the ideal location for a beachfront property. This space is ripe for development. It has never been extensively monetized, yet one would be foolhardy to believe that it has no value.

In the past, the wireless operators who caretake this premier beachfront space haven’t been successful in building platforms that developers want to use. Developers have always desired global reach along with a unified, developer-friendly experience, both of which are offered by the large cloud providers. Operators, in contrast, have largely failed on both fronts—they are primarily national, maybe regional, but not global, and their area of expertise is in complex architectures rather than ease of use.

This does not imply that operators are sitting idle here. On the contrary, every major wireless operator is actively re-engineering their networks to roll out Network Function Virtualization (NFV) and Software Defined Networking (SDN) along the path to 5G. These software-driven network enhancements will demand large amounts of compute capacity at the edge, which will often mean micro data centers at the base of cell towers or in local antenna hubs. However, these are primarily inward-looking use cases, driven more from a cost-optimization standpoint than a revenue-generating one. In our beach example, it is more akin to building a hotel’s call center on the beachfront rather than a hotel that is open to guests. It may satisfy your internal needs, but it does not generate top-line growth.

Developing the Beachfront

Operators are not oblivious to the opportunities that may emerge from integrating edge computing into their networks; however, there is a great lack of clarity about how to go about doing this. While powerful standards are emerging from the telco world, one of the most notable being Multi-access Edge Computing (MEC), which provides API access to the RAN, there is still no obvious mechanism for stitching these together into a global platform, one that offers a developer-centric user experience.

All is not lost for the operators; there are a few firms, such as Vapor IO and MobiledgeX, that have close ties to the infrastructure and operator communities and are tackling the problems of deploying shared compute infrastructure and building a global platform for developers, respectively. Success is predicated on operators joining forces, rather than going it alone or adopting divergent and non-compatible approaches.

In the end, just like a developed shoreline caters to the needs of visitors and vacationers, every part of the edge ecosystem will rightly focus on attracting today’s developer with tools and amenities that provide universal reach and ease of use. Operators have a lot to lose by not making the right bets on programmable infrastructure at the edge that developers clamor to use. Hesitate, and they may very well find themselves eroded and sidelined by other players, including the major cloud providers, in what is looking to be one of the more exciting evolutions to come out of the cloud and edge computing space.

Joseph Noronha is a Director at Detecon Inc., Deutsche Telekom’s management consulting division, leading their practice in Emerging Technologies and Infrastructure. He has extensive “on the ground” experience with infrastructure players around the world, spanning the Americas, Europe, Middle East, Africa and Asia. His interests lie in next-generation connectivity, IoT, XaaS and, more recently, edge computing – from product conceptualization and ecosystem building to driving and managing the commercial deployment of these services.

Opinions expressed in this article do not necessarily reflect the opinions of any person or entity other than the author.