
Edge Computing and Highly-Personalized Notifications


Nearly 3 billion people worldwide use smartphones. For many of us, these devices are our primary connection to the larger world. These personal devices store and express our digital identities and, as they become more sophisticated, they offer us more personalized experiences.

Our devices have become hyper-personalized, serving up tailored ads and social media suggestions while automating simple conveniences, such as adjusting screen brightness to our observed tastes. In this way, our phones increasingly provide experiences that reflect our exact preferences. Customized push notifications in particular have the potential to offer users individualized alerts on a moment-appropriate basis.

But there is a dark side to hyper-personalization. Astonishingly, 2.5 quintillion bytes of data are generated every day, a rate that will only increase. This mass of data is a tremendous tool for improving the digital lives of device users; properly employed, it can power a customized experience across all platforms. But recent data security breaches, such as the Cambridge Analytica scandal, have shown the risks of exposing personal data. Where is the line between mobile engagement and mobile invasiveness?

The Architecture of Push Notification Platforms

Traditional push notification platforms take a three-step approach to sending personalized notifications to mobile users:

  1. The mobile app first uploads its push token to the application's backend server. 
  2. The backend server forwards the personalized notification message to a cloud service, such as APNS for iOS devices or Firebase for Android. 
  3. APNS or Firebase then makes reasonable efforts to deliver the notification to the user's device. The mobile app will likely track whether the user taps on the notification (which opens the app).
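
To make the three steps concrete, here is a minimal sketch of the backend's role. It assumes Python on the server; the provider URLs follow APNS's and Firebase's public endpoint conventions, but authentication headers and error handling are omitted, and the token-store and payload shapes are illustrative:

```python
# Sketch of the traditional three-step flow, from the backend's point of view.
# Auth headers are omitted and the payload shapes are illustrative assumptions.
import requests

TOKEN_STORE = {}  # user_id -> (platform, push_token); normally a database

def register_token(user_id: str, platform: str, push_token: str) -> None:
    """Step 1: the mobile app uploads its push token to the backend."""
    TOKEN_STORE[user_id] = (platform, push_token)

def send_notification(user_id: str, message: str) -> None:
    """Step 2: the backend forwards the message to APNS or Firebase."""
    platform, token = TOKEN_STORE[user_id]
    if platform == "ios":
        url = "https://api.push.apple.com/3/device/" + token   # APNS
        payload = {"aps": {"alert": message}}
    else:
        url = "https://fcm.googleapis.com/fcm/send"            # Firebase (legacy endpoint)
        payload = {"to": token, "notification": {"body": message}}
    # Step 3 happens inside the provider: best-effort delivery to the device.
    requests.post(url, json=payload, timeout=5)
```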

While this model works well enough, it also proves to be problematic. 

First, reliability is a key pain point in the push notification industry. In a fast-paced world where news and situations change faster than information about them can be relayed, it's crucial to communicate with users and customers in real time. Traditional cloud servers can introduce hours of delay, as notifications are held until some data is collected from the mobile app on the device. In some cases, this can cause notifications to be sent in the wrong order, or to fail to be delivered at all.

Second, there are ever-increasing concerns over data security. Since the passage of the EU’s GDPR, push notification platforms have updated their privacy policies to clarify that the onus is on the mobile app itself to guarantee compliance. This means mobile apps need to ask users for their consent to have their data processed by third-party servers. 

Edge computing offers an alternative. By pushing the messaging platform to the edge, it's possible to bypass centralized servers altogether.

Edge Computing and the Next Stage of Push Notifications

By processing data at the edge of the last mile network, algorithms can ensure that only crucial data is forwarded to a permanent, centralized server. This has tremendous potential for real-time information and analytics, not to mention a reduction in the amount of data that must be transmitted to a centralized server. It could also disrupt the way people and devices interact as the Internet of Things (IoT) becomes increasingly interconnected.
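
As a minimal sketch of that filtering idea (the threshold and field names are assumptions, not any particular product's schema), an edge node might keep only anomalous readings and discard the rest:

```python
# Minimal sketch of edge-side filtering: process readings locally and forward
# only the "crucial" ones upstream. Field names and threshold are illustrative.
def filter_at_edge(readings: list[dict], threshold: float) -> list[dict]:
    """Keep only readings anomalous enough to be worth a round-trip to the core."""
    return [r for r in readings if abs(r["value"] - r["expected"]) > threshold]

# Example: only the second reading is forwarded to the central server.
print(filter_at_edge(
    [{"value": 20.1, "expected": 20.0}, {"value": 35.0, "expected": 20.0}],
    threshold=5.0,
))
```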

So how will this affect push notifications? 

By incorporating edge computing into the design of a push notification SDK, it’s possible to cut out the necessity of a cloud storage center. Data analysis for segmentation will thus occur on the device itself, with push tokens being stored on the app’s backend servers located at the edge rather than in a centralized cloud. By cutting out the roundtrip path to centralized servers, we can create a more direct, agile flow of data—with the ability to communicate and respond to data triggers in true real-time. 
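
A sketch of what such a device-side decision might look like; the profile fields and rules below are hypothetical and do not represent any real SDK's API:

```python
# Sketch of a device-side segmentation decision: the rule runs on the phone
# itself, so raw behavioral data never leaves the device. All fields are
# hypothetical illustrations.
def should_deliver(profile: dict, notification: dict) -> bool:
    """Decide locally whether this notification matches the user's segment."""
    return (
        notification["segment"] in profile["segments"]
        and profile["sessions_last_week"] >= notification.get("min_sessions", 0)
    )

# Example: a "frequent_user" campaign matched entirely on-device.
profile = {"segments": {"frequent_user"}, "sessions_last_week": 9}
print(should_deliver(profile, {"segment": "frequent_user", "min_sessions": 5}))
```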

Edge computing provides more ways to communicate with users without sacrificing privacy, and it can do so in highly context-sensitive ways, delivering messages at exactly the right moment on a person-by-person basis.

For example, the Domino's app on a person's phone can coordinate with the device's GPS without revealing the user's location to a centralized server. Walk within a certain distance of a Domino's restaurant, and the app can offer a discount code for a pizza. Or imagine purchasing plane tickets to Madrid with a travel app. Future push notifications could direct you to an in-app function for booking hotels on the dates required, suggest the top 10 tourist attractions in Madrid, or keep you up to date with real-time flight information.
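
The pizza scenario reduces to an on-device geofence check. Here is a sketch with hypothetical store coordinates; the point is that the comparison happens locally, so the location never leaves the phone:

```python
# On-device geofence sketch: the device compares its own coordinates against
# a locally cached store list. Coordinates and radius are illustrative.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    R = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def check_geofence(device_location, store_locations, radius_km=0.5):
    """Runs on the device: the location is checked locally, never uploaded."""
    for store in store_locations:
        if haversine_km(*device_location, *store) <= radius_km:
            return "Show discount-code notification"
    return None

# Example: within 500 m of a (hypothetical) store.
print(check_geofence((53.3498, -6.2603), [(53.3500, -6.2600)]))
```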

Sending the Right Notification at the Right Time

With access to a user's calendar, travel habits inferred from phone usage, and local weather and news data, an app can pinpoint the exact moment a message will be relevant to you, which is also the moment you are most likely to engage with it. For example, a traffic app that detects inclement weather or a collision in your area can send a notification warning you to drive carefully, or offer alternate routes to your destination.

Crucially, app developers can pause notifications when the user is busy, say, in the morning when they're getting ready for work, during rush hour, or when they're sleeping. There's a sweet spot in the number of notifications an app can send before it becomes a nuisance (of course, this number varies from app to app), so it's important to make every notification count.

For apps that present entertaining content, the best moment to offer a notification might be when the user is relaxed, at home, and scrolling through content on their phone without other distractions. If a user's headphones are plugged in and their phone is unlocked, this can be the ideal time to send rich content, such as a video or a GIF. If a user's device is offline, it may be better to hold notifications until they are back online or within range of a Wi-Fi hotspot.
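
Pulling these conditions together, a device-side gating function might look like the following sketch; the quiet hours, field names, and rules are all illustrative assumptions, not any real SDK's behavior:

```python
# Sketch of device-side delivery gating using the signals described above.
# Every field name and rule here is an illustrative assumption.
from datetime import datetime

def delivery_decision(notification: dict, device: dict) -> str:
    hour = datetime.now().hour
    if device["sleeping"] or 7 <= hour <= 9:   # quiet hours: sleep, morning rush
        return "hold"
    if not device["online"]:
        return "hold until back online or on Wi-Fi"
    if notification.get("rich_media") and not (device["unlocked"] and device["headphones"]):
        return "hold rich content for a more receptive moment"
    return "deliver now"

# Example: a video notification waits until the phone is unlocked with headphones in.
device = {"sleeping": False, "online": True, "unlocked": False, "headphones": False}
print(delivery_decision({"rich_media": True}, device))
```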

Edge Computing Lets Users Keep Agency and Ownership of Their Data

To offer rich and contextual notifications with traditional notification mechanisms requires mass quantities of data to be stored on centralized servers. This creates substantial risk for data breaches, particularly for highly sensitive data, including data from financial or medical apps. Even when data is being processed exactly as intended—for example, when user data is sold to advertisers, as is often the case for free push notification packages—this may expose companies to consumer backlash as users become much more discerning about where their data is stored and who has access to it. 

Edge computing provides a mechanism for implementing a notification platform that is more sophisticated and offers real-time interactions with users. These modern notification platforms will completely replace the problematic architecture of traditional push notification services. By interacting, processing, and analyzing data at its source—on the device itself—mobile apps can personalize their communications while reducing the risk of data breaches. Data stays on the user’s device at all times, so the user retains full ownership and agency of their data. 

In a world where personal data is fast becoming appreciated as one of our most valuable resources, edge computing means mobile app users can be confident their information isn’t being sent to unknown third parties. Similarly, the apps are safe from liability should a breach occur. And above all, user engagement is optimized by providing them with timely, hyper-personalized notifications that are tailor-made to fit their interests, needs, and changing schedule. 


David Shackleton

David Shackleton is a co-founder of OpenBack, the only mobile engagement platform that uses edge computing and device-side decisions to deliver smart push notifications. David also co-founded Ding, the largest international top-up platform which launched in 2006 and delivers a top-up every second, across more than 130 countries. Prior to Ding, David was a management consultant with the Monitor Group in the US, working in Boston and New York.

Get an Edge on Distance Education with Virtual Reality

With social media giant Facebook's launch of Horizon, a Virtual Reality (VR) world, it's easy to see how VR could quickly penetrate the mainstream. From entertainment to business, VR has been establishing itself across many verticals, and that includes distance education.

Defining Distance Education

For the uninitiated, distance education is a formalized way of learning remotely using electronic communication. A distance education program can be completely remote or combine traditional classroom instruction with distance learning.

The main advantage of distance education is the ability to learn no matter where you live or what other responsibilities you have. You can fit your learning around your work and home life, making distance education especially beneficial for students with location or scheduling problems. And it’s definitely here to stay, as Pace University President Marvin Krislov reports that the demand for distance learning has been growing steadily in higher education.

Distance education is not without its challenges. The most commonly cited disadvantages of distance education are the lack of engagement and the difficulty of collaborating with other students. This is where VR comes in, as it can potentially revolutionize how institutions conduct distance education.

How Can VR Change Distance Learning?

Pennsylvania State University's Conrad Tucker argues that VR can address distance learning's limitations by introducing tactile interaction to combat the lack of engagement. By providing a real-time touch experience, where students can remotely "feel" objects, we introduce immediate engagement and experiential learning into the equation.

Because VR provides an immersive experience, the chance of getting distracted during class is reduced compared to regular distance learning, where a student can be easily distracted when watching a lecture on his or her computer. VR can create a more connected and dynamic learning environment compared to simply looking at a screen.

But the greatest promise of VR in education is the ability to put theory into practice with fewer resources. People learn best through practice, and VR gives students the opportunity to interact with lifelike simulations rather than just reading about them in an e-book.

Recently, the Cleveland Clinic Lerner College of Medicine has started using VR for their anatomy classes, which lets students dissect a body without an actual cadaver. This is a welcome update to traditional approaches to learning anatomy, which are limited by the inaccessibility of certain organs and the stark differences between living bodies and cadavers. Outside of medicine, there are VR applications that teach subjects like chemistry and astronomy to students across learning levels. Virtual Lab, for example, is an application that lets students conduct lab experiments and save them to the cloud.

Many educational institutions have much to gain from adopting VR into their curricula, and some are poised to integrate it more completely than other organizations. For example, schools like Maryville University are part of Apple's Distinguished Schools Program. This shows how online universities today can innovate the educational experience in a tech-forward, future-focused manner, integrating new technologies to enhance learning and usher in the digital age.

Edge Computing and Integrating VR

One of the biggest challenges VR applications face today is latency, especially when tactile (touch) components are included. Even the slightest delay in a VR headset can cause motion sickness, and any significant lag may render the application useless or too cumbersome, cancelling out its benefits.

By placing servers near the edge of the access network, rich tactile VR applications can be streamed to inexpensive headsets. This both improves the experience and potentially reduces the overall costs to students.

As Peter Christy points out, mobile operators may be in the ideal position to deliver VR applications. By investing in edge infrastructure, mobile operators can provide a platform that supports remote education at very large scale.

Virtual reality implemented with edge computing would add great value to the quality of education provided by remote learning institutions. Through edge computing, institutions can provide VR experiences that are immersive and lifelike, delivering better educational experiences to wider audiences.


Get Your Free Copy of our First Topical Report: Data at the Edge


State of the Edge has published its first topic-specific report, Data at the Edge: Managing and Activating Information in a Distributed World. Building upon the inaugural edge computing ecosystem report published in June of last year, this topical report focuses on managing and activating information using edge computing. The report is freely available here.

In the 26-page report, we examine how data is shaping the rise of the edge. It provides an overview of existing research and predictions around data growth, and highlights how businesses will become more efficient and competitive by extracting previously untapped value from data using edge computing. Key findings of the report include:

  • Data is proliferating at unprecedented speeds: Data generated by the year 2025 is expected to exceed 175 zettabytes, a tenfold increase from 2016 levels. The need to manage this staggering volume of data is going to be a key driver of distributed architecture.
  • The center of data's gravity is shifting to the edge of the network. As a result, data will need to be acted upon at or near the edge, away from the core.
  • Four key factors drive demand for edge computing: latency; high data volume accompanied by insufficient bandwidth; cost; data sovereignty and compliance.
  • It won’t be cloud versus edge; it will be cloud with edge. As massive amounts of data are created outside the traditional data center, the cloud will extend to the edge.
  • Centralized cloud computing will persist, though edge computing will create radically new ways in which we create and act upon data, creating new markets and unlocking new value.

Be sure to get your free copy.

About State of the Edge reports

State of the Edge reports are produced and funded collaboratively by a growing coalition of edge computing companies, with an explicit goal of producing original research without vendor bias and involving a diverse set of stakeholders. Supported by member funds and a community-driven philosophy, the State of the Edge mission is to accelerate the edge computing industry by developing free, shareable research that can be used by all. State of the Edge reports are made available under a Creative Commons 4.0 license, which allows materials to be shared free of charge, encouraging the widest possible distribution.

An Open Source Ethos

Since its launch, State of the Edge has made significant open source contributions, including the Open Glossary of Edge Computing, which is now officially housed under The Linux Foundation's LF Edge group, dedicated to edge computing projects. Today the organization is announcing the donation of the Edge Computing Landscape Map to The Linux Foundation, to be led by the Open Glossary project.

How to Join

State of the Edge is open to any company in the edge computing ecosystem. Companies may inquire about membership by emailing state.of.the.edge@gmail.com. Membership is $7,500 per year for most companies, with a special discount for startups with 30 or fewer employees ($2,500/year).


Hey, Mobile Operators: You Need to Move Faster and Be More Nimble!


Peter Christy, a former 451 analyst, analyzes the value of agility and suggests that mobile operators could benefit by moving fast like cloud operators do.

Note: This is a guest post from an industry expert. The State of the Edge blog welcomes diverse opinions from industry practitioners, analysts, and researchers, highlighting thought leadership in all areas of edge computing and adjacent technologies. If you’d like to propose an article, please see our Submission Guidelines.

Most of today's "edge" discussions are aspirational, often presented in the context of 5G and a future world of wonderful things to come. While many of the most exciting edge applications are still a long way off, this kind of thinking misses two critical points: (1) the edge can deliver important commercial benefits now, on today's 4G LTE infrastructure, without waiting for 5G; and (2) for mobile operators, the race to the edge is about more than just the edge; it's the future of the cloud. The cloud is going to consume the edge, and if operators don't act quickly, they will lose their opportunity.

The cloud moves uncomfortably fast, and it won't wait for full 5G. Mobile operators need to embrace the edge now. Those that don't risk missing out on the next generation of the cloud and the internet.

Telcos Meet the Cloud at the Edge

Over the last 10 years, as an industry analyst and consultant, I’ve watched the emerging public cloud change nearly everything in IT, with profound and often painful implications for the incumbent vendors. Many of these legacy vendors believed their position in the commercial IT ecology was pretty secure, until it wasn’t. It’s now time for mobile operators to face similar forces of change: the success of the 4G/LTE buildout has made the global mobile infrastructure a key part of modern IT, which certainly wasn’t the case ten years ago. Today’s cellular wireless systems are now a part of the cloud, like it or not.

The cloud and mobile infrastructure meet, not surprisingly, at the “edge”. Is the edge just the cloud deployed in a more distributed form, or is the edge an important new part of the mobile infrastructure? Are mobile operators at risk of cloud disruption like the IT incumbents before them?

The cloud evolves at a very different and frightening pace compared to traditional telecom—Amazon Web Services is a highly-profitable, $30 billion run-rate business, growing at 45% year over year. While those numbers are still small by the standards of the global mobile industry, that will quickly change with exponential growth. If mobile operators wait another three years to figure out how they become part of the cloud, AWS might be a $100 billion per year company; does that sound more worrisome?

Having worked in the area of cloud, telecommunications and IT for a while, I think it’s best to treat the mobile edge (or cloud, whatever you choose to call it) as new and different territory, with a land grab going on, with very aggressive real estate developers sniffing around.

Assume for the moment that the edge is the beginning of a much larger melding of cloud and mobile. If this is true—and I posit it is—the stakes are existentially high. If the telcos wait until it’s obvious how this is going to sort out, they’ll be much too late to be one of the winners. So, my advice to operators is to get engaged now, because of the likely strategic impact, because of the emerging businesses enabled by the edge, and because of how the edge can benefit their businesses now.

Perspective

While a lot of edge discussion to date has been aspirational in nature (great things to come, at some point in the future), the edge can also make a telco's business better now. To explain what I mean, I'm going to focus on two potential benefits of the mobile edge, business agility and edge bandwidth, neither of which you may have heard much about before. Neither depends on 5G. Both can make the business better now. I'll cover the first, business agility, in this blog, and the second, the value of edge bandwidth, in a subsequent post.

For an analyst or watcher, the mobile edge is fascinating because it's the forced marriage of the global cellular infrastructure with the cloud and the internet. Talk about different cultures. One of the biggest differences is in speed and cadence. Mobile operators move with the ponderous grace of the national telephony monopolies they used to be, with technology generations carefully designed and standardized, then rolled out at global scale over many years.

The cloud moves at, well, cloud speed, something frighteningly fast for everyone else. So when cloud meets mobile, it's like a basketball game between a team that likes to run and one that doesn't: whichever team sets the pace of the game has a clear advantage. In the cloud-versus-mobile game there is a lot at stake, starting with the transformation of IT and communications as we have known them; and if it plays out like the previous generation of IT displaced by the cloud, there may not be many assured franchises. I don't see the cloud slowing down, so logically mobile operators need to speed up, making decisions and executing new initiatives faster than they are used to, or they won't like the consequences.

Agility (“able to move quickly and easily”)

One of the relatively undiscussed values of a telco edge cloud is the impact it can have on telco system and application agility (speed of development) and, in turn, on telco business agility. The missing point is that an edge cloud can be used for internal application development as well as offered to others for rent.

The internet and the cloud have redefined business, commerce, and governance by enabling businesses to reach global markets and by enabling new forms of business structures, including innovative supply and delivery chains. In the past, new business structures were grown organically by existing companies, necessarily a slow and deliberate process. Going forward, new structures can be built through the network interconnection and collaboration of existing businesses, which can happen at, well, cloud speed. Incumbent businesses can't rest on their laurels, because things can change quickly and dramatically; taxis and Uber are just one example.

Business agility is also improving because of how the cloud has changed application and system development. Development cycles no longer take 18 months, followed by customer testing and then customer deployment months or even years later. Instead, there is continuous software development and deployment, with development cycles of weeks or months at the most. Many IT projects are now built using "scrum" development, with short development sprints and system goals that adapt to what is learned and how the market evolves. Software development is now moving at cloud speed.

Why is IT agility such an important factor in business agility? Because, as time goes on, it is increasingly true that a company (or a government, for that matter) is its IT system. Looked at another way: a modern company can't do what its IT system can't support. Mobile operators don't run very agile businesses, certainly not by cloud standards. Now that cloud and mobile are integrating, can that continue? Edge platforms enable mobile operators to compete with OTT solutions, but the competition will probably occur at cloud pace, so the agility offered by an edge platform is an essential part of the solution.

The “How” of IT Agility

Cloud IT development agility results, in part, from the different systems structures used in the cloud, as well as from a whole new development process and methodology. To explain that we have to get software geeky for a moment — sorry.

Application development speedup is enabled by "single image" software systems. Big websites may have many servers (Google Search has millions), but they run a single version of the software, and, to the degree possible, all the servers are exactly the same. When a feature is added, it's added at the same time on all the servers; when a bug is fixed, it's fixed everywhere. If the modified software runs on one server, it's not going to break when run on a differently configured server. There is a single version of the software running as many instances, each on uniform infrastructure.

Modern web and cloud development agility couldn't be more different from the legacy IT model, where each business was encouraged (by self-serving vendors and integrators) to have a unique hardware and software infrastructure: a different system, by design, even to do the same thing. For an application vendor wanting to sell into this market, there were countless subtle and not-so-subtle differences in the platforms their customers used to run the application. If that weren't bad enough, each customer and prospect had an independent strategy and schedule for installing patches and new versions of the myriad software components and subsystems they ran. Customer platforms were all bespoke, unique components, each upgraded on its own schedule, all different in the details that count when it comes to integration and bugs. Compared to a single-instance web system, the legacy ecology is quite literally a support and development nightmare. The complexity meant a lot of effort had to be spent making applications run everywhere and keeping them running everywhere, effort that couldn't be devoted to advancing the application, which in the end is what customers really want, need, and will pay for.

Automation

The other secret to agility is automation. The creators of very large web systems realized early on that they had to remove human dependencies as much as possible; any operational process with manual steps wouldn't scale to tens or hundreds of thousands of servers. While enterprise IT is just now adopting "DevOps" (better tools for the operational teams), the large web and cloud providers talk about what is practically "NoOps," which in practice is quite different: an explicit goal of eliminating human administrators to the maximum degree possible. For every problem that is found and fixed (necessarily a human activity), the site automation is enhanced or repaired so that the problem never occurs again or, if it does, is solved automatically with no human remediation. Problems in complex systems are unavoidable; repeat problems, however, are unacceptable.

Agility Delivers Business Value

Agility wins in any competitive arena, all other things being equal, including online services and applications (SaaS). How can a competitor survive for long if the market leader is intrinsically faster at developing new capabilities and features, and is always ahead of you? Many legacy application vendors have learned this painful lesson when faced with a new web competitor. The legacy provider doesn't go out of business immediately, as it has customers dependent on the systems they've purchased. Instead, these providers face a long and painful erosion of the business unless they can become as agile as the newcomers before it's too late.

Many enterprise IT groups have also learned a painful agility lesson. As business transformation (specifically "digital" transformation) became a common CxO strategy pillar, it created a supporting requirement of IT agility. How can a business be agile, responding to changing conditions quickly and effectively, if its IT system isn't flexible enough to support applications that change at the same pace? More agile IT has become a CEO demand rather than a hope. Most IT groups understandably resisted moving applications to the cloud and tried to create equally agile internal development platforms. Most failed, and when they did, applications moved to the cloud anyway, over their objections.

Mobile Operators and Agility

Let’s get back to the topic at hand—why mobile operators should engage with the cloud and build an edge cloud now rather than waiting. The point is that an edge cloud can be used as an internal development platform to greatly improve the agility with which a mobile operator can respond to market opportunities and challenges.

It's completely understandable why many mobile operators think their franchise is secure; after all, they are descended from earlier incarnations as unassailable national telephone monopolies. When the telephone was introduced commercially toward the end of the 19th century, it changed life for people then as much as the internet has changed people's lives more recently. The same can be said for the mobile phone, and for the introduction of texting roughly a century later. So it's entirely understandable that mobile operators see phones and their services as sitting right at the center of the modern world.

Mobile operators never intended to give up their technology and cultural leadership role, and they hatched great plans for adding media and other services to the mobile phone experience. However, most of those plans never reached fruition, in part because of the laborious processes and lengthy development cycles of the cellular ecology: it's hard to plan innovation so far ahead and get it right, especially if there are other games in town. And there has been a major other game in town ever since AT&T permitted Apple's App Store and brought forth the world of independent mobile applications. Innovations in open-market smartphone applications didn't require advance planning meticulously synced to new infrastructure technology generations; they happened if and when they made sense, and took off like wildfire when they did.

An agile development platform located at the cellular edge and integrated with the global cellular infrastructure gives mobile operators a new way of competing with over-the-top phone/cloud applications, one far more agile than introducing features through the evolution of mobile infrastructure. Consider, for example, virtual and augmented reality headsets. VR has been around for nearly 30 years, with high-volume consumer products perpetually just around the corner, but never predictable or schedulable years in advance. VR and AR are ideal edge applications because of the impact of latency and bandwidth. With edge agility, mobile operators don't have to plan it all years in advance and get it integrated into global standards; they can finally just respond to market developments as they come.

Agility lets mobile operators get back into the game and again drive their subscribers' experience, if they want to. Compared to controlling the experience by deciding which software ran on the phone, at what cost and price, responding with agility is pretty different. To play the game now, mobile operators have to be marketers, discovering and responding to opportunities, not just acting as gatekeepers or waiting patiently for the next generation. Those are new challenges and hard work. But it sure beats letting all those opportunities go to others, over the top, don't you think?

Summary

This is part one of my argument (or rant) about why mobile operators need to respond faster, at cloud speed, and not get mired in the traditional pace of mobile evolution. Mobile operators need to see the edge as the beginning of an interaction between cloud and cellular that may well change cellular profoundly, and they must react accordingly. That's my personal opinion, but even if you don't think that outcome is likely, you need to take it seriously if you believe it's possible (and if you don't think it's possible, review what happened to the big IT incumbents and how that worked out for them).

In a coming blog I'll discuss a second, largely unrecognized value of edge applications: leveraging the high bandwidth to the user or device, not just the lower latency.

Peter Christy is an independent industry analyst and marketing consultant. Peter was Research Director at 451 Research, where he ran the networking service, and before that a founder and partner at Internet Research Group. Peter was one of the first analysts to cover content delivery networks when they emerged, and has tracked and covered network and application acceleration technology and services since. Recently he has been working with MobiledgeX. You can read additional posts by Peter on the State of the Edge blog, including Edge Platforms and The Inevitable Obviousness of the Wireless Edge Cloud.


Mobile Gaming at the Edge


Edge computing promises to deliver better mobile gaming experiences, which could reinvigorate the market for games on mobile devices.

Editor's Note: This is a guest post from an industry expert, adapted from the original version published here. The State of the Edge blog welcomes diverse opinions from industry practitioners, analysts, and researchers, highlighting thought leadership in all areas of edge computing and adjacent technologies. If you'd like to propose an article, please see our Submission Guidelines.

Nearly 30% of the world's population plays video games on their phones, representing a business that exceeds $50 billion worldwide. Yet the growth of mobile games and in-app purchases has plateaued in recent years. This is in part due to a lack of innovation, and in part because gamers now expect seamless, sophisticated gaming scenarios that demand lots of storage (locally and in the cloud) and maximum processing power, which gaming companies have not yet been able to deliver. Current network, storage, and processing limitations have made it difficult to deliver this kind of sophistication for online gaming, virtual reality (VR), and augmented reality (AR) on a mobile or IoT device.

Edge computing, however, promises better gaming experiences by lowering latency and improving accessibility at a more affordable cost to gamers. When workloads run at the edge of the network (instead of being sent to a few centralized locations for processing), data need only travel the minimum necessary distance, reducing associated lag time and enabling more interactive and immersive in-game experiences. Furthermore, edge computing is paving the way for more subscription-based models that could ultimately put some money back in gamers’ pockets by reducing the need for game and hardware investments.

A Better In-Game Experience

Improved Multiplayer Experience
Edge computing boosts the opportunity to serve multiplayer gaming, which is both latency sensitive and bandwidth intensive. By matching gamers by location and placing game servers closer to them, multiplayer latency can reach single-digit milliseconds, which dramatically decreases lag.
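
As a rough sketch of what location-aware matchmaking could look like (the region names, capacities, and round-trip times below are invented for illustration):

```python
# Toy matchmaker: pick the edge region that minimizes the worst player RTT,
# subject to capacity. All names and numbers are illustrative assumptions.
def pick_game_server(players: list[dict], regions: dict) -> str:
    """players: [{'id': ..., 'rtt_ms': {region: rtt}}]; regions: {name: capacity}."""
    best_region, best_worst_rtt = None, float("inf")
    for region, capacity in regions.items():
        if capacity < len(players):
            continue                       # region can't host this match
        worst = max(p["rtt_ms"][region] for p in players)
        if worst < best_worst_rtt:         # minimize the slowest player's RTT
            best_region, best_worst_rtt = region, worst
    return best_region

players = [{"id": "p1", "rtt_ms": {"ams": 8, "sfo": 95}},
           {"id": "p2", "rtt_ms": {"ams": 12, "sfo": 80}}]
print(pick_game_server(players, {"ams": 16, "sfo": 16}))  # -> "ams"
```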

Hatch, a spin-off from Rovio (the mobile gaming company behind Angry Birds), is a Packet customer, like Section, that benefits from Packet's micro data centers deployed in cities, close to users, and from its unique business model in which manufacturers and developers can deploy specialized hardware at Packet's edge data centers. This allows Hatch to quickly update and refresh the 90+ games on its monthly subscription platform as the need arises, ensuring its users get superfast access to the latest developments in their mobile games.

On Packet's infrastructure, Hatch runs low-latency multiplayer game-streaming services for users with low-end Android devices. According to Zachary Smith, CEO of Packet, "[Hatch] needs fairly specialized ARM servers in all these markets around the world. They have customized configurations of our server offering, and we put it in eight global markets across Europe, and soon it will be 20 or 25 markets. It feels like Amazon to them, but they get to run customized hardware in every market in Europe." In theory, Hatch could do the same thing in the public cloud; in practice, the costs would make that an inefficient business model. Smith says, "The difference is between putting 100 users per CPU versus putting 10,000 users per CPU." Smith believes the new model will appeal to the latest generation of developers, who will drive the next set of innovations in software.

Enabling Better VR/AR
A key advantage of edge compute for VR and AR experiences is its ability to reduce the dizziness associated with high latency and slow frame-refresh rates. These can lead to a laggy experience that is frustrating, potentially nausea-inducing, and ultimately disorienting.

AR services need an application to analyze the output from a device's camera and/or a specific location so that a user's experience when visiting a point of interest can be supplemented. The application needs awareness of the user's position and the direction they are looking, provided via the camera view, positioning techniques, or both. Following analysis, the application can offer additional information to the user in real time. As soon as the user moves, that information needs to be refreshed. Hosting the AR service on a Mobile Edge Computing (MEC) platform instead of in the cloud is beneficial because supplementary information about a point of interest is highly localized and frequently irrelevant beyond that particular point of interest. The processing of information from the camera view or user location can also be performed on a MEC server instead of a cloud server, to benefit from the lower latency and higher rate of data processing possible at the edge.
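
The loop just described can be sketched roughly as follows; the class and method names are assumptions standing in for a real MEC API and transport:

```python
# Sketch of the AR loop: the device sends a camera frame plus pose to a
# nearby MEC host and re-queries only when the user moves. All names are
# illustrative; a real system would use HTTP/gRPC and real vision models.
class MecClient:
    """Stand-in for a Mobile Edge Computing endpoint."""
    def analyze(self, image, pose):
        # Edge-side vision runs here; returns overlay data near `pose`.
        return [{"label": "Museum entrance", "x": 0.4, "y": 0.6}]

def ar_step(mec: MecClient, image, pose, last_pose):
    """One iteration: refresh the overlay only when the pose changed."""
    if pose == last_pose:
        return None, last_pose
    overlay = mec.analyze(image, pose)  # short round trip to the MEC host
    return overlay, pose

# Example: first frame always triggers an edge query.
overlay, pose = ar_step(MecClient(), image=b"...", pose=(1.0, 2.0, 90.0), last_pose=None)
print(overlay)
```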


The huge success of the AR game Pokémon Go was largely due to the way it enabled rich user interactions with the real world. Through geotagging and a connection to users' Google data, the app could collect large amounts of data per user, including location, player movement, and internet connectivity.

The game's worldwide success caught Niantic, Pokémon Go's creator, off guard, however, as the company had only a minimal global presence. Server crashes, hacks that invaded user privacy, and various other disruptions ensued, leading to angry venting by users on the web and a slew of bad publicity. It is not confirmed, but likely, that the game's servers were hosted on the Google Cloud Platform, which couldn't handle the unexpectedly high volume of users. Edge computing is an ideal fit for these types of games. By moving processing to the edge, closer to the end user, similar apps could offer a superior user experience with less latency and fewer service disruptions.

Improved Security/Privacy
Privacy challenges were another significant issue with the first iteration of Pokémon Go. Reports of hacking grew in number because the game could access critical pieces of user data, including the camera, contacts, location, and Google account. Edge computing can help overcome this problem as well by keeping processing localized in neighborhood data centers, or on the device itself, rather than sending sensitive data over the network back to the cloud.

Accessibility

The Evolution of Cloud Gaming / Subscription Services
Cloud gaming looked set to catch on and become the future of video gaming back in 2009, when OnLive, the first cloud game-streaming service, launched. At the time, IGN wrote that "this next generation cloud technology could change videogames forever," leading to a time in which "you may never need a high-end PC to play the latest games, or perhaps even ever buy a console again." The service, which at one point received a valuation of $1.8 billion, nonetheless closed down for good only six years later (in April 2015), unloading its patents to Sony along the way.

OnLive was intended to be the simplest iteration of “pick-up-and-play” on the market, with games running on the company’s servers and the video and audio streams compressed for transmission across the Internet to be played in the homes of gamers. The service ran into its first set of challenges in 2012 when it closed after running up $40 million in debt and losing many of its employees. It reopened in 2014, launching as a monthly subscription service, initially for $14.99, a sum which was later reduced to $7.95. The company eventually closed its doors for good the following year as the business was simply unsustainable.

Although the business failed partly because of doubts over its ability to deliver a lag-free experience, latency-free cloud gaming sold via subscription was still a revolutionary idea. The success of other streaming subscription models in this vein, such as Netflix, Hulu, and Spotify, demonstrates the potential for such an idea in gaming. Indeed, newer subscription services such as Sony PlayStation Now and Nvidia's GeForce Now are beginning to gather steam in a way that OnLive never did.

Sony PlayStation Now offers "an instant, ever-changing collection of hundreds of PlayStation games – ready to download on PS4 or stream on PS4 or PC". Last year, Nvidia unveiled a beta version of its new game-streaming service, GeForce Now, for Windows; similarly to OnLive, it offers users access to a library of video games in the cloud in exchange for a monthly subscription fee. A high-end PC is not needed to run the gaming client.

Game-streaming services like Sony PlayStation Now and GeForce Now are placing a lot of faith in edge computing to enable their success. Latency can quickly destroy a user experience; a video game needs to respond to keystrokes. Any command issued must travel over the network in each direction and be processed by the data center fast enough that the gamer feels the game is responding to each keyboard and mouse stroke in real time. The only way to ensure that kind of latency is to place the compute and processing power of the gaming data centers as close as possible to the end user.

In a recent demonstration of the service at AT&T’s Spark conference in San Francisco, Nvidia showed that the demo game, which had a resolution of 1920 by 1080, had only 16 milliseconds of delay between the laptop and AT&T’s data center in Santa Clara using its edge network.
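
A quick back-of-the-envelope check shows why that number matters; the 60 fps target is an assumption for illustration (the article reports only the 16 ms measurement):

```python
# At 60 frames per second, one frame lasts ~16.7 ms, so a 16 ms round trip
# to the edge data center fits within a single frame of delay.
frame_time_ms = 1000 / 60              # ~16.7 ms per frame at an assumed 60 fps
round_trip_ms = 16                     # laptop <-> AT&T Santa Clara edge (demo figure)
print(round_trip_ms <= frame_time_ms)  # True: the edge RTT stays under one frame
```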


Affordability

Reduced Hardware Requirements
One of the great benefits of gaming subscription services for gamers is the way they reduce the need for regular investment in new systems (e.g., a new PC, Sony PlayStation, Xbox, etc.), along with the corresponding need to frequently update those systems and purchase games and the components required to run them, such as graphics cards and processors.

At the AT&T Spark demonstration, Paul Bommarito, vice president of Americas Enterprise Sales for Nvidia, said, "So in the past to get this level of experience, you would need a workstation with a graphics processor costing a few thousand dollars. With GeForce Now and the graphics acceleration taking place in the cloud, you can get that level of beautiful experience on a $200 laptop. I think the best thing is 5G. If you think about that mobility capability of this high-bandwidth, low-latency network, the ability to have this gaming experience anytime, anyplace, anywhere, with GeForce Now on any device, our customers are going to love it."

The Future of Gaming at the Edge

Edge compute makes online gaming more commercially viable than cloud compute ever could. Because low latency is so essential to the success of immersive mobile cloud gaming, as well as to VR and AR, compute frameworks have not been able to match their promise until now. By placing the gathering and processing of large amounts of information at the edge of the network, as close to the user as possible, edge architectures can start to offer the kind of low latency required to make online gaming an ongoing success.

Improved network performance in areas such as delay and packet jitter directly translates to improvements in application performance, including in areas critical to the success of online gaming, such as motion-to-photon latency and frame loss.

As Matt Caulfield, self-identified “edge computing and distributed systems enthusiast”, recently wrote in a post on Medium, “The lower the latency between a game console or PC gaming rig and the backend server, the lower the lag. The rise of competitive gaming suggests that the massive gaming community is willing to pay a premium for a better experience.”

With a subscription-based edge streaming model, gamers will no longer need to regularly purchase updated hardware or software; instead, they will subscribe to an edge-hosted gaming platform they can access from existing devices. Users will be able to connect remotely to a continually evolving library of games while the edge hardware is kept up to date elsewhere.

Perhaps edge computing, with its promise of dramatically lower latencies, will reignite the streaming model in games. At a panel discussion at AT&T's Spark conference, Microsoft Azure's Royeka Jones described edge compute as "the enabler that will allow infinite possibilities around what we can do with technologies."

Molly Wojcik is Marketing Director at Section, a developer-centric, multipurpose edge compute platform that empowers web application engineers to run any workload, anywhere. With over 10 years of experience in digital marketing leadership roles, Molly thrives on bridging the gaps between marketing and engineering teams through data-driven strategy, relatable storytelling, and growth-focused program development.


Bringing Software Developers to the Edge


Edge computing promises better user experiences and greater efficiencies, but without software the edge is just computers. Realizing the full potential of edge needs a catalyst—software developers.

Editor’s Note: This is a guest post from an industry expert. The State of the Edge blog welcomes diverse opinions from industry practitioners, analysts, and researchers, highlighting thought leadership in all areas of edge computing and adjacent technologies. If you’d like to propose an article, please see our Submission Guidelines.

We Need More Conversations About Software at the Edge

I’ve attended several edge-focused conferences over the past year, and I’ve noticed a dramatic absence of software-related conversations. Recently, at Edge Congress in Austin, I found myself in a session where the speaker polled the crowd to see who in the room represented data centers; half of the hands went up. Then he asked about telco; the other half of the hands went up. His final question asked who was representing software; in a session with more than 200 participants, only 3-4 hands went in the air (including mine and my co-founder’s).

To put this in context, a traditional content delivery network might need to run software on 100 data centers in order to cover the entire world’s population with better than 40ms round-trip times. A new class of edge computing applications will demand better than 10ms round-trip times, which may require thousands of data centers at the edge, such as at the base of cell towers. As telcos and data center providers deploy edge data centers, few people are talking about how we actually develop software that runs at scale across an exponentially increasing number of locations on this new infrastructure.

We Must Bring a Software Perspective to the Edge

Every engineer has a unique perspective on what and where the edge is based on their role and the application architecture in which they operate. So, rather than attach a specific definition, the edge is better thought of as a compute continuum. Depending on the scenario, the edge can span from a centralized data center to continental/national/regional data centers, to cell towers, and all the way down to IoT devices (e.g., phones, point-of-sale systems).

Each provider also has their own point of view on what the edge is. Large data center operators often say their network edge is the firewall. If you talk to a CDN provider (e.g., Akamai, Cloudflare), they'll say that the edge is where their servers are. If you talk to a telco, they'll say the same, whether those servers sit in a local central office (CO) or at tower-connected antenna hubs. And for large enterprises, the edge may be the walls of their own data centers.

In reality, the edge isn’t any one of these places. It’s all of them. In order to make this complex landscape useful to developers, we must approach it from a software perspective, building abstractions and systems that allow developers to interact with the edge how and where they need.

What’s Holding us Back?

Aside from a few noteworthy engineering teams, such as those at Netflix and Chick-fil-A, who have taken it upon themselves to build distributed architectures that run innovative workloads at the edge, most edge computing today is still locked into traditional CDN workloads and systems. As more developers look to leverage the benefits of edge computing, they need more flexibility and control than current CDNs can provide.

While many CDN providers are leaning into edge computing, the legacy systems have many deficiencies that are impeding developers from advancing beyond simple caching and other standard optimization techniques. The problems include:

  • Fixed and inflexible networks translate to poor architectural choices.
  • Disparate point solutions and “black box” edge software lead to a slow rate of change.
  • Lack of integration with developer workflows and support for modern DevOps principles creates poor control of the edge.

Developers have absorbed concepts like centralized cloud, agile and DevOps, yet most developers have little experience building highly distributed systems. How can we overcome this deficit by leveraging common practices for faster edge adoption?

Requirements for Empowering Edge Development

In order to empower developers to move sophisticated parts of application logic out of the centralized infrastructure and into a service running on an unknown number of servers, there are some minimum requirements that must be addressed.

  • Local Development. Distributed systems are hard to build. Developers need a true full stack environment that allows them to make and test changes locally before pushing to production. Not only does this reflect standard practices among modern development teams, but it also brings the benefits of faster feedback and risk-free experimentation.
  • Immediate Diagnostics. Developers need comprehensive, real-time insights in order to monitor, diagnose, and optimize systems. This includes transaction traces, logging, and aggregated metrics.
  • Consistent Behavior. Developers need to have confidence in their toolsets, in both usability and performance. In order to deliver platforms that developers adopt, all decisions must come from a developer-first mindset. The complete system must work in dev the same way it works in prod.

Edge Workloads, Components, and Scheduling

Inline (or In-Band) vs. Out-of-Band Workloads

There are two high-level categories of workloads to consider when thinking about the edge from a developer's perspective. Inline (or in-band) workloads are the more basic of the two and can be thought of as synchronous or transactional: a client makes a request, and the system blocks on the response. A good example is an HTTP request or static file delivery.

Things get more sophisticated with the second category, out-of-band workloads, which can be thought of as asynchronous or non-transactional: custom logic at the edge processes data as it is being ingested, off the request path. The computing model changes substantially when out-of-band workloads are introduced at the edge, and this is where the true potential of edge computing starts to take shape.
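
A minimal sketch of the two categories side by side; the handler and worker shapes are illustrative, not any particular edge framework's API:

```python
# Inline vs. out-of-band, sketched. Function and field names are assumptions.
import asyncio

CACHE: dict[str, bytes] = {}

def fetch_from_origin(path: str) -> bytes:
    """Stand-in for a fetch back to the origin server."""
    return b"origin body for " + path.encode()

def handle_request(path: str) -> bytes:
    """Inline (in-band): the client blocks on this; latency is on the critical path."""
    if path not in CACHE:
        CACHE[path] = fetch_from_origin(path)
    return CACHE[path]          # served entirely at the edge on a cache hit

async def ingest_worker(queue: asyncio.Queue) -> None:
    """Out-of-band: processes data as it is ingested, off the request path."""
    while True:
        event = await queue.get()
        summary = {"count": 1, "source": event.get("source")}  # roll up at the edge
        print("ship to origin:", summary)   # stand-in for an async upload
        queue.task_done()
```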

Edge Workload Components

Within these workloads, there are several key components that drive decision-making, both for developers and for those building developer tooling at the edge.

  • Web Servers: Traditional CDN workloads have primarily relied on load balancing and reverse proxies. As edge workloads become more sophisticated, software architects are leveraging networks of containerized microservices.
  • Other Triggers: What many term "serverless functions" have become more common for running logic closer to end devices. This, coupled with edge cron jobs that compile and send only the necessary information back to the origin server, has established the foundation for edge computing. However, as the need for more specialized infrastructure arises, developers are looking to a "serverless for containers" model to run their containerized microservices at the edge without having to worry about the allocation and provisioning of servers near end users.
  • State Management: At the moment, there are a few different state management models discussed at the edge: ephemeral, persistent, and distributed. Distributed state management presents the most interesting challenges for edge computing. For example, a common use case for distributed state at the edge is web application firewalls, where security administrators want to block malicious traffic at every endpoint: as soon as one endpoint detects it, the other endpoints should know about it (see the sketch after this list). For an interesting read on this subject, check out Edge Computing is a Distributed Data Problem.
  • Messaging: Every type of workload running in an edge platform must be able to receive messages via low latency global message delivery. In order to scale this messaging, API extensibility is key.
  • Diagnostics: What a developer fundamentally needs when they’re building these systems is traceability through the entire stack. We need to be able to provide effective mechanisms for developers and operators to be able to go into a system and see what went wrong and where they can optimize.
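
Here is the promised sketch of the distributed WAF example: one node's detection propagates a block to its peers. The node structure and direct peer calls are assumptions; a real system would use replicated storage or gossip rather than synchronous calls:

```python
# Sketch of distributed WAF state: when one edge node blocks an attacker,
# it propagates the rule so every other node blocks it too. Names and the
# in-process "transport" are illustrative assumptions.
class EdgeWafNode:
    def __init__(self, name: str):
        self.name = name
        self.blocklist: set[str] = set()
        self.peers: list["EdgeWafNode"] = []

    def detect_attack(self, ip: str) -> None:
        """Local detection: block here, then propagate to all peers."""
        self.blocklist.add(ip)
        for peer in self.peers:
            peer.receive_block(ip)

    def receive_block(self, ip: str) -> None:
        self.blocklist.add(ip)

# Usage: a block detected at one edge location applies at the other.
a, b = EdgeWafNode("ams"), EdgeWafNode("sfo")
a.peers, b.peers = [b], [a]
a.detect_attack("203.0.113.9")
assert "203.0.113.9" in b.blocklist
```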

Edge Workload Scheduling

One of the biggest topics in edge computing is scheduling. Imagine a future world where every 5G base station has a data center at its base. While there will be a massive amount of compute in these edge data centers, there certainly will not be enough to run every single application in the world at every one of those towers in parallel. We need a system that can optimize workload scheduling to run the right workload in the right place at the right time. This is a very challenging problem that nobody has completely solved yet.

As we work through these challenges, the scheduling models to consider include (a toy scheduler combining them follows the list):

  • Static: This is what we have today with content delivery networks—set locations with pre-determined configurations.
  • Dynamic: Scheduling based on latency or volume thresholds. This is perhaps where the most opportunity lies when it comes to edge computing.
  • Enforcements: Circumstantial scheduling based on geography or sovereignty requirements, as in the case of GDPR, or compliance, such as PCI.
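
Here is the toy scheduler referenced above, combining all three models; every location, latency figure, and flag below is an invented illustration:

```python
# Toy scheduler: enforcement constraints filter first, a dynamic latency
# threshold picks among what remains, and the static list is the fallback.
LOCATIONS = [
    {"name": "frankfurt",     "region": "eu", "rtt_ms": 9,  "pci": True},
    {"name": "newark",        "region": "us", "rtt_ms": 12, "pci": False},
    {"name": "central-cloud", "region": "us", "rtt_ms": 80, "pci": True},
]

def schedule(workload: dict) -> str:
    candidates = LOCATIONS
    if workload.get("gdpr"):                 # enforcement: data sovereignty
        candidates = [l for l in candidates if l["region"] == "eu"]
    if workload.get("pci"):                  # enforcement: compliance
        candidates = [l for l in candidates if l["pci"]]
    fast = [l for l in candidates if l["rtt_ms"] <= workload.get("max_rtt_ms", 100)]
    pool = fast or candidates                # dynamic threshold, static fallback
    return min(pool, key=lambda l: l["rtt_ms"])["name"]

print(schedule({"gdpr": True, "max_rtt_ms": 20}))  # -> "frankfurt"
```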

We Need to Bring DevOps to the Edge

To keep pace with technology, engineers must be able to conduct quick deployments in safe, reliable, and repeatable ways. DevOps and continuous delivery (CD) help support a more responsive and flexible software delivery cycle; DevOps accelerates development cycles, which helps organizations achieve a quicker pace of innovation.

As developers gain more control over the provisioning of IT resources and everyday operations, they require more flexibility, transparency and visibility in their technology stack. In keeping with the developer-first mindset, edge compute software must adhere to these same principles in order for us to continue to charge forward in this new paradigm. As companies deploy hardware into edge data centers, we must similarly advance the software that takes advantage of these new capabilities. Hardware and software must evolve together at the edge.

Daniel Bartholomew is Co-Founder & CTO at Section, a developer-centric, multipurpose edge PaaS solution that empowers web application engineers to run any workload, anywhere. Daniel has spent over twenty years in engineering leadership and technical consulting roles. His vision for a developer-friendly edge platform was born long before the term ‘edge computing’ was coined and has evolved into a pioneering technology that is focused on meeting the needs of today’s developers.


Edge Computing is a Distributed Data Problem

By Blog

We are told that low latency and imagination are the only prerequisites for building tomorrow’s edge applications. Tragically, this is an incomplete and false hope.

Editor’s Note: This is a guest post from an industry expert. The State of the Edge blog welcomes diverse opinions from industry practitioners, analysts, and researchers, highlighting thought leadership in all areas of edge computing and adjacent technologies. If you’d like to propose an article, please see our Submission Guidelines.

In order for robust, world-changing edge native applications to emerge, we must first solve the very thorny problem of bringing stateful data to the edge. Without stateful data, the edge will be doomed to forever being nothing more than a place to execute stateless code that routes requests, redirects traffic or performs simple local calculations via serverless function. This would be the technological equivalent of Leonard Shelby in Christopher Nolan’s excellent movie Memento. Like Shelby, these edge applications would be incapable of remembering anything of significance, forced, instead, to constantly look up state somewhere else (e.g., the centralized cloud) for anything more than the most basic services.

Edge computing is a distributed data problem. It’s more than simply a distributed compute problem; its full power cannot be realized by simply spinning up stateless compute at the edge. Conventional enterprise-grade database systems cannot deliver geo-distributed databases to the edge while also providing strong consistency guarantees. Conventional approaches fail at large, globally distributed scale because our current database architectures are built around the fundamental tenet of centralizing the coordination of state change and data.

If you can take your data (state) and make giant piles of it in one data center, it’s easier to do useful things with it; but if you have little bits of it spread everywhere, you have a horrendous problem of keeping everything consistent and coordinated across all the locations in order to achieve idempotent computing.

Edge compute is easy when it’s stateless or when state is local, such as when a device maintains its own state or the state is trivially partitionable. Take, for example, an IoT device or a mobile phone app that only manages its own state. Similarly, stateful computing is easy when everything is centralized.

However, when you want to perform stateful computing at any of the many places called edge—the network edge, the infrastructure edge, or even the device at the other end of the network—edge computing becomes difficult. How do you manage and coordinate state across a set of edge locations or nodes and synchronize data with consistency guarantees? Without consistency guarantees, applications, devices, and users see different versions of data, which can lead to unreliable applications, data corruption, and data loss. Idempotent computing principles are violated and the edge is dead on arrival.

Centralized database architectures do not generalize to the edge

For the last 20 years, the world has been industrializing the client-server paradigm in giant, centralized hyperscale data centers. And within these clouds, efforts are being made to super-size the database to run globally and across intercity and intercontinental distances. By relaxing data consistency and quality guarantees, it is hoped that the current generation of distributed databases (distributed within a data center) will somehow overcome the laws of physics governing space and time and enable edge computing by becoming geo-distributed, multi-master databases.

Distributed databases that scale out within a datacenter do not cleanly generalize to scaling out across geography and break down under the weight of their design assumptions. Traditional distributed databases depend on the following design assumptions:

  • A reliable, data-center-class local area network
    • Low latency
    • High availability
    • Consistent latency and jitter behavior
    • Very few (or no) network splits
  • Accurate timekeeping using physical clocks and Network Time Protocol (NTP)
    • NTP is good enough for use cases where data ordering is handled across servers within the same rack or data center (NTP slippage is < 1 ms).
  • Consensus mechanisms are good enough, thanks to the low latencies and high availability of the data-center-class LAN.

The design assumptions for a geo distributed database are almost entirely opposite:

  • Unreliable wide area networks
    • High and variable latency, especially at inter-city and intercontinental distances.
    • Dynamic network behavior with topology changes and sporadic partitions.
  • Lossy timekeeping
    • Asymmetric routes cause inter-city and intercontinental clock coordination challenges, resulting in slippage of hundreds of milliseconds across a set of geo-distributed time servers.
    • Atomic clocks may be used to overcome this problem but are prohibitively expensive and complex to operate.
  • Consensus is too expensive and too slow when a large number of participants must coordinate over the internet (see the back-of-the-envelope sketch after this list).
    • Consensus is brittle in that quorum must be centralized and highly available. If network splits (particularly asymmetric splits) occur, managing quorum and getting reliable consensus becomes very challenging.
    • Distant participants slow everyone down, as it takes more time for them to send and receive messages.
    • Adding more participants (i.e., edge locations) adds more overhead and slows consensus down, as more participants need to vote.
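A back-of-the-envelope sketch makes the consensus cost concrete. The RTT figures below are illustrative assumptions, not measurements, but the shape of the result holds: a majority quorum is gated by geography, and adding far-away participants drags the whole round down.

```python
def quorum_latency_ms(rtts_ms):
    # A consensus round needs replies from a majority, so it is gated by
    # the slowest member of the fastest possible majority quorum.
    majority = len(rtts_ms) // 2 + 1
    return sorted(rtts_ms)[majority - 1]

# Five replicas inside one data center (assumed ~0.2 ms RTTs):
print(quorum_latency_ms([0.2, 0.2, 0.2, 0.2, 0.2]))        # 0.2 ms
# Five replicas spread across regions (assumed 2-150 ms RTTs):
print(quorum_latency_ms([2, 35, 35, 150, 150]))            # 35 ms
# Add two more distant edge locations and the round gets slower still:
print(quorum_latency_ms([2, 35, 35, 150, 150, 150, 150]))  # 150 ms
```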

Coordination is difficult because participants in a large, geographically distributed system need to agree that events happen in some temporal order. Mechanisms like quorums are used in conventional distributed systems to implement such coordination. In geo-distributed systems, these coordination mechanisms become the constraining factor in how many participants can take part in, and perform complex behavior across, a network of coordinating nodes. For geo-distributed databases to support edge computing, a coordination-free approach is required, one that minimizes or even eliminates the need for coordination among participating actors.

For edge computing to become a reality, we need geo-distributed databases that can scale across hundreds of locations worldwide yet act in concert to provide a single coherent multi-master database. This in turn requires us to design systems that work on the internet with its unpredictable network topology, use some form of timekeeping that is not lossy, and avoid centralized forms of consensus while still arriving at some shared version of truth in real time.

For stateful edge computing to be viable at scale and able to handle real-world workloads, edge locations need to work together in a way that is coordination-free and able to make forward progress independently even when network partitions occur.
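One well-known family of coordination-free techniques is the CRDT (conflict-free replicated data type). The sketch below is a minimal last-writer-wins register, far simpler than anything an edge native database would ship, but it shows the essential property: replicas accept writes independently, exchange state in any order, and still converge without a central coordinator.

```python
from dataclasses import dataclass

@dataclass
class LWWRegister:
    value: object = None
    timestamp: tuple = (0, "")  # (logical clock, replica id) breaks ties

    def set(self, value, clock, replica_id):
        self.merge(LWWRegister(value, (clock, replica_id)))

    def merge(self, other):
        # Merge is commutative, associative, and idempotent: replicas can
        # exchange state in any order, any number of times, and still agree.
        if other.timestamp > self.timestamp:
            self.value, self.timestamp = other.value, other.timestamp

a, b = LWWRegister(), LWWRegister()
a.set("blue", clock=1, replica_id="sfo")   # written while partitioned
b.set("green", clock=2, replica_id="fra")  # written while partitioned
a.merge(b); b.merge(a)                     # partition heals
assert a.value == b.value == "green"       # both replicas converge
```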

Edge native databases will unlock the promise and potential of edge computing

Edge native databases are geo-distributed, multi-master data platforms capable of supporting (in theory) an unbounded number of edge locations connected across the internet using a coordination-free approach. Additionally, these edge native databases will not require application designers and architects to re-architect or redesign their cloud applications to scale and serve millions of users with hyper-locality at the edge. They will provide multi-region and multi-data-center orchestration of data and code without requiring developers to have any special knowledge of how to design, architect, or build such databases.

Edge native databases are coming. When they arrive, the true power and promise of edge computing will be realized. And once that happens, it will become true that low latency and imagination will be the only prerequisites for building the applications of tomorrow.

Chetan Venkatesh and Durga Gokina are the founders of Macrometa Corporation, a Palo Alto, CA-based company that has built the first edge native geo-distributed database and data services platform. Macrometa is in stealth (for now).

Opinions expressed in this article do not necessarily reflect the opinions of any persons or entities other than the authors.


Crossing the Edge Chasm: Two Essential Problems Wireless Operators Need to Solve Before the Edge Goes Mainstream

By Blog

Geoffrey Moore’s landmark book Crossing the Chasm offers insight into how wireless operators are being challenged to make edge computing mainstream. Read on to understand the gap between what will satisfy innovators and early adopters and what is required to be adopted by the mainstream.

Editor’s Note: This is a guest post from an industry expert. The State of the Edge blog welcomes diverse opinions from industry practitioners, analysts, and researchers, highlighting thought leadership in all areas of edge computing and adjacent technologies. If you’d like to propose an article, please see our Submission Guidelines.

In 1991, Geoffrey Moore introduced the world to Crossing the Chasm, one of the most influential business books (and corresponding ideas) of that decade. In this book, Moore convincingly argues that all new technologies proceed through a predictable technology adoption life cycle, starting with innovators and early adopters and ultimately reaching early majority, late majority and laggards. Moore’s primary contribution, and the focus of his book, is the recognition that most new technologies hit a stall point as they transition from serving innovators and early adopters and seek to expand their solution to also serve the early majority.

There is a large and difficult-to-cross “chasm” that slows and often stalls technology adoption. This is the gap between what will satisfy innovators and early adopters and what is required to be adopted by the mainstream.

Judging from the hype around edge computing, one might conclude that this is an exceptional technology, effortlessly leaping across the chasm and quickly becoming mainstream. Don’t be fooled: it takes more than hype for a technology to cross the chasm. If we’re not careful, we’ll overlook some of the key obstacles to wide scale adoption of edge computing in the belief that they will somehow iron themselves out.

As pointed out in Infrastructure Edge: Beachfront Property for the Mobile Economy, wireless operators have a unique opportunity to leverage their proximity to the last mile network and profit from the explosion of edge services. However, operators also have a reputation for making lofty promises that are rarely delivered. No wonder that, apart from a few forward-leaning operators (AT&T and Deutsche Telekom come to mind), most are sitting on the precipice of the edge, uncertain how best to proceed. The industry must face, head on, the key barriers that keep the edge from going mainstream, acknowledge the challenges ahead, and begin advocating for solutions. In particular, we see two essential problems that must be solved:

  • Developers need a uniform infrastructure to deliver a seamless experience without a lot of bespoke coding and high-complexity operations.
  • Infrastructure owners—and the entire edge computing industry—need to develop efficient unit economics to drive edge computing down the cost curve at scale.

The rest of this article will present these two barriers in detail, as well as offer some ideas for how they may be surmounted.

Infrastructure that can deliver a seamless experience

Today’s developers leverage cloud infrastructure by simply going to one of the main providers (Amazon, Google, Microsoft), selecting a configuration and, a few clicks later, they’re ready to begin pushing code. The developer can be assured that the service will be available and familiar because, irrespective of the region, the major public cloud providers own and operate an extensive infrastructure that has been engineered for conformity. The developer simply needs to focus on developing their application and getting it to the market, resting easy that wherever they have access to the provider of their choice, the application will just work!

Now think about this in the context of the infrastructure edge, with thousands of micro data centers located at the base of cell towers and in wireless aggregation hubs. The most likely outcome will consist of a vast, distributed compute infrastructure, owned not by one single entity (e.g. Amazon or Microsoft) but by several smaller national or regional operators.

We see some promising initiatives, such as Akraino and ETSI-MEC, that hope to provide open source APIs that expedite the development of edge applications. But many of these initiatives are backed by their own vested interest groups, and there is a danger that the proliferation of such groups may fragment the ecosystem at a time when just the opposite is needed. This view is not isolated, with folks such as Axel Clauberg sounding similar warnings in recent months.

While these software-driven efforts show promise, they do not address the underlying structural challenges. For example, you may have one operator with a 3-year-old, CPU-heavy edge infrastructure and another operator with a state-of-the-art GPU configuration. While we might be able to abstract away the underlying software stack variations, how can a developer be sure of rendering the same experience to their end users on top of such heterogeneous computing assets on a global basis?

Solving for a Seamless Infrastructure

Delivering a seamless infrastructure is not something that can be easily solved by operators alone. Most multi-operator initiatives have fizzled out (remember Joyn/RCS?) or have been too slow to be effective in a fast-evolving environment. Solving for seamless infrastructure may require thinking outside the “operator box,” contemplating new business practices, partnerships and models. Here are two ideas:

Engage with the existing cloud providers

Partnering with the large cloud providers may not be appetizing for many, given that operators have long obsessed about owning their control points—but partnering with web giants is indeed a viable option, especially for the smaller players. Engaging with cloud providers could be direct (e.g., deploying your own data centers and standing up an Azure Stack type solution in partnership with Microsoft) or via 3rd party firms such as Vapor IO, which is deploying carrier-neutral data centers that will host equipment from all the major cloud providers. There is money to be made in partnership with cloud providers, albeit one does give up some level of control.

Engage via a neutral entity

An increasingly viable option is for a neutral entity to step in and drive this discussion: one that understands developer concerns and has the ability to drive a uniform approach. A variety of players could fulfill this need. A good example is the operator-founded MobiledgeX, which aims to provide a prescriptive design along with a vendor ecosystem that can deliver solutions based on the types of end applications the operator is open to supporting. Yet another option is to align with players such as Intel and Nvidia, or with large system integrators, as these are all companies that can drive reference designs and implementations.

Driving efficient unit economics

While it is one thing to be able to offer infrastructure edge, it is another to be able to offer it at a compelling price. Looking at current use cases, we see a few that are critically dependent upon edge for functionality—these applications simply will not function without edge infrastructure. However, a large number of use cases can benefit from edge infrastructure but are not dependent upon it. For the former, the sky’s the limit in terms of pricing—the application simply will not work without edge deployments. For the remaining use cases, it comes down to whether it makes economic sense to enhance the experience with edge infrastructure.

The rapid pace at which compute and storage components improve puts a great deal of pressure on infrastructure owners to continuously upgrade their equipment, further complicating the delivery of low-cost unit economics. For example, the performance of Nvidia GPUs has nearly doubled every year since 2015.

Source: https://wccftech.com/nvidia-pascal-gpu-gtc-2015/

Application developers quickly find uses for the increased horsepower. The cloud providers are well aware of this and wield significant technical and financial muscle to ensure that they have the right infrastructure available to support this trend. This is relatively virgin territory for operators, who have experience building out and maintaining infrastructure over a 5-7 year depreciation period (15-20 years for civil infrastructure), not infrastructure that potentially needs replacing every 2 to 3 years. The quick arithmetic below shows how fast that gap compounds.
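Taking the yearly-doubling claim at face value, this illustrative calculation shows how far hardware falls behind over a telecom-style depreciation period:

```python
# If performance roughly doubles each year, gear held for a full
# depreciation cycle ends up far behind the state of the art.
for age_years in (2, 3, 5, 7):
    print(f"{age_years}-year-old hardware: ~{2 ** age_years}x behind")
# 2-year-old: ~4x, 3-year-old: ~8x, 5-year-old: ~32x, 7-year-old: ~128x
```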

Another area where cloud providers have a leg up on operators is in the operation of this infrastructure. Cloud providers have developed deep expertise in designing highly automated, zero-touch systems. All of these factors combine to allow cloud providers to offer computing power at scale and with compelling unit economics. Operators, in contrast, have no track record of being cost-effective cloud providers and depend a great deal upon vendors (many of them with legacy telecom mindsets themselves). You can see some of the challenges as operators struggle to deploy their own internal clouds to support NFV and SDN.

Put two and two together and you may end up with an operator who offers outdated infrastructure at a premium price… You get the picture.

Solving for Efficient Unit Economics

There is unfortunately no easy shortcut to unit cost efficiency. Operators need to take a page from the cloud provider playbook to accelerate the deployment of edge infrastructure, including adding experts from the cloud world to manage their infrastructure. An alternative is to partner with existing cloud providers, adopting risk-sharing business practices and new business models (e.g., revenue share) to align incentives among all parties. Furthermore, operators should consider subsidizing costs at the outset rather than demanding large premium profits from day one. This will allow developers to experiment with edge computing at price points comparable to existing public cloud services.

Conclusion

Unless we can individually or collectively solve for the infrastructure and economic challenges presented above, edge computing may have a difficult time crossing the chasm—or may fall into it!

We do need to convince the developer community of the myriad benefits that the infrastructure edge has to offer. While there are efforts to provide developer-friendly APIs, there is more heavy lifting to be done in terms of offering uniform infrastructure assets at attractive prices. Who knows? These challenges may spawn the next wave of startups aiming to solve this very problem.

Joseph Noronha is a Director at Detecon Inc., Deutsche Telekom’s management consulting division, where he leads the practice in Emerging Technologies and Infrastructure. He has extensive “on the ground” experience with infrastructure players around the world, spanning the Americas, Europe, the Middle East, Africa and Asia. His interests lie in and around next-generation connectivity, IoT, XaaS and, more recently, edge computing, from product conceptualization and ecosystem building to driving and managing the commercial deployment of these services.

Vishal Gupta is a senior executive with strong global expertise in establishing product and business lines, with a focus on introducing innovative technologies and products. His background encompasses both mobile and cloud technologies to address the Edge Compute / 5G / Converged arena. His latest role was Vice President, Sales and Business Development at Qualcomm Datacenter Technologies, Inc.

Opinions expressed in this article do not necessarily reflect the opinions of any persons or entities other than the authors.


Life on the Edge: History Repeats Itself

By Blog

As we watch the pendulum swing between core and edge, where will you place your bets? Lance Crosby, CEO and Chairman of StackPath takes us on a historical journey and offers his perspective on this fundamental shift in the internet.

Editor’s Note: This is a guest post from an industry expert. The State of the Edge blog welcomes diverse opinions from industry practitioners, analysts, and researchers, highlighting thought leadership in all areas of edge computing and adjacent technologies. If you’d like to propose an article, please see our Submission Guidelines.

We’re at the beginning of the next era in computing. It’s entirely new and entirely similar to what we’ve seen before.

The early 1950s through the late 1970s was the age of the mainframe. Led by IBM, “big iron” centralized computing and forced everyone to go to the well for processing and data.

As processing became cheaper and networking technologies became more powerful, workloads no longer needed to live on big iron. In the early 1980s, with the advent of personal computing, computing became decentralized, with end users performing their work directly on their desktops with local software and client systems calling home to server systems when they needed heavy lifting.

Then, in 2005, the public clouds emerged, bringing us back to a centralized computing model. Instead of running on large mainframes, workloads moved to centralized data centers with racks of commodity servers stuffed full of cores, RAM, and storage. The shift enabled developers to consume data centers as a service, and accelerated development from months to weeks, days, and sometimes even hours or minutes.

The pendulum cycle continues. Today we see, again, the emergence of a new decentralized computing model. Starting somewhere in 2015, cloud computing capabilities began moving out from the giant compute and storage farms to the Internet Exchanges (IXs), cell towers, and past the last mile limit. Developers can now work closer to the end user than ever, pushing code that runs right in their customers’ homes, cars, and even the devices on their wrists.

How disruptive will the edge be? Is it just another buzzword? Will it entirely replace today’s cloud, complement it, or just closely resemble it? Does it need to provide developers the same basic Legos of compute and storage, just in smaller, more distributed chunks? What new technologies will it introduce?

Having been here, done that, and got the T-shirt, here are my 2¢ on the lessons we should take from the previous cycles in the industry:

  • Centralized computing in public clouds isn’t going to disappear, at least not for a long while. At a bare minimum, the scale of commodity processing power in the central cloud is too large to displace quickly. The enormous gigawatt data centers out in the farmlands of eastern Washington, remote Iowa, and the Carolinas have more computing capacity in one facility than existed in the whole world just a decade ago.  Millions and millions of physical machines with hundreds of millions of virtual machines and, dare I say, billions of containers?
  • The edge isn’t just hype or a solution looking for a problem. Some argue it’s just a part of the cloud, and pretty much looks and feels just like the rest of it. That’s shortsighted. There are large shifts driving the need for developers to get closer to end users, and large innovations that let them do that—further defining and distinguishing this computing model from the traditional cloud.
  • If you want a glimpse of the future impact of edge computing, just look at what smartphones did to developers and the cloud. Smartphones gave consumers convenience. The world now expects total mobility and access. That has changed the way we build applications and use compute, and has driven much of the cloud revolution. But we’ve only seen the beginning of this change, as smartphone usage and expectations continue to grow and edge computing becomes part of the mix.
  • Placing workloads in IXs is already table stakes. Getting them into 5G C-RANs is the next front to conquer. Edge seekers aspire to sit at the bottom of the groupings of high-speed towers (up to 10Gbps per device), with massive endpoint capacity and line-of-sight connections, to deliver services like never before. If the 5G spec of 1ms latency becomes reality, micro data center and edge compute companies will have to explode to keep up with demand.
  • Add low-earth orbit satellites (LEOS) and the story gets even more interesting. There are currently 12 LEOS companies in a race to fill space with low earth orbit satellites. This isn’t the HughesNet 200ms satellite service from the old days, floating roughly 22,000 miles above the earth. These LEOS sit 1,000-1,200 miles high and deliver internet access in the 30 to 40ms range. Respectable by any current performance metrics. The kicker? It only takes about 1,000 of these puppies to cover the globe with satellite internet access and bring access to billions (that’s billions with a ‘B’) of people who don’t have access today. (A rough speed-of-light check of these figures follows this list.)
  • Developers salivate over the edge and what they can write, the industries they can disrupt, and the ultimate control they can have over the experiences they deliver, from autonomous cars, to smart cities, to smart homes, to smart everything. The benefits cover the entire B2B and B2C spectrum. You will be able to binge watch Netflix faster at higher def, play Fortnite with more friends at higher speeds, and consume whatever content you want anywhere, anytime, at speeds once talked about only in backbone connections. New categories will emerge, new verticals will be built, applications will bifurcate. Your lawn will probably seed, water, and mow itself soon without your intervention because edge computing will allow it to happen.
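The LEOS latency figures hold up against a rough speed-of-light check. The sketch below assumes idealized straight up-and-down paths and ignores processing and queuing delay, so real services sit above these floors (and the physics suggests the remembered GEO figure was, if anything, optimistic):

```python
C_KM_PER_MS = 299_792.458 / 1000  # speed of light: ~300 km per millisecond

def min_rtt_ms(altitude_km: float) -> float:
    # user -> satellite -> ground station and back: four altitude traversals
    return 4 * altitude_km / C_KM_PER_MS

print(f"LEO at ~1,100 mi (~1,770 km): {min_rtt_ms(1770):.0f} ms minimum RTT")
print(f"GEO at ~22,300 mi (~35,786 km): {min_rtt_ms(35786):.0f} ms minimum RTT")
# LEO: ~24 ms, consistent with the quoted 30-40 ms once overhead is added.
# GEO: ~477 ms, which is why geostationary internet always felt slow.
```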

If you need evidence of its value, just consider that the edge revolution has begun in the IXs of the world, the most expensive real estate and power known in the industry. The cost of those resources has been considered worth it by edge seekers looking to lower latency, decentralize processing, geograph-ize (is that a word?) data, step right into users’ worlds, and be a few milliseconds away.

There still are many questions about edge. Some of them we need to answer soon as partners, competitors, frenemies, and everything in between so that we can better manage the vector of this revolution. Other answers will just emerge, as our customers use their wallets to vote for what they do and don’t want.

We all know one thing for sure: The only constant in IT is change. We see from the past that the industry goes through a cycle of introducing centralized capabilities and then decentralizing them. Wash, rinse, repeat. I’ve placed my bets that the decentralization brought about by edge will be monumental, both as an extension of the current cloud computing model and as a model of its own. And I’m keeping an eye out for what’s past the edge.

Lance Crosby is CEO and Chairman of StackPath. Prior to StackPath, he was Chairman and CEO of SoftLayer, which became the foundation of IBM’s cloud. StackPath is a platform of secure edge services that enables developers to protect, accelerate, and innovate cloud properties ranging from websites to media delivery and IoT services. More than one million customers, including early-stage and Fortune 100 companies, use StackPath services. StackPath is headquartered in Dallas and has offices across the U.S. and around the world. For more information, visit www.StackPath.com, and follow StackPath at www.fb.com/stackpathllc and www.twitter.com/stackpath.

Opinions expressed in this article do not necessarily reflect the opinions of any person or entity other than the author.