
If You Want to Understand the Edge, Just Look at Your Phone


By Peter Christy

Independent Analyst


The last decade has seen a remarkable and rapid transformation of consumer and enterprise IT alike, triggered by the introduction of the smartphone and fueled by the growth of the public cloud and broadband wireless connectivity. 

Technologists tend to view the last decade’s evolution from an infrastructure perspective. Because we see the vast amounts of compute, storage and networking resources that come into play to deliver the services on our devices, we often emphasize the back-end infrastructures that power our apps. We think largely in terms of the servers and pipes that deliver the internet, and not so much about the devices that connect to them. 

But there is another perspective to explore: the way users think when they aren’t, like us, infrastructure experts. For them, especially younger users (millennials), the Internet and cloud are only interesting if available from their phone.

From the phone in, the edge cloud looks very different: it isn’t the last thing you see on the way out from the application; instead it’s the first thing you see looking in. 

Thinking about our platforms from the device in, not the cloud out, creates a new perspective. Rather than seeing the cloud as the progenitor of the device, we see the device as the driver of cloud. By starting with what already runs on the device, then extending it with an edge cloud, we open up an entirely new class of applications, ranging from augmented and virtual reality to AI-driven IoT and autonomous robotics. 

These new applications will begin with the capabilities of the device, but leverage low-latency network connections to an edge cloud to augment the device and supplement the experience. For example, a local search can be performed using augmented reality, where having the detailed local context and rendering the augmentation atop it is the sine qua non of the application—and all of that will happen on the device.

Consider, also, issues of security and privacy. Privacy is more tractable on the device, especially if the phone platform is trusted and the applications vetted. Apple’s new credit card makes this point: Apple never knows what the card holder is buying or from where; the details of the transactions are saved on the card user’s phone, but are inaccessible to Apple. As Apple points out, given the architecture, they couldn’t sell your purchase history to anyone even if they wanted to because they can’t even see it. 

The edge of the Internet can be made secure and private even though the Internet as a whole is anonymous and spoofable. The user is well-known at the edge, and edge network domains can be isolated and protected from the Internet at large. If the edge access provider knows who the user is, and where they are, then they can also assure that group, national and regional regulations are applied transparently (and hence complied with)—a problem that is very challenging when attacked in the cloud writ large.

Finally, it’s worth touching briefly on the remarkable and quite counter-intuitive nature of the modern smartphone, and all the derived computer-based devices like drones that re-use phone technology.  

We’re used to a hierarchy of computers, where the server is more powerful than the desktop PC; the desktop PC is more powerful than the laptop; and the laptop is more powerful than the handheld device. The most expensive computer is the most powerful, right? Not so fast! That’s often no longer true. Manufacturers build smartphones in such high volumes (over 1.5 billion last year) that they can define and dictate the components they use. Server and PC designers have to use what’s available. And smartphone refresh cycles are so frequent and lucrative that the largest vendors (e.g., Apple and Samsung) can design anything that is technologically feasible into a new phone and manufacture it using the most modern semiconductor process.

Because of this strange inversion, smartphone device capabilities can often far exceed the capabilities of a typical server for specific applications. For example, the custom hardware on the iPhone 11 makes the phone capable of photography and facial recognition tasks that put most servers to shame. For these applications, the smartphone is many times more powerful than a typical server. Although AR-optimized phones haven’t been released yet, it’s safe to assume the same will be true.

While it’s reasonable to assume that the power of an application comes mostly from a server-based backend, this is not always the case. For many phone applications, much or most of the power is in the phone, as counter-intuitive as that may be.

So, next time you’re trying to understand how you might use the edge cloud, make sure to think about it outside-in and not just from the cloud heading out, like all those around you on the street are — heads tilted down. I think you may be surprised by the difference.



Hey, Mobile Operators: You Need to Move Faster and Be More Nimble!


Peter Christy, a former 451 analyst, analyzes the value of agility and suggests that mobile operators could benefit by moving fast like cloud operators do.

Note: This is a guest post from an industry expert. The State of the Edge blog welcomes diverse opinions from industry practitioners, analysts, and researchers, highlighting thought leadership in all areas of edge computing and adjacent technologies. If you’d like to propose an article, please see our Submission Guidelines.

Most of today’s “edge” discussions are aspirational, and are often presented in the context of 5G—the future world of wonderful things to come. While many of the most exciting edge applications are a ways off in the future, this kind of thinking misses two critical points: (1) the edge can deliver important commercial benefits now, on today’s 4G LTE infrastructure, without having to wait for 5G; and (2) for mobile operators the race to the edge is more than just edge—it’s actually the future of the cloud. The cloud is going to consume the edge, and if the operators don’t act quickly they are going to lose their opportunity.

The cloud moves uncomfortably fast, and it won’t wait for the full 5G. The mobile operators need to embrace the edge now. Those that don’t will risk missing out on the next generation of cloud and the internet.

Telcos Meet the Cloud at the Edge

Over the last 10 years, as an industry analyst and consultant, I’ve watched the emerging public cloud change nearly everything in IT, with profound and often painful implications for the incumbent vendors. Many of these legacy vendors believed their position in the commercial IT ecology was pretty secure, until it wasn’t. It’s now time for mobile operators to face similar forces of change: the success of the 4G/LTE buildout has made the global mobile infrastructure a key part of modern IT, which certainly wasn’t the case ten years ago. Today’s cellular wireless systems are now a part of the cloud, like it or not.

The cloud and mobile infrastructure meet, not surprisingly, at the “edge”. Is the edge just the cloud deployed in a more distributed form, or is the edge an important new part of the mobile infrastructure? Are mobile operators at risk of cloud disruption like the IT incumbents before them?

The cloud evolves at a very different and frightening pace compared to traditional telecom—Amazon Web Services is a highly-profitable, $30 billion run-rate business, growing at 45% year over year. While those numbers are still small by the standards of the global mobile industry, that will quickly change with exponential growth. If mobile operators wait another three years to figure out how they become part of the cloud, AWS might be a $100 billion per year company; does that sound more worrisome?

Having worked in the area of cloud, telecommunications and IT for a while, I think it’s best to treat the mobile edge (or cloud, whatever you choose to call it) as new and different territory, with a land grab going on and very aggressive real estate developers sniffing around.

Assume for the moment that the edge is the beginning of a much larger melding of cloud and mobile. If this is true—and I posit it is—the stakes are existentially high. If the telcos wait until it’s obvious how this is going to sort out, they’ll be much too late to be one of the winners. So, my advice to operators is to get engaged now, because of the likely strategic impact, because of the emerging businesses enabled by the edge, and because of how the edge can benefit their businesses now.


While a lot of edge discussion to date has been aspirational in nature—great things to come, at some point in the future—the edge can also make a telco’s business better now. To explain what I mean, I’m going to focus on two potential benefits of the mobile edge—business agility and edge bandwidth—both of which you may not have heard much about before. Neither depends on 5G. Both can make the business better now. I’m going to talk about the first—business agility—in this blog, and the second—the value of edge bandwidth—in a subsequent post.

For an analyst or watcher, the mobile edge is fascinating because it’s the forced marriage of the global cellular infrastructure with the cloud and the Internet. Talk about different cultures. One of the biggest differences is in speed and cadence. Mobile operators move with the ponderous grace of the national telephony monopolies they used to be, with technology generations carefully designed and standardized and then rolled out at global scale over many years.

The cloud moves at, well, cloud speed, something frighteningly fast for everyone else. So when cloud meets mobile it’s like a basketball game between a team that likes to run and one that doesn’t. Whichever team defines the pace of the game has a clear advantage. In the cloud versus mobile game there is a lot at stake—the transformation of IT and communications as we have known it, just to start; and if it’s like the previous generation of IT that was displaced by cloud, there may not be a lot of assured franchises.  I don’t see the cloud slowing down, so logically it suggests that mobile operators need to speed up, and make decisions and execute new initiatives faster than they are used to, or they won’t like the consequences.

Agility (“able to move quickly and easily”)

One of the relatively undiscussed values of a telco edge cloud is the impact it can have on telco system and application agility (speed of development) and, in turn, on telco business agility. The missing point is that an edge cloud can be used for internal application development as well as offered to others for rent.

The internet and the cloud have redefined business, commerce, and governance by enabling businesses that reach global markets, and enabling new forms of business structures including innovative supply and delivery chains. In the past, new business structures were grown organically by existing companies, necessarily a slow and deliberate process. Going forward, new structures can be built by the network interconnection and collaboration of existing businesses, which can happen at, well, cloud speed. So incumbent businesses can’t rest on their laurels because things can change quickly and dramatically; taxis and Uber are just one example.

Business agility is also improving because of how the cloud has changed application and system development. Development cycles no longer take 18 months, followed by customer testing and then customer deployment months or even years later. Instead, there is continuous software development and deployment, with development cycles of weeks or months at the most. Many IT projects are now built using “scrum” development, with short development sprints and system goals that adapt to what is learned and how the market evolves. Software development is now moving at cloud speed.

Why is IT agility such an important factor in business agility? Because, as time goes on, it is increasingly true that a company (or government, for that matter) is its IT system. Looked at a different way: a modern company can’t do what its IT system can’t support. Mobile operators don’t run very agile businesses, certainly not by cloud standards. Now that cloud and mobile are integrating, can that continue to be true? Edge platforms enable mobile operators to compete with over-the-top (OTT) solutions, but the competition will probably occur at cloud pace, so the agility offered by an edge platform is an essential part of the solution.

The “How” of IT Agility

Cloud IT development agility results, in part, from the different systems structures used in the cloud, as well as from a whole new development process and methodology. To explain that we have to get software geeky for a moment — sorry.

Application development speedup is enabled by “single image” software systems. Big websites may have many servers (Google Search has millions), but they run a single version of software and, to the degree possible, all the servers are exactly the same. When a feature is added, it’s added at the same time on all the servers; when a bug is fixed, it’s fixed everywhere. If the modified software runs on one server, it’s not going to break when run on a differently configured server. There is a single version of the software running as many instances, each on uniform infrastructure.
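To make the contrast concrete, the “single image” idea can be sketched in a few lines of Python. This is only a toy illustration; `Fleet` and `deploy` are invented names, not any real deployment API:

```python
class Fleet:
    """A set of uniform servers that always run the same software version."""

    def __init__(self, size):
        self.version = "1.0.0"
        self.instances = [self.version] * size

    def deploy(self, new_version):
        # A release updates every instance together; there is never a mix
        # of configurations for a bug to hide in.
        self.version = new_version
        self.instances = [new_version] * len(self.instances)

fleet = Fleet(size=10_000)
fleet.deploy("1.1.0")
# Every instance now runs the same version on uniform infrastructure.
assert all(v == "1.1.0" for v in fleet.instances)
```

The legacy model described below is the opposite: every customer is its own `Fleet` with a different version and configuration, so nothing tested in one place is guaranteed to work in another.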

Modern web and cloud development agility couldn’t be more different from the legacy IT model, where each business was encouraged (by self-serving vendors and integrators) to have a unique hardware and software infrastructure—a different system, by design, even to do the same thing. For an application vendor wanting to sell into this market, there was an uncountable number of subtle and not-so-subtle differences in the platforms that their customers used to run the application. If that wasn’t bad enough, each customer and prospect had an independent strategy and schedule for installing patches and new versions of the myriad software components and subsystems they ran. So customer platforms were all bespoke, unique components, each upgraded on a unique schedule—all different in the details that count when it comes to integration and bugs. Compared to a single-instance web system, the legacy ecology is quite literally a support and development nightmare. The complexity meant a lot of effort had to be spent making applications run everywhere and keeping them running everywhere, effort that couldn’t be devoted to advancing the application, which in the end is what customers really want and need, and will pay for.


The other secret to agility is automation. The creators of very large web systems realized early on that they had to remove human dependencies as much as possible—any operational process that had manual steps wouldn’t scale to tens or hundreds of thousands of servers. While enterprise IT is just now adopting “DevOps”—better tools for the operational teams—the large web and cloud providers practice what is effectively “NoOps,” which is quite different: an explicit goal of eliminating human administrators to the maximum degree possible. For every problem that is found and fixed (necessarily a human activity), the site automation is enhanced or repaired so that the problem never occurs again or, if it does, it is solved automatically with no human remediation. Problems in complex systems are unavoidable; repeat problems, however, are unacceptable.
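That “fix it once, then automate it” principle can be sketched as a tiny remediation loop. All of the names and problems here are invented for illustration; real systems (Google’s SRE practice, for instance) are vastly more elaborate:

```python
# Playbook of known problems and their automated fixes.
playbook = {
    "disk_full": lambda host: f"pruned old logs on {host}",
    "process_hung": lambda host: f"restarted service on {host}",
}

human_pages = []

def page_human(problem, host):
    # Stand-in for a human operator diagnosing a novel problem and
    # writing a remediation; this is the slow, manual path.
    human_pages.append(problem)
    return lambda h: f"cleared cert cache on {h}"

def handle_alert(problem, host):
    if problem in playbook:
        return playbook[problem](host)   # known problem: fixed automatically
    fix = page_human(problem, host)      # novel problem: a human fixes it once...
    playbook[problem] = fix              # ...and the fix becomes automation
    return fix(host)

handle_alert("cert_expired", "edge-042")  # pages a human the first time
handle_alert("cert_expired", "edge-043")  # handled automatically thereafter
print(len(human_pages))  # 1
```

Each incident grows the playbook, so the marginal human cost of operating the next hundred thousand servers trends toward zero.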

Agility Delivers Business Value

Agility wins in any competitive arena, all other things being equal, including online services and applications (SaaS). How can a competitor survive for long if the market leader is intrinsically faster at developing new capabilities and features, and is always ahead? Many legacy application vendors have learned this painful lesson when faced with a new web competitor. The legacy provider doesn’t go out of business immediately, as they have customers dependent on the systems they’ve purchased. Instead, these providers face a long and painful erosion of the business unless they can become as agile as the newcomers before it’s too late.

Many enterprise IT groups have also learned a painful agility lesson. As business transformation (specifically “digital” transformation) became a common CxO strategy pillar, it created a supporting requirement of IT agility. How can a business be agile—respond to changing conditions quickly and effectively—if the IT system isn’t flexible enough to support applications that change at the same pace? More agile IT has become a CEO demand rather than a hope. Most IT groups understandably resisted moving applications to the cloud and tried to create equally agile internal development platforms. Most failed, and when they did, applications moved to the cloud anyway, over their objections.

Mobile Operators and Agility

Let’s get back to the topic at hand—why mobile operators should engage with the cloud and build an edge cloud now rather than waiting. The point is that an edge cloud can be used as an internal development platform to greatly improve the agility with which a mobile operator can respond to market opportunities and challenges.

It’s completely understandable why many mobile operators probably think their franchise is secure; after all, they are descended from earlier incarnations as unassailable national telephone monopolies.  When the telephone was introduced commercially toward the end of the 19th Century, it changed life for people then, as much as the Internet has changed people’s lives more recently. The same can be said for the mobile phone, and the introduction of texting, roughly a century later. So it’s entirely understandable that mobile operators tend to see phones and services right in the middle of the modern world.

Mobile operators never intended to give up their technology and cultural leadership role, and they hatched great plans for adding media and other services to the mobile phone experience. However, most of those plans never reached fruition, in part because of the laborious processes and lengthy development cycles of the cellular ecology: it’s hard to plan innovation so far ahead and get it right, especially if there are other games in town. And there has been a major other game in town ever since AT&T permitted the App Store and brought forth the world of independent mobile applications. Innovations in open-market smartphone applications didn’t require advance planning with meticulous syncing to new infrastructure technology generations; they just happened if and when they made sense, and took off like wildfire if they did.

An agile development platform located at the cellular edge and integrated with the global cellular infrastructure gives the mobile operator a new way of competing with over-the-top phone/cloud applications, one that is far more agile than introducing features through the evolution of mobile infrastructure. Consider, for example, virtual and augmented reality headsets. VR has been around for nearly 30 years, with high-volume consumer products always just around the corner, but never predictable or schedulable years in advance. VR and AR are ideal edge applications because of the impact of latency and bandwidth. With edge agility, mobile operators don’t have to plan this all years in advance and get it integrated into global standards; they can finally just respond to market developments as they come.

Agility lets mobile operators get back into the game and again drive their subscribers’ experience, if they want to. Compared to controlling the experience by deciding which software ran on the phone at what cost and price, responding with agility is pretty different. To play the game now, mobile operators have to be marketers, and discover and respond to opportunities, not just act as gatekeepers or wait patiently for the next generation. Those are new challenges and hard work. But it sure is better than letting all those opportunities go to others, over the top, don’t you think?


This is part one of my argument (or rant) about why mobile operators need to respond faster—at cloud speed—and not get mired down in traditional mobile evolution speed. Mobile operators need to see the edge as the beginning of an interaction between cloud and cellular that may well change cellular profoundly, and they must react accordingly. That’s my personal opinion, but even if you don’t think that outcome is likely, you need to take it seriously if you believe it’s possible (if you don’t think it’s possible, review what happened to big IT incumbents and how that worked out for them).

In a coming blog I’ll talk about a second, largely unrecognized value of edge applications—leveraging the high bandwidth to the user and device (not just the lower latency).

Peter Christy is an independent industry analyst and marketing consultant. Peter was Research Director at 451 Research and ran the networking service earlier, and before that a founder and partner at Internet Research Group. Peter was one of the first analysts to cover content delivery networks when they emerged, and has tracked and covered network and application acceleration technology and services since. Recently he has been working with MobiledgeX. You can read additional posts by Peter on the State of the Edge blog, including Edge Platforms and The Inevitable Obviousness of the Wireless Edge Cloud.


The Inevitable Obviousness of the Wireless Edge Cloud


Peter Christy, a former 451 analyst, asks the question: Is a wireless edge cloud a bold new wave of computing, or just the obvious?


Thirty-five years ago, if a science-fiction writer extrapolated the future from an IBM PC connected to a timesharing system by a 1200 baud modem, he or she might have envisioned today’s wireless Internet. Armed with a rudimentary understanding of Moore’s “Law” (2X improvements every two years), it wouldn’t be that far-fetched to envision a powerful, wireless, battery-powered handheld computer intimately attached to a rich set of remote (cloud) resources. It’s even less astounding that we live in that world today, considering technology has improved by a factor of roughly 2¹⁷ (well over 100,000 times) in that time period. For an imaginative free spirit, today’s smartphone would have been pretty obvious. It’s only complex when you know the history.
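Assuming, as above, one doubling every two years sustained over 35 years, the arithmetic can be checked directly:

```python
# Moore's-law arithmetic: one doubling every two years over 35 years.
years = 35
doubling_period = 2

doublings = years / doubling_period  # 17.5 doublings
factor = 2 ** doublings              # total improvement factor

print(round(doublings, 1))  # 17.5
print(f"{factor:,.0f}")     # roughly 185,000
```

However you round it, the improvement is on the order of a hundred thousand-fold, which is the whole point: extrapolation alone gets you most of the way to today’s smartphone.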

A Convoluted Path to the Internet

The precursor to the modern Internet was born in 1969 as the ARPANET, a Department of Defense computer science research project that connected three research centers in California and one in Utah via 56Kb “backbone” links. The engineers who designed that network were solving a military problem—building a telephony network that could survive a nuclear war—but they ended up creating the understructure for today’s internet, though it would take another 20 years. Fiber optic communication took a decade to arrive. The IBM PC, the ancestor of the smartphone, didn’t show up until 1981, and the Mac in 1984. The World Wide Web didn’t come until 1990, over two decades after those four research centers were connected. The iPhone — the personal computer that we always wanted — wasn’t announced until nearly 40 years after the ARPANET.

And then came the wireless internet, for which the iPhone was the turning point. Demand created by iPhone users drove the buildout of the 4G/LTE network, and that happened only in the last decade, roughly 45 years after the ARPANET. This is the convoluted and time-consuming history that gave us today’s ubiquitous, high-bandwidth, cost-effective wireless Internet.

The path to today’s internet might be labyrinthine, but wasn’t it obvious this is what we wanted all along?

The Emergence of Cloud Computing

Timesharing systems—multi-user systems that provided computing services without each user having to buy and operate a computer—first showed up in 1964 (the Dartmouth Time Sharing System), five years before the Internet. The timesharing computers occupied entire rooms. They were expensive, cumbersome, and few and far between, so we let many users share them from remote locations by connecting to them with “dumb” terminals using voice communication links with modems.

As computers got more powerful and cheaper, we momentarily stopped sharing them as we all got our own “personal” computer that sat on our desk. Many of the early PCs (the Apple II, IBM PC) were often not even connected to a network—files and data were shared on floppy disks. Even when there was a “network,” it was typically to support file sharing. Sophisticated businesses would implement centralized storage on a “server,” and applications that were shared or needed bigger systems started appearing on those servers as well.

These servers, as they were called, became increasingly difficult to operate, and even small businesses had to start hiring IT experts to maintain even the simplest of systems. As computers got cheaper and cheaper, the complexity and cost of running them grew, and the desirability of using a managed computing service increased. As companies became comfortable “outsourcing” their servers to third parties, the door to the cloud was opened.

VMware introduced robust virtualization in 2001 that let disparate software workloads share the same hardware, making these new centralized servers look a lot like the old timesharing systems, only running modern applications. Virtualization became the definitive way to share common infrastructure while maintaining security between clients, which paved the way for the massive shift from on-premises servers to what we now call cloud computing.

The seminal event (the “iPhone” of cloud computing) was Amazon’s unveiling of Amazon Web Services (AWS) in 2006. AWS offered virtual machines as an on-demand, pay-as-you-go service. All of a sudden, the distinction between timesharing and having your own server essentially disappeared. Anything you could do on your own server you could do on AWS, without buying and operating the computer.

The Obvious Arrival of the Wireless Edge

Today, the number of mobile devices exceeds the population of the planet. With the advent of 5G mobile services and the accelerating demand for low-latency clouds, we’re seeing a next-generation wireless edge cloud emerge.

Operational automation became the final missing link required to make edge cloud computing possible. The hyperscale cloud providers all realized they had to reduce the human role in their operations. First, the only way to run massively scaled systems with high availability is to eliminate, or at least mitigate, the possibility of human error. Second, humans managing hundreds of thousands of servers in a data center would be not only unwieldy but slow and error-prone. The major cloud service providers all adopted what could be called “NoOps” strategies (in contrast to DevOps; Google’s Site Reliability Engineering offers a documented example). Edge cloud computing, comprising thousands of small data centers housing resources in unmanned locations, requires automated deployment and operation, which will evolve naturally out of the large-scale automation already developed.

The goal of edge computing is to maximize the performance of “cloud” (on-demand, managed services) by locating key resources so they can be accessed via the Internet with minimum latency and jitter and maximum bandwidth. In other words, to provide services that are as close as possible to what you could do with a local server, without having to buy or operate the server. As was the case with the wireless Internet, it took a lot of hard work and serial invention to get to where we are today with edge cloud computing. But as was the case with networking, the answer is obvious — it’s what you would want and expect if you didn’t know how hard it was to create it.

As cloud providers and companies like MobiledgeX provide managed services for placing workloads out at the edge of the wireless network, a wireless edge cloud becomes the natural outcome. The wireless edge cloud will bring all the conveniences of cloud computing to the edge of the network, enabling the next-generation of wireless applications, including mobile AR/VR, autonomous vehicles and large-scale IoT.

Obvious, right?

Peter Christy is an independent industry analyst and marketing consultant. Peter was Research Director at 451 Research and ran the networking service earlier, and before that a founder and partner at Internet Research Group. Peter was one of the first analysts to cover content delivery networks when they emerged, and tracked and covered network and application acceleration technology and services since. Recently he has worked with MobiledgeX, a Deutsche Telekom funded, Silicon Valley located startup that is building an edge platform. His first post on the State of the Edge blog was Edge Platforms.

Opinions expressed in this article do not necessarily reflect the opinions of any person or entity other than the author.


Edge Platforms


Peter Christy, former analyst at 451 Research, helps us find the edge and understand the platforms that will make it more accessible.

Editor’s Note: This is a guest post from an industry expert. The State of the Edge blog welcomes diverse opinions from industry practitioners, analysts, and researchers, highlighting thought leadership in all areas of edge computing and adjacent technologies. If you’d like to propose an article, please see our Submission Guidelines.

As an early CDN analyst, I’ve studied “edge” computing for nearly twenty years. It’s not a new topic, but has become more visible today with the emergence of IoT, Machine Learning and other applications that benefit from services near the device. For those newer to the topic, an obvious question is exactly where is the edge? I wish I could give a simple answer and just point at the location, but I can’t. It’s more complicated (and more interesting) than that. And it’s why I believe that platforms that deploy and manage code at the edge will play an important role.

Where Exactly Is the Edge? It Depends…

From the perspective of the user of an application (person or device), the edge of a network refers to the parts of the network “nearest” to you, as measured by access performance through the network. Whether or not being near the edge makes a difference for your application depends both on the network demands of that application and on the performance of the network. For most applications that run entirely within a data center, the internal network is fast enough, and everything in the data center is adequately “nearby” (of course there are exceptions, like high-performance trading and high-performance grid computing, where the location within the data center matters). The Internet, however, is an entirely different matter, because Internet performance is far more problematic than LAN performance within a data center.

There are some edge applications where the location is clear. If you want to do automatic braking for a car, the application has to run in the car. But what about other applications? Many cloud applications will benefit from services running closer to the edge, but in which exact location should they run? It isn’t obvious, because the choice is necessarily complicated. We can easily envision scenarios where we want to store data and perform computation nearer to a connected device than in one of today’s large cloud data centers (e.g., IoT, machine learning and AI, autonomous systems, and augmented reality systems), and hence where we want to be closer to the edge.

Picking a Location is Like Picking a Hotel in LA — or Even More Difficult

As a way to understand the complexity of picking an edge location for any given workload, consider the analogy of visiting Los Angeles on a trip that combines business and pleasure, and asking an LA friend where you should stay. Your friend couldn’t possibly give you a meaningful answer without asking a few more questions: How are you arriving? Where and when are your meetings? Do you have particular restaurants, museums or performances you want to include? Like the Internet, the Los Angeles area offers an amazingly rich set of resources, and when there isn’t any traffic, all are reasonably accessible. But also like the Internet, the traffic in LA is often anything but perfect, and at the worst times even short trips can take what seems like forever. Your friend also couldn’t give you a good answer without asking about prices and priorities: if you can’t do it all, what is most important? Is the cost of the hotel an issue (is your expense account unlimited, or are you on a government per diem)? Picking the right hotel in LA is interesting and complicated.

Now imagine picking the ideal location for your workload at the edge, in real time, under continuously changing conditions. It’s even harder than picking a hotel, but the issues are quite similar: how valuable is it to execute closer to the attached device (what is the tangible value)? What other applications or services do you need to connect to? How much are you willing to pay if edge computation is more expensive?
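Those three questions can be made concrete as a simple scoring function. The sketch below is purely illustrative — the site names, latency figures, weights and budget are hypothetical assumptions, not anything a real placement engine would hard-code — but it shows how device latency, connectivity to peer services and cost trade off against one another:

```python
from dataclasses import dataclass

@dataclass
class EdgeSite:
    name: str            # illustrative site identifier (hypothetical)
    latency_ms: float    # round-trip latency from the device to this site
    backhaul_ms: float   # latency from this site to peer services/cloud
    cost_per_hour: float # relative cost of running the workload here

def pick_site(sites, latency_value, backhaul_value, budget):
    """Score each candidate site: weight latency to the device and to peer
    services, add cost as a penalty, and skip sites over budget.
    A real engine would re-run this continuously as conditions change."""
    affordable = [s for s in sites if s.cost_per_hour <= budget]
    if not affordable:
        return None
    return min(
        affordable,
        key=lambda s: (latency_value * s.latency_ms
                       + backhaul_value * s.backhaul_ms
                       + s.cost_per_hour),
    )

# Hypothetical candidates: nearer sites are faster but cost more.
sites = [
    EdgeSite("cell-tower-pop", latency_ms=5,  backhaul_ms=40, cost_per_hour=3.0),
    EdgeSite("metro-dc",       latency_ms=15, backhaul_ms=10, cost_per_hour=1.5),
    EdgeSite("regional-cloud", latency_ms=45, backhaul_ms=2,  cost_per_hour=0.5),
]
best = pick_site(sites, latency_value=1.0, backhaul_value=0.5, budget=2.0)
```

With these made-up weights, the nearest site is priced out of budget and the metro data center wins; change the budget or the value placed on device latency and the answer changes, which is exactly why the decision can’t be made once and for all.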

Edge Platforms are an Answer

A critical part of effective application development is focusing your effort where it counts the most — for example, where it provides the most business value or differentiation. Operating systems and cloud platforms are designed to handle all the other tasks, and it makes sense that edge platforms will be a key enabler for edge computing in the same way.

Watch Jason Hoffman, CEO of MobiledgeX, discuss developer-facing services for edge computing.

By and large, edge platforms will complement, and be used in conjunction with, other platforms (e.g., the existing cloud platforms). The edge will be exploited by moving specific application components onto an edge platform or by embedding edge services in an existing application. Some applications will run entirely on the edge as well.

In all cases, an edge platform will discover and manage available edge resources, provide services to deploy and manage customer code running at the edge, provide integration services with other platforms, and presumably provide new services based on new capabilities at the edge, such as integration with the cellular infrastructure. Technology costs have come down far more rapidly than programming costs, so platforms that simplify application development play a key role in ensuring we continue to benefit from cheaper technology by reducing the cost of building applications. It would be very surprising if the same weren’t true for edge computing.

Peter Christy is an independent industry analyst and marketing consultant. Peter was Research Director at 451 Research, where he earlier ran the networking service, and before that a founder and partner at Internet Research Group. Peter was one of the first analysts to cover content delivery networks when they emerged, and he has tracked and covered network and application acceleration technology and services since. Recently he has worked with MobiledgeX, a Deutsche Telekom–funded, Silicon Valley–based startup that is building an edge platform.

Opinions expressed in this article do not necessarily reflect the opinions of any person or entity other than the author.