Postcards from the Edge

And Now the Time Has Come: 5G & the Edge

By Postcards from the Edge

By Geoff Hollingworth

CMO, MobiledgeX


We (the mobile ecosystem, all of us) are at a watershed moment—or at least that’s how I think we’ll see it a few years down the road. We’re starting to discover exactly how and why 5G provides value as we uncover specific applications and use cases.

Does it surprise you to hear that this is only now happening? Did you think all of that was sorted out already, given the 5G evangelism and hype you’ve been hearing for years? I’m personally not surprised, but then I’ve spent almost all of my career deep in the innards of mobile. For what it’s worth, here’s how I think about all of this and why.

First, mobile infrastructure advances in big, complex, long cycles. That’s the penalty we pay for having a global system that works so well, and so transparently—which is only possible because of the breadth and completeness of the underpinning standards (we just produced a paper on this if you’re interested in learning more). That long generational cadence, in turn, leads to the strange marketing and investing dance we’re now in the middle of. Before anything real gets done, you have to have the standard: The standard has to be conceived, fleshed out and ratified (globally) long before the market impact.

But the dance isn’t over when the standard is done. Rather it’s just beginning because we still have to sell the future to mobile operators and get them to invest. The operators want the next generation to be great as much as anyone—their bright future depends on continuing progress and growth. But, at the same time, there has to be a return on their next-generation investment. They aren’t going to mortgage their future (literally) on marketing speculation. They need to “see the meat.”

So the second phase of the dance is the selling of the promise—although it’s (again) a little complicated because the promise of a new generation always evolves over time. If you want an amazing example of how true that can be, consider 4G/LTE. Today it’s pretty obvious that the magic of 4G/LTE is broadband mobile connectivity and all the new services and applications it enabled (like almost everything we use today). But broadband connectivity wasn’t the goal of 4G; that was an unanticipated benefit. The original goal was to evolve the infrastructure from a circuit-switched architecture to a packet-switched one. Stuff happens along the way.

As much as each generation tries to anticipate demand, it’s impossible to do that precisely years in advance. With the cloud and the Internet, it’s quite different—innovation is much more incremental: You change the dogfood a little, then see what the dogs think. We can’t do that for mobile because of the standards, the slow cadence of progress, and because of the magnitude of investment required for a global deployment of something new.

We’re at the point in the 5G dance where things are getting very real. The initial round of standards is long done. The equipment is designed. The evangelism drum has been beating for years. Everyone has heard a lot about 5G, although many don’t really understand it. All the mobile operators proudly have their first bits of 5G in the market and are touting their success and belittling their competition (oh what fun!). But it doesn’t seem to be going all that well, at least so far.

How can we say that when the ads and offers you hear sound like it is? It’s not just us—that seems to be what the operators and subscribers are saying as well, at least according to an interesting new report from Ovum Research on 5G pricing. If 5G were meaningfully “better” to end users, then it stands to reason that operators would be charging a premium for it. But, by and large, they aren’t—at least that’s what Ovum seems to see and say.

Does that mean we should abandon hope and just wait for 6G? We don’t think so. We think it just means the value of 5G isn’t simply the speed; it’s something more. I think it’s in the new services and new devices enabled by the increased bandwidth and increased network performance, including the kind of edge cloud services MobiledgeX is developing.

The one operator that Ovum calls out as doing more than the obvious is SK Telecom—the large Korean operator. They are investing much more heavily, including in innovative services and edge infrastructure, and they are aggressively working with local developers to nurture discovery and innovation. 

The good news is we’re at an exciting point in 5G, the “put up or shut up” point. The bad news is that the value of 5G isn’t simple to calculate—it’s not just a faster reading on an Internet speed test. If you’ve been in mobile as long as I have, that’s hardly surprising. Finding the real value will take a little more work, but it will also be much more rewarding—it’s about new devices and new user experiences and all sorts of human and machine services. The value of the public cloud wasn’t obvious to most when Amazon introduced virtual CPUs and virtual storage. It’s a lot more exciting now because of all the innovation built on top of it.

We don’t think all the good in 5G comes from edge cloud services (not that we would mind that), but we’re pretty sure that being able to develop services much nearer to the user, integrated with (and taking advantage of) the unique characteristics of the global mobile infrastructure, is an important part of it.

If you wait for this all to be obvious, it will be too late to benefit much from it (doh!). The innovators are the ones standing up lots of new bits and encouraging exploration and innovation. If you think monetizing 5G is going to be about speed alone, we think that is clearly not the case. It’s going to be more interesting and a lot more fun.

San Francisco-based MobiledgeX is creating a marketplace of edge resources & services to power the next generation of applications.

The Data Layer Challenge at the Rapidly Evolving Edge

By Ellen Rubin

CEO, ClearSky Data


In the early days of the cloud, it was common to hear tech visionaries talk about how nearly all enterprise infrastructure would soon migrate to a few large cloud providers. But while the dream of liberating enterprise IT from managing on-premises infrastructure is still very much alive, the early cloud hype clearly got out ahead of reality, because it turns out that public cloud has an Achilles’ heel: lag. 

The three big public cloud providers build their enormous facilities in sparsely populated geographies where real estate is cheap, which means these data centers are usually hundreds or even thousands of miles from the cities where customers are located. Even the speed of light isn’t fast enough to overcome those kinds of distances, and the result is unavoidable: unacceptable latency. 
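To put rough numbers on that claim, here is a back-of-envelope sketch (assuming signals propagate through fiber at roughly two-thirds the speed of light in vacuum, and ignoring routing hops, queuing, and processing delays, which only make things worse):

```python
# Minimum round-trip propagation delay over optical fiber.
# Assumption: signal speed ~2/3 of c in vacuum; real-world latency is
# higher once routing, queuing, and serialization are added.

C_VACUUM_KM_S = 299_792                      # speed of light in vacuum, km/s
FIBER_KM_S = C_VACUUM_KM_S * 2 / 3           # approximate speed in fiber

def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds for a one-way distance."""
    return 2 * distance_km / FIBER_KM_S * 1000

for km in (100, 1000, 3000):
    print(f"{km:>5} km one-way: {round_trip_ms(km):5.1f} ms minimum RTT")
```

Even at a physics-best 10 ms round trip over 1,000 km, a far-off data center cannot meet the single-digit-millisecond budgets that many interactive and machine-to-machine applications demand.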

The solution to this latency problem is the edge, which is evolving much faster than did the cloud because there’s such an obvious, urgent need. The only way to provide the performance required for emerging use cases such as IoT, connected cars, and smart cities is to bring compute and storage close to the end user. 

But though the edge’s evolution is proceeding rapidly, the edge is still early in its development, and there’s still a lot left to do. That’s especially true for the data layer, because the initial emphasis of the edge build-out has been on providing compute capabilities, with little thought given to storage. This is often the case with new technologies. After all, when containers were first introduced, they had no persistent storage, even though persistence is a basic requirement of almost any enterprise application. So while it’s not surprising that we’re seeing the same trend at the edge, it’s time we focused on the edge data layer.

In 2013, IoT generated 100,000 PB of data. By 2020, that figure will grow to 4.4 million PB, exploding to more than 79 million PB by 2025, according to IDC—and IoT is just one of the use cases for the edge. The edge will need to have a robust data layer that can not only handle a crushing amount of data, but can also address the many specific data challenges of the edge. 

Opportunities at the edge

Before taking a look at the data layer, let’s take a step back to examine how the edge is currently evolving.

At first glance, the hyperscale cloud players look to be in the best position to capitalize on the new opportunity of the edge, but just as on-premises data center incumbents initially dismissed the cloud, the big cloud providers initially dismissed the edge. Only recently have they started rolling out their own edge-based services.

But the edge is very different from a hyperscale cloud. The edge needs to be highly distributed and able to serve data sources via many small facilities connected together into a high-performance network. So while the edge must be able to integrate with the cloud, building and operating the edge requires a very different skill set.

Ironically, some colo providers that were struggling to compete in the emerging hyperscale cloud market are now perfectly positioned to provide edge services. And they’re not the only industry that is now starting to take the lead at the edge. Telecommunications carriers’ metro facilities and mobile carriers’ cell towers are located right in the midst of urban customers, and they’ve already got the power, security, and connectivity required for an edge datacenter. And, of course, there are plenty of new edge providers using new business and delivery models to stake their claim.

Storage Challenges 

But in almost every case, whatever model or mix of models wins out, storage at the edge will look very different from both traditional on-prem storage and hyperscale cloud storage. For starters, edge data centers will face space and power constraints. Most will need to be small, because real estate is very expensive in the metro areas where these facilities must be located, and at many sites power will be in short supply and aggressively metered.

Edge storage must also be distributed and highly connected. Autonomous vehicles, for example, will need to communicate with multiple facilities as they move in and out of range, and data will need to follow them. Storage at the edge will also need to interact effortlessly with apps and end users on-premises and in the cloud. After all, the edge needs the cloud just as much as the cloud needs the edge. The gargantuan amount of valuable data that edge use cases will generate cannot be stored forever at the edge—it’s far too expensive. The cloud is the perfect place to store and analyze big data, so long as it doesn’t require a fast response.
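The edge/cloud division of labor described above can be thought of as a tiering policy: latency-sensitive, recently used data stays at the space-constrained edge, while colder data ages out to cheap cloud storage. A minimal sketch of such a policy follows; the class, field names, and 24-hour threshold are all illustrative assumptions, not any vendor’s design:

```python
from dataclasses import dataclass

@dataclass
class DataItem:
    key: str
    age_hours: float         # time since last access
    latency_sensitive: bool  # does an edge application need fast reads?

def place(item: DataItem, edge_retention_hours: float = 24.0) -> str:
    """Decide where a data item should live: 'edge' or 'cloud'.

    Illustrative policy: latency-sensitive or recently accessed data
    stays at the (expensive, space-constrained) edge; everything else
    ages out to cheap cloud storage for retention and analytics.
    """
    if item.latency_sensitive or item.age_hours < edge_retention_hours:
        return "edge"
    return "cloud"

print(place(DataItem("sensor-frame-001", age_hours=0.5, latency_sensitive=True)))    # edge
print(place(DataItem("archived-telemetry", age_hours=720, latency_sensitive=False))) # cloud
```

A real edge data layer would add replication for failover and automated movement between tiers, but even this toy rule shows why the data layer needs policy, not just capacity.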

Finally, data at the edge will need to be protected, and that’s no simple matter when storage is so distributed. If one edge facility is destroyed or disabled by fire or a lightning strike, there needs to be instant failover to another facility nearby, with access to the same data. 

Both traditional storage hardware and storage systems designed for cloud environments are too large and power-hungry to deliver enough capacity to power edge applications within the constraints of edge environments. Plus, neither currently has the intelligence required to automate the movement of data across a large, distributed network—a key capability for the edge. In short, we must deconstruct the traditional storage architectures and rebuild them to suit the needs of the edge.

Additionally, organizations should be very wary of “edge-washing,” which is when a company slaps an “edge-ready” or “edge-compatible” label on a product or service that, in reality, is no different than it was before the new branding. Because the edge is developing so much faster and the need is much more apparent, there’s an even greater danger of “edge washing” than there was for “cloud washing” less than a decade ago.

The rise of the edge is now making the next-generation capabilities promised by the cloud a reality. It’s enabling enterprise IT to decommission on-premises infrastructure, powering advanced IoT use cases and paving the way for smart cities. But the edge can’t do any of this without a next-generation data layer that can address its unique storage needs.

It’s time for the industry to start giving the data layer the attention it deserves. 

ClearSky Data, based in Boston, uses the cloud and the edge to provide on-demand primary storage with built-in offsite backup and disaster recovery (DR) as a service.


Bridging the Last Mile: Convergence at the Infrastructure Edge


By Cole Crawford

CEO, Vapor IO

The last mile represents the final hop in our end-to-end telecommunications networks, where data bridges from infrastructure to device. Examples include the coaxial systems owned by cable providers, wireless networks owned by telecom operators, and fiber-optic systems offered by the likes of Verizon Fios and Sonic.

For as long as communication networks have existed, the last mile has presented unique infrastructural challenges and opportunities. As the most distributed portion of the internet, the last mile is often the hardest to build and operate. By being the closest to the end user or device, however, it’s also the most important part of the network for enabling next-generation applications.

The last mile occupies a strange place in the networking world. Without it, the people and devices depending on the internet wouldn’t be able to access it. Yet, despite the crucial importance of the last mile network, investment has not kept pace with demand. Historically, it has required network operators to make massive investments in fixed fortifications. 

Although the last mile provides one of the most crucial links in our internet transport system, the inflexible investments required have made it the most underserved. For most of its recent life, we’ve treated the last mile as a “dumb pipe”—a way to get bits to and from the internet, but not a place where any interesting compute occurs, or where significant value is added.

Edge Innovation

Over the past few years, we’ve seen an exploding demand for low-latency and high-bandwidth connections to the internet. This has been driven, in part, by the proliferation of smartphones and the popularity of over-the-top (OTT) services such as Hulu and Netflix, but today, the billions of new connected devices and petabytes of data we expect them to generate have been eclipsing these early drivers.

Latency—a measure of how long it takes a piece of data to reach its destination—needs to be lower than it is today for most internet users to support new types of applications. Streaming gaming and autonomous vehicles, whether cars or drones, are some of the most popular examples. At the same time, bandwidth—a measure of how much data can be received over a network connection per second—needs to be higher than it is today for most internet users to support the needs of many of these same emerging applications.

Network operators have turned to transformative technologies, such as network function virtualization (NFV), software-defined networking (SDN), and cloud radio access networking (C-RAN), to support this new demand. These new network capabilities will rewrite how we design, build and operate networks by removing the need for fixed appliances in the network topology. Rather than deploying more closed-box devices for network functionality, operators are replacing them with software running network functions in cloud-like environments atop general-purpose servers.

Convergence At The Edge

Historically, internet applications have taken data from the edge and transported it to the “cloud,” which was most often instantiated on servers in some far-off datacenter. However, applications of the near future will demand that the cloud come to them, which means building micro data centers at the edge of the last mile, to house cloud servers near the data and devices they support.

These new cloud servers will be embedded at the edge of the last mile infrastructure. The deployment of infrastructure edge computing—in the form of micro data centers at the edge of the wireless and wired networks—will bring powerful cloud resources to the edge. By turning network functionality into software running at the edge, the historically separate silos of networking and compute will converge to operate seamlessly together on the same underlying infrastructure. Compute and storage will fan out across the network, creating a gradient of cloud resources that occupy new edge data centers extending all the way to the last mile.

Fixing The Last Mile

Solving the challenges of the last mile is not as simple as it seems. Consider the cellular network: Data sent from one device to another attached to the same cell tower, or to the internet, cannot take a straight path to its destination. Instead, due to convoluted legacy network architectures, data in transit often takes a meandering, inefficient path, sometimes “tromboning” (looping out and back) thousands of miles to do so.

Fixing the last mile requires real convergence between the networking and compute layers of the internet. Edge data centers deployed at the infrastructure edge—that is, at locations on the operator side of the last mile, such as near the base of cell towers—become the catalyst for this fundamental rearchitecting of the internet. Deploying thousands of edge data centers, each with its own integrated meet-me rooms and internet exchange points, makes it possible to fully converge networking and compute, bypassing the awkward and inefficient legacy data routing.

Brilliant transformation is occurring not just in nature, but in our networks as well.

In winter months, dark, barely alive branches of trees stretch out as far as the roots have managed to grow. Spring will bring life again with new, vibrant leaves appearing at the very tip of every branch. As in nature, the tree trunks and branches get us there, but the leaves are where the magic happens. By making it possible to converge network and compute at the edge, infrastructure edge computing becomes the leaves of a next-generation internet, and the way to fix the last mile.


The Event-Driven Edge is the Most Important Idea in Cloud Computing

By Chetan Venkatesh

CEO & Co-founder, Macrometa Corp. 

For those who closely follow the evolution of the edge, a big question to reckon with is whether the edge is the mortal enemy of the cloud or its friendly ally. Will edge architectures break the cloud free from the shackles of centralization?

Those who dabble in such intellectual explorations often miss the true simplicity of the “cloud-edge” dichotomy. Like two sides of the same coin, the edge and the cloud are forever opposite each other, yet two parts of the same whole. In this post, we expose the fundamental differences between the edge and the cloud by exploring how the edge is really meant for a new type of application architecture — an event-driven architecture (EDA) — while the cloud’s roots in client/server interactions will continue to favor a request-response architecture.

Decoupling Clients & Servers

Event-based and request-based systems are polar opposites of each other. In a request-response system, the receipt of a request triggers an action; in an event-driven system, the receipt of an event triggers a reaction in whichever downstream systems are listening for it.

Fundamentally, the nature of an event is very different from the nature of a request. A typical client request says, “Please do this for me,” while an event says, “Hello, this thing just happened.” In a request-response system, the client chooses what action to take. In an event-driven system, the client merely states that some event has already happened.

The decoupling of client and server in an event-driven architecture is even more extreme than in client/server architectures because the event processors (a.k.a. the servers) have no obligation to the event generators (a.k.a. the clients). While the clients (or event generators) in an event-driven system are obligated to report events and to respond to directives, a client in a request-response system has no obligation to the server. It can either make a request or not. Servers, on the receiving end, are expected to fulfill requests.


                            Request-Based                        Event-Based
Payload                     Request receipt                      Event receipt
Nature of payload           “Do this”                            “This event happened”
Obligation to fulfill       At Server                            At Client
Interpretation of payload   On Client (what request to call)     On Server (what to do about this event)
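The request/event contrast can be sketched in a few lines of Python (all names here are illustrative):

```python
# Request-response: the client decides what to do and asks the server to do it.
def handle_request(request: dict) -> dict:
    # The server is obligated to fulfill the named action and reply.
    if request["action"] == "resize_image":
        return {"status": "done", "result": "image resized"}
    return {"status": "error", "reason": "unknown action"}

# Event-driven: the client only reports that something happened;
# each interested consumer decides for itself how (or whether) to react.
subscribers = []

def subscribe(handler) -> None:
    subscribers.append(handler)

def publish(event: dict) -> None:
    for handler in subscribers:
        handler(event)  # no response is owed to the event generator

subscribe(lambda e: print(f"audit log: {e['type']}"))
subscribe(lambda e: e["type"] == "image_uploaded" and print("thumbnailer: queuing resize"))

publish({"type": "image_uploaded", "key": "photo.jpg"})
```

Note that publish() returns nothing to the event generator: the interpretation of the event, and any decision to act on it, lives entirely with the subscribers.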


Where does the Edge fit in?

Digital services usually instantiate in response to specific actions triggered by business-related events and situations. These events and their associated opportunities are mostly lost in a traditional request-driven model, which uses rigid hierarchies and orchestrated responses to specific requests while ignoring everything else that is going on. This is efficient for simple and invariant tasks, but it fails spectacularly in services that need to interact with humans and, especially, anticipate what might be needed.

Event-driven architectures continuously monitor and store incoming events while letting the server decide how best to respond and which actions to take. This opens up the possibility of not just making real-time decisions in response to those events, but also of responding differently to the same event because of the additional context that related events or conditions might provide.
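A toy sketch of that idea: an event processor that keeps a small window of recent events, so the same incoming event can produce different responses depending on accumulated context (the thresholds and event semantics are invented for illustration):

```python
from collections import deque

class TemperatureMonitor:
    """Keeps a window of recent readings; the same event can produce a
    different response depending on the accumulated context."""

    def __init__(self, window: int = 5, threshold: float = 30.0):
        self.recent = deque(maxlen=window)
        self.threshold = threshold

    def on_event(self, reading: float) -> str:
        self.recent.append(reading)
        if reading < self.threshold:
            return "ok"
        # Same "high reading" event, different response based on history:
        if sum(r >= self.threshold for r in self.recent) >= 3:
            return "alert"   # sustained heat: escalate
        return "watch"       # isolated spike: just note it

m = TemperatureMonitor()
print([m.on_event(t) for t in (25, 31, 26, 32, 33)])  # ['ok', 'watch', 'ok', 'watch', 'alert']
```

The second and fourth high readings produce only a "watch", while the fifth, arriving with two hot readings already in context, escalates to an "alert": identical events, different decisions.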

Where are these events happening? Well, we know where they are not happening: they are not happening near the giant centralized datacenters we call clouds! Instead, they are happening in the real world, all around us—in our homes, offices, on the streets, in cafes, and in our cars. They are happening every time we use our smartphones, turn on an appliance, or step into a mall, a coffee shop, or a hair salon. And this is where the edge comes into play.

By putting event-driven applications and web services at the edge, we can take advantage of the infinite streams of event data to create remarkable new digital services that don’t just record what humans and their devices are doing, but can also anticipate and predict what might happen next.  

That incredible future of intelligent systems at the intersection of man and machine is not going to be built on deep learning, statistical regression, and generative adversarial networks alone. It is going to be expressed as event-driven distributed systems that run across hundreds of thousands of edge servers in each and every metropolis, town, village, and street corner. That is why the event-driven edge is possibly the most important new development in cloud computing.

Based in Palo Alto, California, Macrometa provides “Geo-Distributed Fast Data as a Service” for cross-region, multi-cloud and edge computing applications.


A Tug of War at the Edge

By Joseph Noronha

Director and COO, Detecon Inc.

Operators around the world are falling over themselves to claim to be first in the 5G race with a string of announcements and rollouts across Asia, Europe and North America. For many players, a parallel track is the “cloudification” of their own networks, catalyzed by deploying compute in their central offices (CO) as a start to running their own virtualized network functions. With an eye to the developments around newer latency-critical applications, some of them (e.g. Verizon, Deutsche Telekom) have begun to roll out infrastructure dedicated to serve third-party applications. Fully cognizant of the fact that the operators themselves have a non-existent track record of working with developers, some of them have even founded secondary companies (e.g. envrmnt, MobiledgeX) to attract this hitherto elusive community.

While this go-it-alone strategy may seem sound at first (“We build infra and work with another firm that offers services”), certain elements merit a rethink of this approach:

  • Developers: The companies that build applications want to reach as large an audience as possible, which means being able to deploy the same applications globally, across many operators. They also want to use the tools and platforms they are already familiar with. This requires not only a developer-friendly interface, but also standardized APIs and infrastructure. Operators, by their very nature, are regional at best, both in geographical coverage and in mindset, and in their urge to stand out, commonality and similarity of infrastructure are not high on their priority list. The potential result is that infrastructure standards, when developed, are reduced to the least common denominator (suffering the fate of Joyn) or remain non-standardized across the world.

  • Cost: The second and equally important element is cost, specifically the price at which a compute cycle can be made available to a developer. Having failed to compete with the cloud platforms of Amazon, Microsoft and Google, operators such as Verizon and AT&T have sold off their large-scale datacenter assets and exited the business. Players like Amazon Web Services, on the other hand, have developed massive economies of scale, evidenced by their ability to lower prices more than 70 times over the past decade—something hard to imagine an operator doing of its own volition. While operators are experts in managing telco assets, they have not been successful in building and operating a distributed cloud infrastructure at lowest-unit-cost economics. To do so would require them to compete with the purchasing power of a hyperscale provider, while also adapting to the maintenance and replacement of equipment with a shorter lifecycle (about 3 years for cloud servers, compared to the typical 5-7 years for telco equipment). The operators would also have to build a highly efficient cloud operating model. On their own, an operator-created cloud will likely have high costs, and if costs remain high, developer interest will remain scarce except for the most demanding applications that cannot function without the edge, which in turn would dramatically reduce the overall available market.

  • Mindset: The third element is mindset, specifically that of sharing infrastructure, especially with third parties. While operators have computing infrastructure in their networks, they have traditionally only utilized it for their own internal operations, as part of their overall virtualization strategy. Deploying servers to run virtualized network functions has been driven by cost-optimization, not by a desire to enable new third-party services. While there are valid arguments against sharing—cybersecurity and network reliability being two of the most salient—the tradeoff is often underutilized idle resources. However, if there was a way to safely and securely access untapped capacity—if it were possible to operate a multi-tenant environment with the operator as an anchor tenant—it would turn a hitherto cost element into a revenue-generating engine.

Cloud providers have already confronted and dealt with many of these very same issues operators face. For example, the cloud providers:

  • Have fostered a rich developer ecosystem. They have access to and work with a large population of developers. They offer standardized infrastructure that’s accessible to developers around the globe, bridging national and regional constraints. While an operator could offer access to a country, a cloud provider could provide access to the world, across operators.
  • Benefit from massive economies of scale and are familiar with managing a large-distributed infrastructure in a highly automated manner. This ensures not only high availability, but also excellent unit-economics.
  • Are able to maximize asset utilization (an important consideration in a relatively constrained environment such as the infrastructure edge), enabling resource reservation down to the second.

While operators and cloud providers have contended with each other in the past, combining the strengths of these “frenemies” offers up an interesting proposition. It would incentivize the cloud players to invest towards making a uniform global edge infrastructure accessible to a legion of developers at an attractive price point. 

We are seeing glimpses of this happening. Recently, for example, AT&T announced partnerships with both Microsoft and IBM that could easily extend to include collaboration on an edge cloud. Another example is TIM teaming up with Google. This would open up the “beachfront property,” accelerating the development of new applications that depend on and can benefit from the new “edge” in the computing continuum.

Detecon Inc. is a knowledge and consulting center that focuses on digital innovation trends originating from Silicon Valley. As part of the German-based Detecon Group, Detecon USA has a dual mission – serving as Detecon’s innovation spearhead and managing its Americas operations.

How Much Will the Edge Cost?


By Iain Gillott
Founder and President of iGR


Factors impacting the cost of the edge

Edge computing emerged on the wireless industry stage several years ago, and there are several different versions and approaches. Regardless of the approach, edge computing has the potential to be as disruptive a technology as any of the other fundamental transformations affecting the wireless industry, including 5G New Radio, NFV/SDN, C-RAN, etc. In fact, edge computing will quite likely help realize the promise of 5G, particularly since virtualization underpins the new 5G system architecture.

Defining the edge

Before we discuss the cost of deploying edge computing, we should first agree on a definition of edge computing in the context of wireless infrastructures. iGR defines an edge computing hardware platform as a secure, virtualized platform which can be “opened up” to third parties, such as content providers and application developers. Such a platform might incorporate a wireless technology, such as an LTE radio, Wi-Fi, 5G NR or some combination of those. Historically, most edge implementations have used Ethernet or Wi-Fi and not cellular. Over time, iGR believes that will change as private LTE networks get deployed and vendors bring 4G/5G-based IoT devices to market.

Mobile edge computing is a natural extension of the mobile cloud concept in which user equipment (UE) takes advantage of remote storage and compute resources accessed via transport provided by the Radio Access Network (RAN), to:

  • Introduce new applications that cannot run on the UE
  • Extend the battery life of the UE by offloading compute to the cloud
  • Increase storage via the cloud, or obviate the need for it entirely, as with streaming music and video.

In this model, edge computing essentially moves the cloud servers closer to the UE in order to mitigate the downsides of mobile cloud computing, including increased mobile data usage and higher latency.
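The offloading tradeoff described above can be captured in a toy decision rule: send a task to the edge only when transfer time plus remote execution beats running it on the device. All units and numbers below are illustrative assumptions, not measurements:

```python
def should_offload(task_mi: float, device_mips: float, edge_mips: float,
                   payload_mb: float, uplink_mbps: float, rtt_ms: float) -> bool:
    """Toy offload rule for mobile edge computing.

    task_mi: task size in millions of instructions
    device_mips / edge_mips: processing speed in millions of instructions per second
    Ignores energy modeling, queuing, and result-download time for simplicity.
    """
    local_ms = task_mi / device_mips * 1000
    transfer_ms = payload_mb * 8 / uplink_mbps * 1000 + rtt_ms
    remote_ms = transfer_ms + task_mi / edge_mips * 1000
    return remote_ms < local_ms

# A heavy task is worth shipping to a nearby edge server; a trivial one is not.
print(should_offload(5000, 1000, 20000, payload_mb=2, uplink_mbps=50, rtt_ms=10))  # True
print(should_offload(50, 1000, 20000, payload_mb=2, uplink_mbps=50, rtt_ms=10))    # False
```

Moving the edge server closer (lowering rtt_ms) tips more and more tasks into the "offload" column, which is exactly the effect the mobile edge is after.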

Network operators also benefit from edge computing since they are able to:

  • Offload traffic from their networks sooner, thereby reducing the burden on their RAN and backhaul
  • Use edge computing to improve the efficiencies of their network operations, which ties into the NFV/SDN trend as well as the evolution of their networks from LTE and the EPC to 5G NR and the next-gen core.
  • Offer new services to enterprises via the edge computing platforms.

Locating the “edge” and the associated cost

You could say that the edge is located where, if you take one more step, you fall off the cliff!

In networking terms, the edge typically means a location that is very close to the end consumer of the content. This can mean different locations for an operator versus an enterprise. Depending on the type of content, the type of end user and the specific needs of the application, the edge can be in very different places. The location of the edge fundamentally impacts the cost to deploy an edge solution. For example, just consider the real estate required to house the edge computing equipment and the varying costs of space in commercial buildings. 

Where an operator locates their edge computing will impact cost in many ways. Operators must consider much of the following when making these decisions:

  • How much latency can the given function tolerate? For example, does the edge have to be located close to the mobile radios for latency reasons? If so, it will be more expensive to deploy than putting the edge servers further back in the network.
  • What are the benefits of doing the computation at the edge versus somewhere else? How does the storage location of the data itself impact this cost/benefit analysis? Generally, storage and compute power are cheaper when they are clustered — think of the difference between a small data center and a large cloud center (the latter is far cheaper on a per-bit basis).
  • How is backhaul impacted? What else is going on in the network? If the edge compute installation requires additional backhaul to support those capabilities, costs will increase significantly.
  • Does the end user have a say in where the processing occurs? The answer might be different for enterprise and consumer users. If the end user has constraints, then this will increase the costs of delivering the edge services. Ultimately, this question relates to the value of the edge application being supported and the value of the customer.
  • How scalable do the edge compute resources have to be? For example, will they require excess capacity to handle additional offloading? As the solution scales, will it require additional space, with the corresponding increase in rent, power, environmentals, and so on?
  • Will the edge computing infrastructure host 4G LTE and 5G packet core components, as well as third-party applications? What measures will be put in place to prevent those third-party applications from hogging resources the network needs? Alternatively, are multiple edge compute platforms required to prevent this type of situation? Obviously, the need for multiple servers will increase deployment costs.
  • Are there security concerns regarding where the processing is done? Could hackers gain access to the 4G or 5G packet core by exploiting a weakness in the edge computing platform? How much physical security will be required? Will the edge compute solution share space with other servers or must the edge computing be located in its own secure location? Again, this will directly impact costs.

These and many other questions must be answered on an application-by-application, company-by-company and/or operator-by-operator basis. The important point is that there is a lot more to the cost of deploying edge computing than simply pricing the server hardware and installing it at an available location.

Other factors influencing cost

Edge compute servers, by definition, must be close to the end user, have a reliable source of power and backhaul, be in a secure location (both physically and from a network perspective), and be accessible for maintenance. The following factors also influence cost:

  • What maintenance may the servers require? Will a technician need to be deployed for maintenance and troubleshooting? Or is the edge computing server “disposable,” where the cost of visiting the location exceeds the cost of the hardware? In that case, if there is a problem with an edge computing server, the workload can migrate to another location and the failed server can be decommissioned.
  • Is the edge computing server on a truck or other vehicle? For example, a small server can be placed on a refrigerated truck to monitor the various sensors for the cargo, engine and other systems. In this case, any maintenance could be performed when the vehicle returns ‘home’, but the cost of backhaul must also be factored into this scenario.
  • Who owns the location for the edge computing? For example, the costs for locating at the base of a cell tower will be very different from a commercial data center or a data center in the base of an office building.
  • Are there any firewall issues? This mainly applies to enterprise locations. Edge computing can be located in a wide range of possible locations, from the enterprise’s main data center to retail and manufacturing locations, depending on the application.
  • How accessible is the location for edge computing installation and maintenance? For example, the basement of a commercial building may be physically secure and have good power, but it may not be accessible after-hours or on weekends.
  • Are there any right of way and attachment fees? Many cities charge attachment fees to locate equipment on their buildings or in their right-of-way. These vary by city and location, but the reality is that simply putting a box at the side of the road containing the edge computing node may involve significant permitting cost.


To summarize, the cost of deploying an edge computing solution involves more than just the cost of hardware, software and physical installation. In the case of edge computing, where the node is actually located has a major impact on the cost:

  • How will backhaul be provided? How much? Will the backhaul need to scale?
  • Does the edge computing node need regular updating or maintenance that will require site access? Is site access available 24×7 or on a more limited basis? Who controls access?
  • Are there right-of-way or permitting issues with the location? Who controls this?
  • What are the physical and network security considerations? How must the server hardware be protected or hardened?

Edge computing servers may be placed anywhere there is power and backhaul in a secure location. But the particulars can severely impact the cost of deployment and operation. As such, no two edge computing installations cost the same.

iGR is a market strategy consultancy focused on the wireless and mobile communications industry. Founded by Iain Gillott, one of the wireless industry’s leading analysts, we research and analyze the impact that new wireless and mobile technologies will have on the industry, on vendors’ competitive positioning, and on our clients’ strategic business plans. A more complete profile of the company can be found at http://www.iGR

Disclaimer: The opinions expressed in this market study are those of iGR and do not reflect the opinions of the companies or organizations referenced in this paper. All research was conducted exclusively and independently by iGR.



The Enterprise and the Edge

By Postcards from the Edge


By Zac Smith 

CEO and Co-founder, Packet

If you asked 100 network engineers where the “edge” of today’s internet lives, I bet a fair number of them would point you to a map of Equinix’s IBX sites. Gravity defines public networks like the internet, and Equinix (along with regional players like Interxion) have built up a portfolio of sites where most of today’s networks physically connect.

Now the shape of the internet is starting to change. 5G wireless deployments, coupled with moves by hyperscalers, eyeball networks, content delivery networks, and new kinds of connectivity (CBRS! balloons! satellites!) promise that whatever the internet looks like in 10 years, it will be different from the one we know today.

Consider the first types of deployments that make sense to move beyond the current shape of the internet: specialty applications such as industrial IoT, network security, and 4G/5G wireless.

These workloads are nothing new, but their value increases exponentially as the data from our connected world explodes. While many use cases don’t yet make sense at the edge, the edge absolutely makes sense for these large-scale, consolidated applications.

These first edge use cases have a few things in common:

  1. How it works only matters to “me” (or a few players that look like me), but my way is special;
  2. It’s extremely important / critical / fundamental to the success of my business;
  3. I’m willing to spend a lot of money to make it happen.   

Sounds a lot like a well-heeled enterprise buyer to me!

What Does This Mean for Edge Computing?

If this fast-emerging edge is the domain of an expanding group of large enterprise use cases, it stands to reason that it will be some time before we have an edge computing ecosystem that looks or feels like today’s centralized cloud: infinitely scalable at incremental cost, instantly. In fact, I would argue that we may never get there.

There is no doubt that the edge will be driven by the DevOps-style experience that has evolved along with the cloud: automation will definitely rule the roost. But will the Edge be powered by pre-deployed infrastructure, configured to meet the demands of a wide variety of use cases and accessible with the swipe of a card and $100 in free testing credits? Will resources be priced by the hour or minute, with significant burst capacity? My guess is: no!

Sure, we will see (and are seeing already) a number of amazing platforms that are extending the cloud experience to the edge. But until the traffic gravity of the broader internet moves beyond its current shape, we are going to see bespoke deployments dominate.  

Take Carriers, For Example

Wireless carriers are in many ways the archetypal enterprise. As they pump billions of dollars into the next generation of their wireless networks, we are seeing vastly different implementations. From the number of locations and type of infrastructure to the approaches around software or spectrum — it’s all pretty different. The only shared aspect may be the real estate (e.g. towers, fiber, and small cells). In short, each is trying to obtain a competitive advantage while spending only on the aspects that are unique to them — and sharing whatever else can be shared.

Why would we expect any other large scale use case to behave any differently?  Granted, automotive mobility and medical aren’t deploying with the same unified technology refresh undercurrent that is driving 5G wireless investments, but the premise is the same. If the market opportunity is big enough, then the implementations are likely to be consolidated to a few players and bespoke. It’s quite a contradiction, but it makes sense: if you are willing to spend a premium to deploy a market-leading tech substrate, you’d probably want to have it your way, right?

The Edge Opportunity is Real

With 5G deployments accelerating, the hybrid cloud market booming, and a compelling ecosystem of innovators making the technological promise of the edge a reality, I’m actually quite bullish that this year we’ll see the first iteration of a truly viable edge computing market. The big players in real estate and connectivity are making moves at the edge; a fast maturing cloud native and edge native software ecosystem is eating away at the complexity of deploying in dozens or hundreds of places; and the big hyperscalers are pushing rapidly to the edge.

As an industry and ecosystem, however, we can expect some twists and turns.  Enterprises don’t always get it right at first. But if we listen and iterate, we are likely to change the shape of the internet.  And just like with previous waves of innovation, new use cases will be unleashed that we cannot even dream of today.

Packet is a NYC-based bare metal cloud provider that empowers developer-driven companies to deploy physical infrastructure at global scale.

The Edgeless Cloud and Flatnets

By Postcards from the Edge



By Francis McInerney

Managing Director of North River Ventures

The math of Cloud Inflation says that, at some point, your smartphone becomes my server. So, forget everything you hear about edge servers harnessing the Cloud; the cloud has no edge.

There is absolutely no reason why each home in the world should not become a combination cell tower, data center and blockchain revenue engine scaling with Moore’s Law and the Memory-Density Curve.  When this happens, the Cloud loses its edge. Whence, the Edgeless Cloud.

Edgeless elements will be meshed together in topologically flat networks, or “Flatnets.”  These are virtualized, blockchain-fueled, wireless systems growing in power outside the existing telecommunications network.  Flatnets make the Cloud edgeless with no near, no far, no inside, no outside. And open a whole new set of revenue opportunities.  In short, Flatnets are the first end-to-end redesign of the telecommunications systems that connect us since Bell founded AT&T in 1877.

Instead of paying carriers for access every month, blockchain will allow users to make money from access and content on scalable, meshed data centers that they control.  Think of the Mississippi changing direction and flowing North to the Atlantic through the Gulf of St. Lawrence. Its entire ecosystem will be different.

Thus, in the process of becoming an edgeless, virtualized network, the Cloud dissolves all the phone, cable and cell companies worldwide, every company in their ecosystems and all their shareholders and employees.  Uber on steroids.

Flatnets are the logical outcome of applying Moore’s Law and the Memory-Density Curve to the FCC’s 1976 Carterfone decision.  This ruling, which made it legal to connect third-party devices to the phone system, opened the market to customer-premises equipment.  Unshackled from Ma Bell, and with no restrictions on the processors and software that users could connect to the network, the power of user devices exploded.  By projecting those trends into the future, we could map with precision the day when network polarity would reverse — when there would be more computing, networking and storage outside the network than on it — and with it the network’s revenue streams.

We have seen the effects of these trends for years in Wi-Fi.  Because it is extremely capital-efficient, Wi-Fi has gobbled up large parts of the “App Delivery Membrane” and sucked all the growth out of cell nets.  Most cellular networks are in revenue decline. Capital-efficient flatnets in the Edgeless Cloud will eat up the rest.


This will happen in two steps: the first already taken, the second almost complete.

In the first step, we are already seeing companies deploy distributed data centers and sophisticated cloud provider services at cell towers, all connected on their own fiber backhaul.  When you realize that these clouds host the bulk of the world’s content, the impact of Flatnets hits home hard.

In the second step, we will attach meshed Flatnets to this structure.  A member of our FutureCreators program has just been granted a patent covering blockchain on all wireless devices.  Mississippi reversal-style, this will unleash huge new revenue flows for the owners of these tower-connected data centers and edge cloud services.


Francis McInerney, Managing Director at NRV, has been building businesses since the 1980s. He is the Business Model Sherpa for the Zettabyte era.



Monolith to Microservices in the Physical World

By Postcards from the Edge


By Antonio “Pelle” Pellegrino — Founder & CEO, Mutable

Mutable helps software developers create scalable and fast web services by automating DevOps and providing edge technology all around the world.

IoT devices will transform everything we do and how we do it. With the lower latency and computing power provided by edge computing, data processing happens in much closer physical proximity to IoT devices. These advances will transform industries across the board, freeing humans to pursue creative roles and letting machines take care of straightforward, repeatable tasks.

Over the past 10 years —and particularly in the last three—we have observed a shift from monolithic application development to microservices, which fundamentally changes how apps are built and deployed. A similar shift is about to upend the devices in our physical world. By treating devices not as single-function monoliths, but, instead, as a collection of smaller services, we can create a reality where devices will operate with more autonomy while also integrating and sharing data with other devices.

As the name suggests, a monolithic application is like a standalone black box. Loosely defined, this can mean that all the inputs, outputs, and everything in-between reside in a single code base. Typically, the results that come out are predictable and narrowly focused. A good example would be a common home appliance like a doorbell. While robust and reliable, monoliths are difficult to modify and costly to iterate due to the intertwined nature of the wiring of internal components.

Microservices, on the other hand, provide the independence needed for rapid iteration, language-agnostic development, and functional programming paradigms. For example, the same data can be processed by different services yielding different results. This way of developing calls for API-oriented architecture where services talk to each other.

The result is the freedom to rearrange services and their functionalities, while decreasing the time it takes to deploy products and solutions. Think of a microservice as a small sensor or an IoT device. These services can be scaled, replaced, and rebuilt independent of each other. However, these advantages come at the price of increased network communication, since everything depends on network calls to one another for additional functionality.

The Digitization of Hardware

Manufacturers have been rapidly digitizing physical hardware. This transformation has unfolded on a device-by-device, company-by-company basis, and is in need of abstractions and simplifications. Fortunately, today’s common yet complex everyday appliances can have their functions broken down into three core components: inputs, outputs, and processes.

By separating device functions into inputs, outputs, and processes, we can create an infinite combination of solutions using the same physical devices. Going back to our doorbell example, a doorbell can leverage a camera and become an extension to your security system, a remote intercom, or automatic door entry with facial recognition.

Today, the doorbell is built as a monolith: a single-purpose black box. The inputs and the outputs are the same every time. However, if we redefine the doorbell as a collection of microservices, it becomes easier to break it out of its historically closed-loop system. Yes, the input stays the same (it is a cheap camera); what changes is the access to the input, how and where it is processed, and the derived outputs.

This new doorbell can now be programmed to let you know when your kids are home. Or, it could become part of a larger network of devices that function as a neighborhood watch, capable of identifying and catching the local package thief.
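The doorbell decomposition above can be sketched in a few lines. In this toy Python sketch, each “service” is a plain function standing in for a networked API; all names (`camera_input`, `ring_chime`, `watch_for_packages`) are hypothetical illustrations, not any real product’s API.

```python
# Minimal sketch of the doorbell-as-microservices idea: one input service,
# multiple independent process/output services subscribed to it.

def camera_input():
    """Input service: the same cheap camera feed, exposed to any consumer."""
    return {"frame": "jpeg-bytes", "timestamp": 1700000000}

def ring_chime(frame_event):
    """One process/output pair: the classic doorbell behavior."""
    return f"chime at {frame_event['timestamp']}"

def watch_for_packages(frame_event):
    """A different process over the same input: neighborhood-watch style."""
    return f"scanning frame {frame_event['frame']} for packages"

# The input is decoupled from the processes, so new behaviors are just
# new subscribers -- no change to the physical device itself.
subscribers = [ring_chime, watch_for_packages]
event = camera_input()
results = [service(event) for service in subscribers]
print(results)
```

The point of the sketch is the design choice: adding the neighborhood-watch behavior means adding a subscriber, not re-wiring the monolith.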

Part of a Larger Trend

The shift from on-premises infrastructure to the cloud, combined with the evolution of virtual machines (VMs) to containers, has catalyzed the shift from monoliths to microservices. In turn, open-source tools and cloud services have continuously evolved to meet developer demand for microservices-oriented architectures.

This proliferation of tools has benefited from the patronage of industry leaders such as Netflix, Google, and Amazon, and their eagerness to publicly share their expertise. This has eased, if not removed, the aforementioned difficulties pertaining to the use and development of microservices.

Organizations using microservices have reaped the benefits of the decreased costs, increased velocity, and overwhelming versatility they generate. Anecdotal examples of these competitive advantages include Amazon developers deploying code every 11 seconds, and the 13 services used for a single Google search.

Presently, monolithic architectures dominate the physical realm. Once we leave the confines of software, the tangible devices around us are mainly built as stand-alone items.

Microservices on the Edge

To further illustrate this point, we need only look to the Tesla Model S, which is equipped with eight cameras. These cameras and sensors are essential components for its autonomous operation and advanced safety features, but what else can they be used for? What if the data feed from the car could be accessed by other services – ones operated by Tesla, but also ones operated by other companies and entities? Think of Google Maps being constantly enriched by fresh data pushes, or letting other autonomous vehicles share the data to make predictions easier, cheaper, and safer.

The Model S right now is a monolith on the cusp of joining the shift towards microservices on the edge.

Among the many challenges that must be overcome to enable this transformation, three stand out: security, latency, and bandwidth.

Security: Whenever you are dealing with mass data collection, as with the cameras on the Model S, security and use of the data become a concern. The ecosystem in which these services function will need to be secure and meet local, state, federal, and international data privacy laws. We will also need to make use of the data immediately; it is most valuable as soon as it is captured. Think of the other cars or autonomous operations happening in the vicinity: they will need to know about each other in real time and quickly react to a changing environment.
Latency: By placing edge computing infrastructure, such as micro data centers, in close proximity to the devices, data processing can happen outside the device while still providing rapid analysis and response.
Bandwidth: Many IoT devices will produce large streams of data. In the example of the Model S, there are at least eight video streams per vehicle that could be pushed out for external processing. If all of this data must traverse the entirety of the Internet to land in a centralized data center, it will place a huge strain on our Internet infrastructure, and will be wasteful when only local devices need the data in real time.
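The bandwidth concern lends itself to a quick back-of-the-envelope calculation. The per-stream bitrate and fleet size below are illustrative assumptions, not Tesla specifications; only the eight-cameras-per-vehicle figure comes from the example above.

```python
# Back-of-the-envelope aggregate bandwidth for the Model S example.
# Bitrate and vehicle count are assumed values for illustration.

STREAMS_PER_VEHICLE = 8    # from the Model S example in the text
MBPS_PER_STREAM = 4        # assumed compressed video bitrate
VEHICLES_IN_AREA = 500     # assumed vehicles served by one local area

per_vehicle_mbps = STREAMS_PER_VEHICLE * MBPS_PER_STREAM
local_total_gbps = per_vehicle_mbps * VEHICLES_IN_AREA / 1000

print(f"per vehicle: {per_vehicle_mbps} Mbps, local aggregate: {local_total_gbps} Gbps")
```

Even with modest assumed bitrates, a single local area generates tens of gigabits per second, which is the strain argument for processing the data near where it is produced rather than hauling it across the internet.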
One downside of microservices is communication overhead. Luckily, networks have caught up: one-gigabit connections are common and 5G cellular technology will soon be widely deployed. What was once a bottleneck is now an enabler that strengthens the case for edge computing and allows these devices to become connected microservices. More than ever, these devices have increased accessibility, are easier to mass-produce, and are adaptable to many different solutions.

The Fourth Industrial Revolution

If we can separate the inputs, outputs, and processes, we can create an infinite number of solutions using the same physical devices. The world of IoT will transform everything we do and how we do it, changing industries across the board, freeing humans to pursue creative roles, and letting machines take care of straightforward, repeatable tasks. This will enable a new era of machine-to-machine communication, the key to the Fourth Industrial Revolution’s success.

Who Will Build & Pay for the Edge

By Postcards from the Edge



By Monica Paolini — Principal at Senza Fili

Senza Fili is an analyst and consulting firm that provides advisory support on wireless technologies and services.

This much is clear: the days of highly centralized, monolithic networks are over, and pervasive connectivity requires a distributed topology in which more and more functionality and content is pushed to the edge. To meet our connectivity expectations and requirements, networks need lower latency, better security, higher reliability and local awareness; in other words, they need to move to the edge.

We have the technology to support edge computing and build distributed networks. And while new technologies (such as 5G, virtualization and network slicing) do not require anything to happen at the edge, they crucially benefit from it.

There is broad industry consensus on the need for, and value of, the edge, but it is still uncertain how the move to distributed networks will unfold, especially in the enterprise, alongside the rise of IoT and private networks. Who will build and pay for the edge infrastructure? Who will enable the services that will run on it and integrate them with the rest of the network? And who will run the edge infrastructure?

In centralized networks, service providers meet the connectivity needs of the enterprise and typically own the infrastructure. This makes sense because the business model is predicated on the reach, ability and expertise of the service provider to serve a large number of enterprises that could not independently own or operate the network.

In distributed networks such as Wi-Fi networks, the opposite happens. Because these networks are built to serve the specific needs of an enterprise or real estate owner, they are most commonly funded, deployed and operated by the enterprise or venue owner, either directly or through a third party that works on its behalf.

By introducing edge infrastructure, hybrid network models emerge that combine centralized and distributed elements, and are tightly integrated and yet functionally separated. This creates the opportunity – and, I would argue, also the need – for new ownership and operational models to take off, and for a wider range of relationships between service providers and enterprises.

At one end, service providers may deploy the edge infrastructure on their own dime as part of an organic network evolution towards a distributed architecture, in line with what they are doing to serve their retail subscribers. The problem with this approach is that it requires a level of investment and involvement with individual enterprises that service providers are not able to shoulder across their footprint. They will invest in some enterprises and venues but, especially as they move beyond the top enterprises and most desirable locations, their involvement will quickly fade.

At the other end, the enterprise may choose to follow in the path of the Wi-Fi model and build their own private networks that are designed to meet their requirements and connect them to WANs through neutral hosts or other parties. This approach gives the enterprise full control and visibility into their network and the ability to roll out the services and IoT applications they choose.

In the US, CBRS-based OnGo will encourage enterprises to build these types of networks. In other countries, enterprises may not have spectrum that is well suited to deploying a stand-alone network, but this is changing, and we are seeing new spectrum allocations that are more friendly to enterprise users. Enterprises see the value of on-prem connectivity and the services it supports, and they are increasingly willing to pay for it because it gives them the control over the network that they need for critical services and IoT applications.

The downside of this approach is that enterprises may have to take a more intensive role in managing the network than they wish, or than they have the capabilities to take on, and that they may not be able to fully benefit from the add-on functionality and expertise of service providers.

But there is a middle ground, where most of the potential for growth, innovation and disruption lies. It is also where most of the challenges come from because this is a largely uncharted territory, where established and new players have to build new types of partnerships and financial relationships. The difficult part is to establish a balanced ecosystem that is beneficial to all involved – i.e., the players can’t be too greedy, have inflated expectations on their role or capabilities, or have unrealistic requirements – in an environment where the contribution and value that each party brings to the table is still being negotiated.

The new business models that set a solid foundation for the deployment of the edge infrastructure and the services it enables harness the ability of the enterprise to fund and take a more active role in planning and managing on-prem networks, and the expertise of service providers in operating large networks. In many markets and verticals, neutral hosts and other intermediaries are going to play a significant role in connecting enterprises and service providers. Service providers have also started to develop closer relationships with each other, e.g., network operators and cloud service providers, or network operators and IoT players.

It is unlikely that a single business model will prevail globally, or even within a market, but there are some dynamics that will shape the edge and private network ecosystem across geographies and verticals:

  • The enterprise needs to have control and complete visibility over edge infrastructure, especially if running mission-critical or low-latency IoT applications. Service providers have to be willing to give up some of the control they have over the public network, or risk being sidelined.
  • The enterprise is better placed to decide what network meets its requirements and to pay for it. The service provider plays a crucial role in integrating the local infrastructure with the rest of the network and supporting wide-area functionality and services.
  • Service providers have the experience the enterprise needs to deploy and manage the edge infrastructure, but neutral hosts have it too and allow enterprise networks to be integrated with multiple network operators.
  • Large, medium and small enterprises have different needs, and so do enterprises in different verticals. We need a diverse set of business models and ecosystems to scale and adapt to these contexts and the players have to recognize how their role changes across them.

The balance between centralized and distributed elements that serve the enterprise is another crucial factor that will shape the edge financial and operational models. For instance, applications or verticals that have tight low-latency requirements, specific security requirements, or high levels of location-based data or processing need a more distributed topology in which the edge elements are closest to the access network. In this type of network, the enterprise will be more involved than in networks that rely more on a centralized cloud typically managed by a service provider.

Both the enterprise and service providers may try to position themselves to dominate the edge infrastructure, but it is more likely, and more efficient and cost-effective, for them to share the helm of the edge and build new and deeper working and financial relationships than they have been able to until now.