
If You Want to Understand the Edge, Just Look at Your Phone

February 6, 2020 | Postcards from the Edge

By Peter Christy

Independent Analyst


The last decade has seen a remarkable and rapid transformation of consumer and enterprise IT alike, triggered by the introduction of the smartphone and fueled by the growth of the public cloud and broadband wireless connectivity. 

Technologists tend to view the last decade’s evolution from an infrastructure perspective. Because we see the vast amounts of compute, storage and networking resources that come into play to deliver the services on our devices, we often emphasize the back-end infrastructures that power our apps. We think largely in terms of the servers and pipes that deliver the internet, and not so much about the devices that connect to them. 

But there is another perspective to explore: the way users think when they aren't, like us, infrastructure experts. For them, especially younger users (millennials), the Internet and cloud are only interesting if they're available from a phone.

From the phone in, the edge cloud looks very different: it isn’t the last thing you see on the way out from the application; instead it’s the first thing you see looking in. 

Thinking about our platforms from the device in, not the cloud out, creates a new perspective. Rather than seeing the cloud as the progenitor of the device, we see the device as the driver of cloud. By starting with what already runs on the device, then extending it with an edge cloud, we open up an entirely new class of applications, ranging from augmented and virtual reality to AI-driven IoT and autonomous robotics. 

These new applications will begin with the capabilities of the device, but leverage low-latency network connections to an edge cloud to augment the device and supplement the experience. For example, a local search can be performed using augmented reality, where having the detailed local context and rendering the augmentation on top of it is the sine qua non of the application; all of that will happen on the device.

Consider, also, issues of security and privacy. Privacy is more tractable on the device, especially if the phone platform is trusted and the applications vetted. Apple’s new credit card makes this point: Apple never knows what the card holder is buying or from where; the details of the transactions are saved on the card user’s phone, but are inaccessible to Apple. As Apple points out, given the architecture, they couldn’t sell your purchase history to anyone even if they wanted to because they can’t even see it. 

The edge of the Internet can be made secure and private even though the Internet as a whole is anonymous and spoofable. The user is well-known at the edge, and edge network domains can be isolated and protected from the Internet at large. If the edge access provider knows who the user is, and where they are, then they can also ensure that group, national and regional regulations are applied transparently (and hence complied with), a problem that is very challenging to solve in the cloud writ large.

Finally, it’s worth touching briefly on the remarkable and quite counter-intuitive nature of the modern smartphone, and all the derived computer-based devices like drones that re-use phone technology.  

We’re used to a hierarchy of computers, where the server is more powerful than the desktop PC; the desktop PC is more powerful than the laptop; and the laptop is more powerful than the handheld device. The most expensive computer is the most powerful, right? Not so fast! That’s often no longer true. Manufacturers build smartphones in such high volumes (over 1.5 billion last year), that they can define and dictate the components they use. Server and PC designers have to use what’s available.  And smartphone refresh cycles are so frequent and lucrative that the largest vendors (e.g., Apple and Samsung) can design anything that is technologically feasible into a new phone and manufacture it using the most modern semiconductor process. 

Because of this strange inversion, smartphone device capabilities can often far exceed the capabilities of a typical server for specific applications. For example, the custom hardware on the iPhone 11 makes the phone capable of photography and facial recognition tasks that put most servers to shame. For these applications, the smartphone is many times more powerful than a typical server. Although AR-optimized phones haven't been released yet, it's safe to assume the same will be true.

While it's reasonable to assume that an application's power comes mostly from a server-based backend, this is not always the case. For many phone applications, much or most of the power is in the phone, as counter-intuitive as that may be.

So, next time you're trying to understand how you might use the edge cloud, make sure to think about it outside-in and not just from the cloud heading out, like everyone around you on the street, heads tilted down. I think you may be surprised by the difference.