Postcards from the Edge

Announcing the Second Annual Edge Woman of the Year Award

By Postcards from the Edge

State of the Edge and Edge Computing World Present the Second Annual Edge Woman of the Year Award

Edge Computing Industry Seeks to Recognize Women Shaping the Future of Edge and Invites Nominations for 2020

July 01, 2020 09:00 AM Eastern Daylight Time

AUSTIN, Texas–(BUSINESS WIRE)–State of the Edge, an open source project under the LF Edge umbrella dedicated to publishing free research on edge computing, and Edge Computing World, an event that brings together the entire edge ecosystem, have announced they are accepting nominations for the Second Annual Edge Woman of the Year Award 2020. The award recognizes leaders who have been impacting their organization’s strategy, technology or communications around edge computing, edge software, edge infrastructure or edge systems. The organizers encourage industry participants to nominate their colleagues and invite qualified women to nominate themselves. The “Top Ten Women in Edge” finalists will be selected by the organizers and the final winner will be chosen by a panel of industry judges. Finalists will be announced at Edge Computing World, being held virtually October 12-15, 2020.

“By honoring the innovative women pushing the edge computing industry forward, we acknowledge the importance of their work and the continued need for diversity in a burgeoning and innovative field,” said Candice Digby, Partnerships and Events Manager at Vapor IO. “We are thrilled to host the second annual Edge Woman of the Year award program and look forward to honoring this year’s leader.”

State of the Edge and Edge Computing World are proud to sponsor the second annual Edge Woman of the Year Award, presented to outstanding female and non-binary professionals in edge computing for exceptional performance in their roles elevating Edge. This award highlights the growing importance of the contributions and accomplishments of women in this innovative industry. Nominations are now being accepted, and can be entered here.

Nominees will be evaluated on the following criteria:

  • Career contributions and involvements (ex. industry associations, open-source contributions, etc.)
  • Overall involvement in greater technology industry and demonstration of leadership qualities
  • Specific contributions to edge computing (team projects and collaborations admissible)
  • Contributions and involvement need not be technical; the award may be given to those in functions that include senior leadership, sales, marketing, etc.

The Advisory Board of the 2020 Edge Woman of the Year Award includes:

  • Nadine Alameh, CEO, Open Geospatial Consortium
  • Samantha Clarke, Director of Business Development, Seagate Technology
  • Michelle Davis, Manager, DoD/IC Specialist SA team, Red Hat
  • Eliane Fiolet, Co-Founder, Ubergizmo
  • Janet George, GVP Autonomous Enterprise, Oracle Cloud
  • Maribel Lopez, Founder and Principal Analyst, Lopez Research
  • Maemalynn Meanor, Senior PR and Marketing Manager, The Linux Foundation
  • Carolina Milanesi, Founder, The Heart of Tech
  • Molly Wojcik, Director of Education & Awareness, Section

“It was an honor to acknowledge an exceptionally strong group of nominees last year, and we look forward to again recognizing those iterating on edge computing technology in exceptionally creative ways this year,” said Gavin Whitechurch of Topio Networks and Edge Computing World. “It is imperative we take note of and acknowledge our colleagues leading the edge computing revolution, and we look forward to doing that with this year’s Edge Woman of the Year award.”

For more information on the Women in Edge Award, please visit http://www.edgecomputingworld.com/edgewomanoftheyear.

About State of the Edge

State of the Edge is an open source project under the LF Edge umbrella that publishes free research on edge computing. It is a Stage 2 project (growth) under LF Edge and is divided into three working groups: Open Glossary of Edge Computing, the Edge Computing Landscape and the State of the Edge reports. All State of the Edge research is offered free-of-charge under a Creative Commons license, including the landmark 2018 State of the Edge report, the 2019 Data at the Edge report and, most recently, the 2020 State of the Edge report.

About Edge Computing World

Edge Computing World is the only event that brings together users and developers with the entire edge ecosystem to accelerate the edge market & build the next generation of the internet. For 2020 the virtual event focuses on expanding the market, with new features including the Free-to-Attend Edge Developers Conference & the Free-to-End Users Edge Executive Conference.

State of the Edge Joins LF Edge

By Postcards from the Edge

On April 8, 2020, State of the Edge became part of The Linux Foundation. This extends a long-standing relationship between the two organizations, which began in 2018 when we contributed the Open Glossary of Edge Computing to the foundation and it became a top-level project within the LF Edge umbrella.

Founded in 2017 by industry pioneers Vapor IO, Packet by Equinix, Edge Gravity by Ericsson, Arm, and Rafay Systems, the State of the Edge organization has published three major edge research reports, all offered free-of-charge under a Creative Commons license: the landmark 2018 State of the Edge report, the 2019 Data at the Edge report and, most recently, the 2020 State of the Edge report. The organization’s founding co-chairs, Matthew Trifiro, CMO of Vapor IO, and Jacob Smith, VP Bare Metal Strategy & Marketing of Equinix, will remain as co-chairs of State of the Edge.

Operating under the auspices of The Linux Foundation, State of the Edge oversees three project working groups:

  • the Open Glossary of Edge Computing
  • the Edge Computing Landscape
  • the State of the Edge reports

Many believe edge computing will be one of the most transformative technologies of the next decade, and State of the Edge seeks to document it.

An open and collaborative community of organizations and individuals, State of the Edge seeks to cultivate a passion about the future of edge computing. The project seeks to advance edge computing through research, consensus-building, ecosystem development and effective communication. To that end, State of the Edge reports curate contributions from a diverse community of writers and analysts. By including many voices, State of the Edge publications avoid the often incomplete, skewed and overly vendor-driven research typically available.

Democratizing Edge Computing Research

The first State of the Edge report, released in 2018, established a baseline of knowledge from across the edge computing industry. This made it possible for readers to accurately assess what edge computing meant for them, their customers and their unique use cases. This first report covered what were many new and often misunderstood concepts, tying them together in a way that enabled more people than ever before to appreciate and understand the edge.

At the beginning of 2019, we created Data at the Edge using a grant we received from Seagate. And at the end of 2019, we released the State of the Edge 2020 report. The project participants are especially proud of the 2020 report because it offered a comprehensive forecast model for edge computing that predicted capital spend on data centers and related infrastructure. While forecast models on edge computing exist, they are often proprietary and are not built transparently. Moreover, they are typically locked behind expensive paywalls that limit the number of people who can benefit from them.

The State of the Edge is run as an open source project and publishes all of its reports under a Creative Commons license, making them freely available to anyone who is interested. This approach allows the community to benefit from shared knowledge and valuable research on edge computing, without limiting it to those with money to spend.

Available to Read Now

The State of the Edge 2020 report is available to read now for free. We encourage anyone who is interested in edge computing to give it a read and to send any feedback to State of the Edge.

International Women’s Day: Edge Woman of the Year

By Postcards from the Edge

In honor of International Women’s Day, we wanted to again celebrate our first ever Edge Woman of the Year, Farah Papaioannou, Co-Founder and President of Edgeworx, Inc. This award was created by State of the Edge in partnership with Edge Computing World. Teams from both organizations worked with a panel of judges to select Papaioannou from ten impressive finalists and presented her with the award at the Edge Computing World 2019 conference last December.

The judges recognized Papaioannou for her outstanding impact on the edge computing industry and her multidimensional leadership in the technology industry, including venture capital, edge cloud computing, and open source projects. Under her leadership, Edgeworx has become widely recognized as one of the leaders in the Edge Cloud Computing category, and Papaioannou has been lauded for her relentless work on the ioFog open source project, which is part of the Eclipse Foundation. ioFog is the fastest-growing Eclipse IoT project, with core parts of the project being downloaded over 100,000 times a month.

Presenting the First Edge Woman of the Year Award

Matthew Trifiro (Vapor IO), Farah Papaioannou (Edgeworx), Gavin Whitechurch (Edge Computing World)

As the edge computing industry grows, innovative leaders like Papaioannou will become the driving force behind the future of data infrastructure and tomorrow’s networking technologies. The Edge Woman of the Year Award seeks to highlight the important contributions female leaders are making in the industry and the value of their voices in its future evolution.

This year’s Edge Woman of the Year 2020 Award will begin accepting nominations on June 16th. This year’s award ceremony will take place in October in Santa Clara at Edge Computing World 2020.

Edge Computing World 2020 will be expanded to include a dedicated two-day developer conference on the Developer Stage, which will be the main feature of a 22,000 sq ft exhibition of edge solutions. Programming content for the show will also double in size, from two tracks to four. The conference will emphasize vertical markets, featuring several one-day vertical summits, including the Industrial Edge Summit, Auto Edge Summit, Cloud Gaming Summit, Retail Edge Summit & Telco Edge Summit. The show will also feature technical summits, including an Edge AI Summit. The annual event is the leading place to share the latest edge information and best practices.

For more information on Edge Computing World 2020, visit: https://www.edgecomputingworld.com/2020-edge-computing-world-brochure/

For more information on the Women in Edge Award visit: http://www.edgecomputingworld.com/edgewomanoftheyear.

Following up on the 2019 Edge Woman of the Year Award, an Interview with Farah Papaioannou

By Postcards from the Edge


Last December, State of the Edge and Edge Computing World presented edge industry leader Farah Papaioannou with the First Annual Edge Woman of the Year Award 2019. It was a great honor to present Papaioannou, Co-Founder and President of Edgeworx, Inc., with the trophy at Edge Computing World on December 11, 2019, hosted in Silicon Valley.

Listen to Papaioannou discuss edge computing and women in the industry in this exclusive IT Visionaries interview.

Presenting the First Edge Woman of the Year Award

Matthew Trifiro (State of the Edge), Farah Papaioannou (Edgeworx), Gavin Whitechurch (Edge Computing World)

The Edge Woman of the Year 2019 award recognized Papaioannou for her outstanding impact on the edge computing industry and her multidimensional technology leadership, including venture capital, edge cloud computing, and open-source projects. 

The Edge Woman Award highlights the growing importance of contributions and accomplishments of women in this innovative industry. As the edge industry continues to grow, increasing the diversity of voices of those who drive this industry forward will become essential to its success. By recognizing the work done by female leaders in the field, the creators of this award hope to promote the future growth of female opportunity and leadership in edge.

The organizers encouraged the edge industry to nominate colleagues, and also encouraged qualified women to nominate themselves for the award. Nominees who impacted the direction of their organization’s strategy, technology or communications around edge computing, edge software, edge infrastructure or edge systems were eligible. The “Top Ten Women in Edge” list of finalists alone is an impressive snapshot of the difficult choice the panelists and judges faced when it came time to award the designation.

Papaioannou is one of those ten outstanding women recognized by the judges for their leadership efforts. Under her leadership, Edgeworx has become widely recognized as one of the leaders in the Edge Cloud Computing category, having also been recognized as a finalist for the Leading Lights Award for the Most Innovative Edge Computing Strategy.

Papaioannou has worked tirelessly to move the edge computing industry forward via the ioFog open source project, which is part of the Eclipse Foundation. ioFog has been promoted to a top-level project at the Eclipse Foundation, and is the fastest-growing Eclipse IoT project with core parts of the project being downloaded over 100,000 times a month.

“I’m honored to have been chosen as Edge Woman of the Year 2019 and to be recognized alongside many inspiring and innovative women across the industry,” said Papaioannou. “I appreciate the recognition and look forward to continuing my work in the industry; together we have a lot to accomplish.”

Don’t forget to listen to Papaioannou discuss edge computing and women in the industry in this exclusive IT Visionaries interview.

Applications at the Industrial Edge

By Postcards from the Edge

By Vineet Anshuman
Co-founder, Cloudlyte

& Vikram Balimidi
Director Of Product and Marketing, SD-WAN Services

 

Major strides have been made in the last few years in making Augmented Reality (AR), Virtual Reality (VR) and Mixed Reality (MR) more accessible and affordable, enabling the market to move towards mass adoption. What’s also becoming apparent is that enterprises and industries can really benefit from these technologies (for a primer on the differences between these technologies, click here).

According to IDC, the AR/VR market spend was around $17.8 billion in 2018 and growing at over 95%. Funding for these technologies is only increasing YoY, and the use-cases are moving beyond gaming, media and entertainment. Industrial use-cases are fast emerging as key drivers for AR/VR technology, encompassing uses ranging from design, engineering, training, and field services to live remote support.

Among these related technologies, AR is expected to be implemented more widely, with over 66% of respondents in recent Capgemini research agreeing the technology will be more impactful in real-life scenarios.

Specific use-cases are emerging in manufacturing, heavy industries, retail, utilities, automotive and construction sectors. AR is extremely useful because it enables the digitization and automation of workflows and helps companies train technicians on technology and safety, ultimately reducing manufacturing and operational errors, improving operational efficiency, enhancing collaboration and speeding innovation. The end results are significant cost savings and enhanced top-line results.

  • Boeing has used augmented reality for technicians, increasing productivity by 40% and reducing wiring production time by 25%.
  • Walmart has used VR technology  to train employees; they’ve acquired 17,000 Oculus headsets for training in over 4,700 US stores. 
  • Other use-cases around remote-assistance and product training have shown improved efficiency by reducing the time for support calls by 25% and an attendant cost reduction of 35% by cutting travel expenses, as well as gains in product quality. 

AR is definitely a growing space, and the ecosystem to support these use-cases is emerging quickly. Collaboration amongst the purveyors of hardware, software, services and infrastructure ensures the best user experience is always being delivered.

However, barriers to AR/VR still exist. Today, AR/MR experiences are predominantly delivered from a centralized cloud hundreds or thousands of miles away from end users. The problem with hosting these applications in a centralized cloud is that as use-cases move towards real-time, low-latency requirements (within 10ms-20ms), moving the cloud closer to the end-user becomes an absolute necessity.

Rendering and processing complex hi-definition 3D models and visualizations in a centralized cloud and continuously delivering the results to user headsets limits the kinds of immersive experiences that can be delivered. Not overcoming this barrier will create a significant hindrance to user adoption, and ultimately the successful implementation of enterprise AR use-cases.
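
The arithmetic behind this barrier is worth making concrete. In the sketch below, only the propagation figure is physics (light travels through fibre at roughly 200,000 km/s, about 5 microseconds per kilometre); the 20 ms budget comes from the latency range cited above, and the render time is an assumed placeholder:

```python
# Back-of-the-envelope latency budget for an AR frame. Only the
# ~5 us/km figure for light in fibre is physics; the render time
# and budget are illustrative assumptions.
FIBER_US_PER_KM = 5.0          # ~200,000 km/s propagation in glass

def network_rtt_ms(distance_km: float) -> float:
    """Round-trip propagation delay alone, ignoring queuing/processing."""
    return 2 * distance_km * FIBER_US_PER_KM / 1000.0

budget_ms = 20.0               # motion-to-photon target from the text
render_ms = 12.0               # assumed time to render/encode the frame

for km in (1500, 100, 10):     # distant cloud vs. metro edge vs. on-prem
    total = render_ms + network_rtt_ms(km)
    print(f"{km:>5} km: RTT {network_rtt_ms(km):5.1f} ms, "
          f"total {total:5.1f} ms, fits budget: {total <= budget_ms}")
```

Even before queuing, processing and encoding overheads, a round trip to a cloud region 1,500 km away consumes most of the budget on propagation alone, which is why the compute has to move closer.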

Today’s enterprise AR/MR applications and services require a highly responsive compute infrastructure and 24/7 availability. Some examples from live industrial environments show some of the critical challenges:

  • An airline engineer who is less experienced needs to interact with a more qualified engineer to troubleshoot and repair an engine issue. An AR application enables the technicians to interact and delivers advanced visualizations that can help reduce errors, reduce costs and speed up the repair process. When this scenario was tested, the researchers found that delays in the real-time interaction and visualization led to user acceptance issues, which impacted adoption.
  • Many use cases require the projection of large and complex 3D industrial models, combined with real-time video assistance, all overlaid onto an image of the physical environment around a worker. In order to deliver a good experience that won’t disrupt the viewer’s balance, real time calculations must be made to incorporate the user’s FOV (field of view), motion and orientation.

To deliver a low-latency experience, services and instances of the applications need to be deployed near the end-users where it is most relevant — near the edge of the network. 

Allowing AR applications to be built and delivered across the cloud-to-edge continuum requires tools that help developers access and consume the best resources in the ideal locations.

Next-generation edge computing infrastructure should move beyond the current deployment models to deliver a PaaS-like experience that:

  • Supports standard languages
  • Acts as an extension of cloud tools and development methods
  • Deploys multiple workload types, including containers and serverless
  • Supports stateful applications
  • Is fully programmable by the developer
  • Deploys infrastructure and applications intelligently
  • Is fully secure and encrypted
  • Provides monitoring, troubleshooting and diagnostics

The edge PaaS to deliver this should be flexible and scale as needed, with the ability to process large portions of an application when needed.

AR application developers and platform providers need a consumable model that lets them access the infrastructure whenever needed with minimal effort, integrating into their current CI/CD development pipelines.

For AR application developers and platform providers, edge computing offers a critical component to solve the user experience issues impacting AR applications and adoption, and it should be as easy to program as any other cloud application.

IoT, AI & Networking at the Edge

By Postcards from the Edge

by Mike Capuano

CMO at Pluribus Networks

 

5G is the first upgrade to the cellular network justified not only by higher speeds and new capabilities targeted at consumer applications such as low latency gaming, but also by its ability to support enterprise applications. The Internet of Things (IoT) will become the essential fuel for this revolution, as it transforms almost every business, government, and educational institution. By installing sensors, video cameras and other devices in buildings, factories, stadiums and in other locations, such as in vehicles, enterprises can collect and act on data to make themselves more efficient and more competitive. This digital transformation will create a better and safer environment for employees and deliver the best user experience possible to end customers. In this emerging world of 5G-enabled IoT, edge computing will play a critical role.

IoT will leverage public and private 5G, AI, and edge compute. In many cases, analysis of the IoT data will be highly complex, requiring the correlation of multiple data input streams fed into an AI inference model—often in real time. Use cases include factory automation and safety, energy production, smart cities, traffic management, large venue crowd management, and many more. Because the data streams will be large and will often require immediate decision-making, they will benefit from edge compute infrastructure that is in close proximity to the data in order to reduce latency and data transit costs, as well as ensure autonomy if the connection to a central data center is cut.
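
To make the transit-cost argument concrete, here is a minimal, hypothetical sketch (the names, thresholds and readings are all invented for illustration) of an edge node reducing a raw sensor stream to a compact summary and making the time-critical decision locally:

```python
# Illustrative edge-node aggregation: act on raw readings locally and
# forward only a small summary upstream, rather than the full stream.
from statistics import mean

def summarize_window(readings, alert_threshold=90.0):
    """Reduce a window of raw readings to one small record.

    Returns the summary dict plus whether an immediate, local
    (low-latency) alert decision is needed.
    """
    summary = {
        "count": len(readings),
        "mean": mean(readings),
        "max": max(readings),
    }
    return summary, summary["max"] >= alert_threshold

# e.g. one window of temperature readings from a single machine
window = [71.2, 70.8, 95.3, 72.1]
summary, alert = summarize_window(window)
# The edge node acts on `alert` immediately; only `summary` (a few
# bytes instead of the full stream) travels to the central data center.
```

The same pattern keeps the edge site autonomous: the alert path works even if the link to the central data center is cut, which is exactly the failure mode described above.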

Owing to these requirements, AI stacks will be deployed in multiple edge locations, including on premises, in 5G base station aggregation sites, in telco central offices and at many more “edges”. We are rapidly moving from a centralized data center model to a highly distributed compute architecture. Developers will place workloads in edge locations where applications can deliver their services at the highest performance with the lowest cost. 

Critical to all of this will be the networking that will connect all of these edge locations and their interrelated compute clusters. Network capabilities must now scale to a highly distributed model, providing automation capabilities that include Software Defined Networking (SDN) of both the physical and virtual networks. 

Network virtualization, like the virtualization of the compute layer, is a flexible, software-based representation of the network built on top of its physical properties. The physical network is obviously required for basic connectivity – we must move bits across the wire, after all. SDN automates this physical “underlay,” but it is still rigid since there are physical boundaries. Network virtualization is a complete abstraction of the physical network. It consists of dynamically-constructed VXLAN tunnels supported by virtual routers, virtual switches and virtual firewalls, all defined and instantiated in software, and all of which can be manipulated in seconds, much faster than reconfiguring the physical network (even with SDN).
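
As an illustration of how lightweight these tunnels are, the VXLAN header defined in RFC 7348 is just 8 bytes: a flags byte marking the VNI as valid, a 24-bit Virtual Network Identifier, and reserved padding, all carried over UDP port 4789. A minimal sketch of its construction:

```python
import struct

VXLAN_UDP_PORT = 4789      # IANA-assigned port from RFC 7348

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header for a given 24-bit VNI."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08 << 24                  # I flag: VNI field is valid
    return struct.pack("!II", flags, vni << 8)

hdr = vxlan_header(vni=5001)
assert len(hdr) == 8
# The inner Ethernet frame is appended after this header, and the whole
# thing rides in an ordinary UDP datagram between tunnel endpoints.
```

Because the tunnel is just a header prefix on ordinary UDP traffic, bringing a virtual segment up or down is a software operation at the endpoints, which is what makes the overlay so much faster to manipulate than the physical underlay.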

To satisfy the requirements of a real-time, edge-driven IoT environment, we must innovate to deliver cost effective, simple, and unified software defined networking across both the physical and virtual networks that support the edge. The traditional model for SDN was not based on these requirements. It was built for large hyperscale and enterprise data centers and relies on multiple servers and software licenses, incurring costs and consuming space and power. This approach also requires complex integrations to deploy and orchestrate the physical underlay and the virtual overlay, along with storage and compute.

Deploying traditional SDN into edge environments is not an attractive solution. Often there will not be space to deploy multiple servers for management and SDN control of the physical and virtual networks. Furthermore, all the SDN controllers need to be orchestrated by a higher layer “controller of controllers” to synchronize network state, which adds unnecessary latency, cost and complexity. 

In some cases, companies also deploy SmartNICs (network interface controllers that also perform networking functions to offload processing from the CPU). SmartNICs allow the packet processing associated with network virtualization to happen without burdening the primary compute (which is better utilized supporting other workloads). Also, hardware-based taps, probes and packet brokers are being deployed to support network telemetry and analytics.

Applying the network automation model we built for large centralized data centers will be expensive and cumbersome, as well as space and power inefficient, in edge environments. The industry needs to rethink the approach to edge network automation and deliver a solution designed from the ground up for distributed environments. Ideally this solution does not require additional hardware and software but can leverage the switches and compute that are already being deployed to resource constrained edge environments. 

The good news is that a number of companies are developing new approaches that deliver highly distributed SDN control that unifies the physical underlay and virtual overlay along with providing network analytics — all with no additional external hardware or software. These new technologies can utilize, for example, the fairly powerful and underutilized CPU and memory in “white box” Top of Rack (TOR) switches to deliver SDN, network virtualization, network analytics and a DCGW (Data Center Gateway) router function. In other words, these solutions have been designed with the edge in mind and are delivering powerful automation with no extra hardware and additional software licenses – supporting the edge with a cost effective solution that also saves space and power.


Pluribus Networks delivers open networking solutions based on a unique next-gen SDN fabric for data centers and distributed cloud edge compute environments.

Edge Is the New Cloud

By Postcards from the Edge

By Lance Crosby

Founder & CEO at StackPath

 

As the founder and CEO of SoftLayer (now IBM Cloud), I had the opportunity to be at the forefront of the cloud era since before anyone really knew what the “cloud” even was. With all the market confusion and rapid, drastic changes that are happening, it feels like 2009 all over again. And I can tell you that development of the “edge”—though, technically, a part of the cloud—is leading to as dramatic a revolution.

And a revolution is called for, if only to meet changes happening in the media industry. Video on demand, over-the-top media, and music/audio streaming are quickly making traditional media obsolete. In 2020 more people will watch more online video than TV, and 1 million minutes of online video will be consumed every second. Delivery speed, capacity, and quality are key, and media and entertainment companies have increasingly stringent requirements. Customers are becoming more and more impatient. If a media company can’t get content to the customers or “eyeballs” quickly and easily, somebody else will.

It used to be fine to have enormous data farms out in the middle of nowhere in a corn field in Iowa or Eastern Washington, wherever there was cheap real estate, but things have changed. The traditional cloud model where workloads reside mainly in mega-data centers that sit way out past the exurbs just isn’t good enough.

The edge has a unique and strategic location in the overall topology of the cloud. Edge-optimized workloads share computing, storage and delivery responsibilities between origin data centers and computing, storage, and delivery resources in PoPs closer to the eyeballs. That provides an opportunity for companies to decentralize processing, better aggregate and consolidate data, reduce latency, and increase security. These capabilities enable real-world, measurable business value such as faster data analysis, lower network traffic costs, optimized cloud and on-premise costs, better quality of service, and improved regulatory compliance. In the era of GDPR, this is now more important than ever.
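
The “closer to the eyeballs” placement decision can be illustrated with a toy nearest-PoP picker. The PoP list below is hypothetical, and production platforms steer traffic with anycast, DNS geo-routing or measured latency rather than raw distance, but the geometry conveys the idea:

```python
# Toy nearest-PoP selection by great-circle distance. The PoP
# coordinates are hypothetical examples, not any provider's real map.
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

POPS = {
    "dallas": (32.78, -96.80),
    "frankfurt": (50.11, 8.68),
    "singapore": (1.35, 103.82),
}

def nearest_pop(user_latlon):
    return min(POPS, key=lambda name: haversine_km(user_latlon, POPS[name]))

print(nearest_pop((48.86, 2.35)))   # a user in Paris lands on frankfurt
```

Serving that Paris user from the Frankfurt PoP instead of Dallas is the whole latency argument in miniature: the request travels hundreds of kilometres instead of thousands.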

So far, the industry has barely tried tapping the full potential of the edge. It’s mostly just produced off-the-shelf edge services. What customers need now is a true platform that is on the edge and close to the users. A platform that is secure, provides composable infrastructure services, and can quickly and easily accommodate increased demand, so they can leverage edge services and even create and deliver edge services of their own.

So, who will own the edge?

I don’t think it will be the public cloud services providers. They may own the cloud as it exists today, but they are not agnostic. With their closed ecosystems, customers won’t be able to securely and seamlessly integrate services from multiple providers at the edge as will be necessary.

It won’t be the legacy CDN service providers either, unless they find a way to stop time and invest massive CAPEX to rebuild their existing infrastructures. They are great at providing content delivery on their own infrastructures but cannot give third parties the ability to build on their infrastructure. This will be necessary to tap the full potential of the edge.

It also won’t be the legacy security service providers. They deploy their services on others’ clouds and edge infrastructures, but don’t have the expertise, scale or scope to build their own platforms.

This is exactly why I founded StackPath, a platform of secure edge services that enables developers to protect, accelerate, and innovate cloud properties ranging from websites to media delivery and IoT services. A platform that is origin agnostic, able to hyperscale, is inherently secure, and allows users to easily connect to a fully secure SaaS world. Integrated and automated through a single API and customer portal which allows anyone to build and deploy their own solutions at the edge. It includes:

  • Edge compute: Deploy container instances and virtual machines with varying levels of CPU and RAM to any edge location on StackPath’s global network. Or simplify setup by deploying functions at the edge with serverless.
  • Edge delivery: Deliver small and large objects at the edge with a global flat-rate CDN, use managed DNS to automatically route traffic to the nearest DNS server, and use object storage to eliminate data transfer charges from third party storage providers.
  • Edge security: Stop threats and bad traffic at the Internet’s edge with WAF and DDoS protection.
  • Edge monitoring: Track the performance and availability of endpoints, APIs, websites, and applications from the local perspective of users.

These services include individual “Stacks” for latency-sensitive applications. Many of our customers just use one Stack, but for those that manage multiple web properties, which may benefit from separate tracking or require different sets of tools, creating multiple Stacks is the way to go.

Similar to how many customers only use one Stack, many only use a single service. But there’s a shift happening right now. Organizations are starting to understand the performance benefits of using a single edge platform. 

Take Future PLC, a global media company with 171 million monthly users. After using StackPath’s CDN to deliver ads, they brainstormed other ways to optimize their programmatic advertising stack. An opportunity was right in front of them: serverless, or functions at the edge.

Using StackPath’s serverless product, Future PLC was able to decrease latency for their real-time bidding platform which improved the experience for their advertisers. At the same time, they were able to get rid of their old third-party provider, simplify their advertising stack, and save 30% on costs.

So when we talk about the edge, we’re not talking about a big, expensive, trendy thing that doesn’t deliver. In reality, it delivers and does much more. We’re not trying to be at the edge of the Internet here. We’re already at it, and those who join us are already seeing the benefits.

If You Want to Understand the Edge, Just Look at Your Phone


By Peter Christy

Independent Analyst

 

The last decade has seen a remarkable and rapid transformation of consumer and enterprise IT alike, triggered by the introduction of the smartphone and fueled by the growth of the public cloud and broadband wireless connectivity. 

Technologists tend to view the last decade’s evolution from an infrastructure perspective. Because we see the vast amounts of compute, storage and networking resources that come into play to deliver the services on our devices, we often emphasize the back-end infrastructures that power our apps. We think largely in terms of the servers and pipes that deliver the internet, and not so much about the devices that connect to them. 

But there is another perspective to explore: the way users think if they aren’t, like us, infrastructure experts. For them, especially for younger users (millennials), the Internet and the cloud are only interesting if they’re available from their phone.

From the phone in, the edge cloud looks very different: it isn’t the last thing you see on the way out from the application; instead it’s the first thing you see looking in. 

Thinking about our platforms from the device in, not the cloud out, creates a new perspective. Rather than seeing the cloud as the progenitor of the device, we see the device as the driver of cloud. By starting with what already runs on the device, then extending it with an edge cloud, we open up an entirely new class of applications, ranging from augmented and virtual reality to AI-driven IoT and autonomous robotics. 

These new applications will begin with the capabilities of the device, but leverage low-latency network connections to an edge cloud to augment the device and supplement the experience. For example, a local search can be performed using augmented reality, where having the detailed local context and rendering the augmentation atop it is the sine qua non of the application—and all of that will happen on the device.

Consider, also, issues of security and privacy. Privacy is more tractable on the device, especially if the phone platform is trusted and the applications vetted. Apple’s new credit card makes this point: Apple never knows what the card holder is buying or from where; the details of the transactions are saved on the card user’s phone, but are inaccessible to Apple. As Apple points out, given the architecture, they couldn’t sell your purchase history to anyone even if they wanted to because they can’t even see it. 

The edge of the Internet can be made secure and private even though the Internet as a whole is anonymous and spoofable. The user is well-known at the edge, and edge network domains can be isolated and protected from the Internet at large. If the edge access provider knows who the user is and where they are, then they can also ensure that group, national, and regional regulations are applied transparently (and hence complied with)—a problem that is very challenging when attacked in the cloud writ large.

Finally, it’s worth touching briefly on the remarkable and quite counter-intuitive nature of the modern smartphone, and all the derived computer-based devices like drones that re-use phone technology.  

We’re used to a hierarchy of computers, where the server is more powerful than the desktop PC; the desktop PC is more powerful than the laptop; and the laptop is more powerful than the handheld device. The most expensive computer is the most powerful, right? Not so fast! That’s often no longer true. Manufacturers build smartphones in such high volumes (over 1.5 billion last year), that they can define and dictate the components they use. Server and PC designers have to use what’s available.  And smartphone refresh cycles are so frequent and lucrative that the largest vendors (e.g., Apple and Samsung) can design anything that is technologically feasible into a new phone and manufacture it using the most modern semiconductor process. 

Because of this strange inversion, smartphone capabilities can often far exceed the capabilities of a typical server for specific applications. For example, the custom hardware on the iPhone 11 makes the phone capable of photography and facial recognition tasks that put most servers to shame. For these applications, the smartphone is many times more powerful than a typical server. Although AR-optimized phones haven’t been released yet, it’s safe to assume the same will be true.

While it’s reasonable to think of the power of an application as most likely coming from a server-based backend, this is not always the case. For many phone applications, much or most of the power is in the phone, as counter-intuitive as that may be.

So, next time you’re trying to understand how you might use the edge cloud, make sure to think about it outside-in and not just from the cloud heading out, like all those around you on the street are — heads tilted down. I think you may be surprised by the difference.

 

In the Clouds: The Times They Are a-Changin’


By Mahdi Yahya

CEO & Co-founder, Ori

I spent my twenties in data centers around the world, plugging in cables, building networks, and chomping on tuna sandwiches with engineers in the breakroom—all while I was coming from or going to my next Meisner class or a Coriolanus rehearsal. 

Living between the damp, smelly backstage hallways of London theatres and the cold and noisy ones of Telehouse was probably the most formative time of my life. Both are hidden, distant worlds, unacknowledged by the general public, yet continually evolving to produce a better show for the masses.

There is no doubt that the cloud—the technical backbone of our current modern world—is changing. And a lot of that change is attributed to the recent rise of investments in edge computing.  

However, while the ecosystem around edge computing is on the rise, we should stop looking at edge computing as a successor to the cloud and instead see it as an essential gateway to what will come next on a global level.

The marvel of edge computing is that it attracts all sorts of dreamers. It is a concept so distinct in thinking, yet incredibly complex in execution. 

No doubt, investments in edge computing are leading us towards an immersive and autonomous future. However, I am exhausted by the promises of seamlessly connected worlds, cars that drive themselves, and smart cities that are shamelessly touted as a show of marvel and wonder. 

The promise on the poster outside the theatre is pretty far from what is on stage. We may be able to achieve flying cars, but there is a lot to be addressed first.

It’s Time to Talk About Computing

We’ve become accustomed to accessing resources in highly available, centralized data centers for some time now. But how do we access computing resources that are outside these comfortable environments? As a developer or enterprise, how do I distribute my application across thousands of individual locations, and in dozens of geographies?

Applying the principle of ease of access pioneered by centralized computing to the broader, distributed resources around us will enable computing anywhere and everywhere. As a result, end devices will become disposable, free and—ultimately—worthless.

Personal-Computing-as-a-Service

The boundaries between our phones, watches, tablets, computers, and TVs are blurring. Machines, things, and humans are in constant communication, producing extraordinary amounts of data daily.

And I can envision a near future where paying $1,000 for an iPhone will become a thing of the distant past. Dummy devices will be able to pull an operating system from the nearby edge with infrastructure consumed in real-time, en masse: invisible, but highly available. 

Is it farfetched? Not really. Global infrastructure is in constant flux, and we are at the cusp of the next significant wave, a change that will pave the way for computing anywhere. Edge computing might be the trigger that ignites a structural metamorphosis to the global backbone of the internet—changing it for generations to come.  

The Old New World

Cloud services allowed developers to create centrally managed applications that are globally deployed to nearly every region of the planet. There has always been a border between cloud and telecommunications networks: cloud providers offered a set of technologies for developing applications for global deployment, while telecommunications networks provided access to these applications at the local level.

These boundaries are starting to blur, and the spectrum between the internet, public, private, and hybrid clouds, and telco networks is beginning to merge.

Communication carriers operate in multiple geographies and already manage vastly distributed resources: precisely the kind of infrastructure that could support a new generation of services delivered closer to the edge, opening up new commercial opportunities and an ideal environment for innovation.

This challenging move, however, is not as straightforward as it sounds, and many refer to it as the “Last Mile” challenge. The telcos that solve the challenge of last-mile delivery optimization have an opportunity to capture significant value in this new world. But capturing this market requires an intelligent and careful approach—one that involves getting more out of what we have, making the most out of what we build, and optimizing the network’s geographic distribution, wide-area presence, and local capabilities.

In a future where everything is connected, communication networks must deliver where the cloud falls short. 

Today, the telco edge is viewed by many as a completely separate realm from what we deploy in the cloud, and, to some extent, this is an accurate assessment given the specialized hardware and location-specific deployments characteristic of telco networks. These specialized systems are indeed what will make up telco edge computing initially. Internal virtualization initiatives that aim to transform the telco network will be a crucial driver of early edge computing adoption, much like Amazon built AWS to serve the needs of its e-commerce business. And while the telco edge is measured today by a location or a specific set of hardware, it is software that will take it to global scale and the mass market.

Hello Software, My New Friend

Figuring out how to best run workloads is not a new problem. We continuously strive to build new ways to run workloads almost anywhere, working to abstract away—and not have to worry about—what’s ticking away underneath.

Hardware is by its nature stable and permanent. The only way we can achieve the promise of the edge is by evolving how software interacts with physical hardware: infrastructure needs to be as agile and flexible as possible.

Parallels between network virtualization strategies and cloud-native distributed models must not be ignored. Through the numerous network virtualization, open architecture, and open interface efforts of the past few years, telcos are preparing the ideal environment to redefine third-party access to their infrastructure.

The growing adoption of Network Functions Virtualization (NFV) is freeing networks from being locked into monolithic machines, allowing them to run seamlessly over generic hardware, virtual machines, or even containers. This gives networks new capabilities to scale hardware independently of the network itself, and the flexibility to host both network functions and external applications in the same location, if not the same machine.

Many view edge computing as merely a compute resource. Yet hosting compute resources in a highly available network environment goes beyond offering mere compute power or storage capabilities to developers. 

New connectivity services will emerge from edge computing in the future, and the borders between the internet, commercial clouds, hybrid and private clouds, the core network, and edge computing (fixed and mobile) will blur over time. Developers can then design their services with specific filters in mind (compute, storage, bandwidth, latency, location, density, and so on). This seamless connection from public and private clouds to telco networks is what will give developers the fluidity to run workloads dynamically across a mixture of software, network, and cloud environments.
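To make those filters concrete, here is a minimal, hypothetical workload-placement sketch in Python. The site names, fields, and thresholds are all invented for illustration; no real platform API is implied.

```python
from dataclasses import dataclass

@dataclass
class Site:
    """A candidate deployment location, edge or cloud (illustrative)."""
    name: str
    region: str
    vcpus: int          # available compute
    latency_ms: float   # typical round-trip time to local users

def place(sites, min_vcpus, max_latency_ms, region=None):
    """Return sites satisfying the workload's compute, latency, and
    (optional) location filters, nearest-first."""
    ok = [s for s in sites
          if s.vcpus >= min_vcpus
          and s.latency_ms <= max_latency_ms
          and (region is None or s.region == region)]
    return sorted(ok, key=lambda s: s.latency_ms)

sites = [
    Site("core-cloud-eu", "eu", vcpus=512, latency_ms=45.0),
    Site("metro-edge-lon", "eu", vcpus=16, latency_ms=8.0),
    Site("cell-edge-lon", "eu", vcpus=4, latency_ms=2.0),
]

# A latency-sensitive workload: the cell site is too small, the core
# cloud is too far away, so the metro edge wins.
print([s.name for s in place(sites, min_vcpus=8, max_latency_ms=20.0, region="eu")])
# → ['metro-edge-lon']
```

A real scheduler would weigh bandwidth, density, and cost as well, but the shape of the decision—filter by constraints, then rank by proximity—stays the same.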

And software-defined network virtualization is an essential first step towards reaching that fluidity. By transforming the network into a widely distributed plane, we are creating hundreds of points in the network that are potentially ideal edge environments: a “cloudified” environment that connects and integrates with existing centralized cloud environments, with end-to-end operator networks becoming a natural extension of regional data centers.

Innovations in software that abstract away the need to manage disaggregated hardware are what will make the case for the edge in the coming years. However, there is no doubt that the edge is a mixture of network capabilities alongside different hardware and new specialized software stacks that can address this jungle of connectivity and servers—ultimately giving developers and enterprises the ability to run any workload anywhere (or everywhere!), and paving the way to the immersive, autonomous, and smart future we have all been promised.

From Republic to Empire

While the public clouds were able to scale globally by working independently, reaching global scale with the edge, and particularly the telco edge, requires a collaborative approach between all players. Telcos, no doubt, need to federate, agree, and collaborate on standards, APIs, and frameworks.

The first battle of the clouds is coming to an end. The likes of AWS, Azure, and GCP are now ruling in a triumvirate fashion, with AWS as Pompey. But we all know what happened to Pompey.

Edge computing is a stepping stone for the old world of the telco to merge with the new, giving birth to a new layer of infrastructure that will power our world for the next 20 years.

Whether it’s the telcos, the cloud folks, or all of the other players, it all goes back to the fundamental principle that global infrastructure is in constant flux. But what triggers this periodic transformation? Is it demand? Or is it people’s imagination? 

What will come first? Flying Cars or Cloud-Edge-Fog computing?

I might have the answer in 2035.


Based in London, Ori empowers developers and networks to build future applications through smart, immersive, and autonomous infrastructure.

What the Cutting Edge Looks Like Today


By Jim Davis

Principal Analyst, Edge Research Group

 

Since last year’s State of the Edge Report I’ve presented at conferences in Mexico City, Toronto, Austin, Chicago, Dallas, Las Vegas, Richmond, San Francisco, and San Jose. I’ve hosted over one hundred phone calls with people from the US, Japan, UK, Spain, and Brazil, and I’ve engaged in conversations with folks worldwide through other media. What I’ve learned: edge computing is happening, and it is enabling the transformation of companies.

Edge computing is appearing in many forms, and not all of them are what might be strictly termed edge computing deployments by the definitions in the Open Glossary of Edge Computing. But when viewed on a continuum, there is definitely a movement towards a new generation of edge computing.

Example: A company that manufactures equipment for oil and gas extraction has created a platform, built on industrial-strength Linux and a standard x86 4-core processor, to gather data from trucks used in fracking extraction of natural gas and petroleum. Customers operate a $20 million fleet of vehicles under strenuous environmental conditions—keeping them running costs another $9 million a year. Using an in-house edge computing platform to gather 88,000 readings per second from each truck, this company can remotely monitor the condition of their equipment and automatically create a maintenance alert that simultaneously triggers a product order at the factory.
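The monitor-then-order loop described above can be sketched in a few lines of Python. Everything here is invented for illustration—the sensor limit, field names, and part numbers are placeholders, not the manufacturer's actual system:

```python
from collections import deque

VIBRATION_LIMIT = 12.0  # hypothetical alert threshold, in g

def monitor(readings, window=5):
    """Watch a stream of (truck_id, value) readings; when the rolling
    average crosses the limit, emit a maintenance alert that doubles
    as a factory parts order (structures invented for illustration)."""
    recent = deque(maxlen=window)
    for truck_id, value in readings:
        recent.append(value)
        if len(recent) == window and sum(recent) / window > VIBRATION_LIMIT:
            yield {"alert": "pump_wear", "truck": truck_id,
                   "order": {"part": "fluid-end-assembly", "qty": 1}}
            recent.clear()  # reset after ordering the replacement part

# Vibration trending upward on one truck triggers a single alert/order.
stream = [("T-07", v) for v in [9.8, 10.1, 11.9, 13.2, 14.0, 14.5]]
print(list(monitor(stream)))
```

At 88,000 readings per second the real pipeline would aggregate on the truck itself and ship summaries upstream, but the decision logic—rolling window, threshold, automated order—follows this shape.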

Many industries, especially those engaged in IIoT (industrial IoT) initiatives, don’t view themselves as using edge computing. Strictly speaking, they are often correct. There are also instances where equipment in a factory is instrumented, but the data is siloed (sometimes formatted in old or proprietary communications protocols). Initial steps in an edge computing strategy might involve extracting data and translating it into a standard format using standard server technology that’s deployed in a factory. Indeed, there are examples of companies looking to retrofit machine learning-assisted vision to “read” analog gauges on machinery and stream data back for monitoring and analytics. That wasn’t possible before.
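That first extract-and-translate step can be sketched as follows. This is a hedged illustration: the binary record layout, sensor IDs, and field names are all invented, not any real factory protocol.

```python
import json
import struct

# Hypothetical proprietary record layout: little-endian
# uint32 sensor id, uint32 unix timestamp, float32 reading.
RECORD = struct.Struct("<IIf")

# Invented mapping from legacy sensor IDs to standard metric names/units.
UNITS = {1: ("spindle_rpm", "rpm"), 2: ("coolant_temp", "celsius")}

def translate(raw: bytes) -> list:
    """Unpack legacy binary records and emit a standard,
    self-describing format suitable for downstream analytics."""
    out = []
    for offset in range(0, len(raw), RECORD.size):
        sensor_id, ts, value = RECORD.unpack_from(raw, offset)
        metric, unit = UNITS.get(sensor_id, ("sensor_%d" % sensor_id, "unknown"))
        out.append({"metric": metric, "unit": unit,
                    "timestamp": ts, "value": round(value, 3)})
    return out

# Two siloed readings become portable JSON at the factory edge server.
raw = RECORD.pack(1, 1700000000, 1480.5) + RECORD.pack(2, 1700000000, 71.25)
print(json.dumps(translate(raw), indent=2))
```

Once the data is in a common format, the same edge server can stream it to monitoring and analytics systems that the proprietary protocol locked out.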

 

Edge, AI are intertwined with digital transformation

Whether systems are retrofitted or standard, companies are finding that it isn’t easy to extract actionable insights from new data streams, which leads to the observation that edge computing, AI/ML technologies, and digital transformation efforts are all intertwined. This has been particularly apparent when discussing IIoT case studies in industries such as manufacturing.

Opening up new data streams can lead to many different levels of change, including optimization or wholesale reinvention of a manufacturing process and creation of new business models.

Example: Returning to the previous example of the manufacturer of equipment for oil and gas companies: drill pipes extending down from offshore drilling platforms to the sea bed are obviously critical components, with some 2.5 million pounds of load on the pipe and collar weld (where pipes are secured together), making maintenance essential. In the past, rig owners would remove the pipe and perform maintenance at scheduled intervals. The problem is that, because of imperfections in the welding process (among other issues), the maintenance process itself often actually increases the chances of equipment failure.

By enabling monitoring of the conditions under which the equipment is actually being used, and applying ML models based on factory testing of materials and experience with previous equipment failures, the manufacturer can predict with a high degree of accuracy when the pipe needs repairs. The customers have been able to keep equipment in production for upwards of twice as long before conducting repairs. In short, they are generating more revenue in between maintenance cycles, and the manufacturer is now selling a new predictive maintenance service to the customer.  
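The manufacturer's actual ML model is trained on factory materials testing and failure history, but the contrast between scheduled and condition-based maintenance can be sketched with a toy linear wear model. All numbers below are illustrative, not from the case study:

```python
def days_to_limit(wear_um_per_day: int, limit_um: int) -> int:
    """Whole days until measured weld wear reaches the repair limit
    (a deliberately simple linear stand-in for the real ML model)."""
    return limit_um // wear_um_per_day

# Scheduled maintenance must assume worst-case wear; condition-based
# monitoring uses the wear rate actually measured on this rig.
scheduled = days_to_limit(wear_um_per_day=50, limit_um=10_000)  # worst case
observed  = days_to_limit(wear_um_per_day=20, limit_um=10_000)  # measured
print(scheduled, observed)  # → 200 500
```

When the measured wear rate is well below the worst case, the interval between repairs can more than double—the same effect the manufacturer's customers saw—while a rig wearing faster than expected would be flagged early instead of failing between scheduled services.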

Example: a cement manufacturing company looks to apply AI to its manufacturing process. Cement seems simple on the outside, but that belies the complex chemistry required to produce the consistent quality needed for building bridges and high-rise buildings. The company has plenty of sensor data from existing systems, as well as large data sets for training AI algorithms. 

After months of training (with hands-on assistance from a technology specialist), the company sees energy savings of 2% to 5% and a yield increase of 2% to 7%, along with reduced maintenance and system downtime. In a multi-billion dollar business, these results are going to impact the bottom line as the company rolls out the technology to all its plants around the world. Looking ahead, the company will gradually move towards leveraging AI to continuously manage all elements of the production process—something done manually in 10-minute intervals before—and enable an autopilot mode for the production system (i.e., operation without human supervision).

 

Impact: Power and Cooling for the Edge (and Core) Cloud

Applying edge computing and AI to industry offers huge potential. Simply looking at the physical infrastructure, it’s clear that having datacenters closer to data sources can help solve the networking problem. But this also raises the issue of powering and cooling the systems that are driving the business insights. Whether processing data in a core cloud or edge cloud, enterprises need to take into account the cost of power. Make no mistake: AI, ML, and other data processing workloads are power-hungry. 

Chips are already drawing in the range of 200-plus watts, and next-generation Intel Xeon chips are estimated to draw as much as 330 watts. How many chips are needed for AI workloads? Some supercomputer applications use tens of thousands of processors. Even dividing that workload among tens or hundreds of facilities still translates into significant power and cooling requirements.
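Rough arithmetic makes the point. In this back-of-the-envelope sketch, the chip count, PUE (cooling/overhead multiplier), and facility count are assumed values, not measurements:

```python
# Back-of-the-envelope power budget for spreading an AI workload
# across edge facilities (illustrative numbers only).
chips = 20_000          # assumed processor count for a large AI job
watts_per_chip = 330    # next-generation Xeon estimate cited above
pue = 1.5               # assumed power usage effectiveness (cooling/overhead)
facilities = 100        # assumed edge sites sharing the workload evenly

it_load_kw = chips * watts_per_chip / 1000   # raw IT load in kW
per_site_kw = it_load_kw * pue / facilities  # total draw per edge site
print(it_load_kw, per_site_kw)  # → 6600.0 99.0
```

Even split a hundred ways, each site needs on the order of 100 kW of power and cooling—far beyond what a typical cell-site cabinet or street-side enclosure provides today.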

Even some well-developed markets sometimes lack adequate grid power for data centers, such as regions of Europe and Asia. On-site power generation and other considerations will factor into sizing an edge datacenter—and determining whether it’s economically feasible at all.

All told, the impact of edge computing, IIoT data, and the use of AI/ML will focus more attention on the development of “right-sized” data centers. What size will they be? How many will be deployed in a given metro area? The challenge vendors in edge computing will face over the next 12-18 months will be to help customers build a new equation that balances location, energy, and connectivity for facilities that will accommodate the changing demands of enterprise workloads.


Based in Fresno, California, Edge Research Group provides market research, strategic advisory, and content marketing services for technology firms.