
What the Cutting Edge Looks Like Today

Postcards from the Edge

By Jim Davis

Principal Analyst, Edge Research Group

 

Since last year’s State of the Edge Report, I’ve presented at conferences in Mexico City, Toronto, Austin, Chicago, Dallas, Las Vegas, Richmond, San Francisco, and San Jose. I’ve hosted over one hundred phone calls with people from the US, Japan, the UK, Spain, and Brazil, and I’ve talked with folks worldwide through other channels. What I’ve learned: edge computing is happening, and it is enabling the transformation of companies.

Edge computing is appearing in many forms, and not all of them are what might be strictly termed edge computing deployments by the definitions in the Open Glossary of Edge Computing. But when viewed on a continuum, there is definitely a movement towards a new generation of edge computing.

Example: A company that manufactures equipment for oil and gas extraction has built a platform on industrial-strength Linux and a standard four-core x86 processor to gather data from trucks used in the fracking extraction of natural gas and petroleum. Customers operate $20 million fleets of vehicles under strenuous environmental conditions, and keeping them running costs another $9 million a year. Using an in-house edge computing platform to gather 88,000 readings per second from each truck, the company can remotely monitor the condition of its equipment and automatically create a maintenance alert that simultaneously triggers a product order at the factory.
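To make that data flow concrete, here is a minimal sketch of how an on-truck edge node might turn high-rate sensor readings into a maintenance alert that also triggers a parts order. This is not the vendor’s actual platform; the sensor names, thresholds, and order_part() hook are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    truck_id: str
    sensor: str    # e.g., "pump_vibration"
    value: float   # in the sensor's native units

# Hypothetical limits; a real deployment would derive these from
# equipment specs and historical failure data.
LIMITS = {"pump_vibration": 7.5, "fluid_pressure_psi": 9000.0}

def order_part(truck_id: str, sensor: str) -> None:
    # Stand-in for the factory/ERP ordering call the article alludes to.
    print(f"order placed: replacement for {sensor} on {truck_id}")

def process(reading: Reading) -> None:
    """Runs locally on the truck's edge node, so only alerts
    (not 88,000 raw readings per second) cross the network."""
    limit = LIMITS.get(reading.sensor)
    if limit is not None and reading.value > limit:
        print(f"ALERT {reading.truck_id}: {reading.sensor}={reading.value}")
        order_part(reading.truck_id, reading.sensor)

process(Reading("truck-042", "pump_vibration", 8.1))
```

The key design point is that filtering happens at the edge: the truck streams alerts, not raw telemetry.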

Many industries, especially those engaged in IIoT (industrial IoT) initiatives, don’t view themselves as using edge computing. Strictly speaking, they are often correct. There are also instances where equipment in a factory is instrumented, but the data is siloed (sometimes formatted in old or proprietary communications protocols). Initial steps in an edge computing strategy might involve extracting data and translating it into a standard format using standard server technology that’s deployed in a factory. Indeed, there are examples of companies looking to retrofit machine learning-assisted vision to “read” analog gauges on machinery and stream data back for monitoring and analytics. That wasn’t possible before.
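As a sketch of that “extract and translate” first step, a factory-floor server might parse a legacy binary record and re-emit it in a standard format such as JSON. The record layout below is invented for illustration; real proprietary protocols vary widely.

```python
import json
import struct

# Invented proprietary record layout: 4-byte machine ID, big-endian
# float temperature, big-endian unsigned 32-bit RPM.
RECORD = struct.Struct(">4sfI")

def translate(raw: bytes) -> str:
    """Convert a legacy binary record into a standard JSON payload
    that downstream monitoring and analytics tools can consume."""
    machine_id, temp_c, rpm = RECORD.unpack(raw)
    return json.dumps({
        "machine_id": machine_id.decode("ascii"),
        "temperature_c": round(temp_c, 2),
        "rpm": rpm,
    })

sample = struct.pack(">4sfI", b"M107", 81.4, 1740)
print(translate(sample))  # {"machine_id": "M107", "temperature_c": 81.4, "rpm": 1740}
```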

 

Edge, AI are intertwined with digital transformation

Whether the instrumentation is retrofitted or standard, companies are finding that it isn’t easy to extract actionable insights from new data streams. This leads to the observation that edge computing, AI and ML technologies, and digital transformation efforts are all intertwined, which has been particularly apparent in IIoT case studies in industries such as manufacturing.

Opening up new data streams can lead to many different levels of change, including optimization or wholesale reinvention of a manufacturing process and creation of new business models.

Example: Returning to the manufacturer of equipment for oil and gas companies: drill pipes extending from offshore drilling platforms down to the seabed are critical components. Some 2.5 million pounds of load rest on the pipe-and-collar weld (where pipe sections are secured together), making maintenance essential. In the past, rig owners would remove the pipe and perform maintenance at scheduled intervals. The problem is that, because of imperfections in the welding process (among other issues), the maintenance process itself often increases the chances of equipment failure.

By monitoring the conditions under which the equipment is actually being used, and applying ML models built on factory testing of materials and experience with previous equipment failures, the manufacturer can predict with a high degree of accuracy when a pipe needs repairs. Customers have been able to keep equipment in production upwards of twice as long before conducting repairs. In short, they generate more revenue between maintenance cycles, and the manufacturer now sells a new predictive maintenance service to its customers.
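To illustrate the predictive-maintenance idea in miniature (the features, training data, and risk threshold below are invented; the manufacturer’s actual models aren’t public), one could train a classifier on operating conditions labeled with known failure outcomes, then score live readings against it:

```python
from sklearn.ensemble import RandomForestClassifier

# Invented training set: [load_klbs, vibration, hours_in_service],
# labeled 1 if the weld later needed repair, else 0.
X = [
    [2400, 3.1,  900], [2500, 6.8, 2100], [2300, 2.4,  400],
    [2550, 7.2, 2600], [2450, 5.9, 1800], [2350, 2.9,  700],
]
y = [0, 1, 0, 1, 1, 0]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score the current operating conditions reported by the rig's sensors.
current = [[2480, 6.1, 1950]]
risk = model.predict_proba(current)[0][1]
if risk > 0.5:  # hypothetical threshold
    print(f"schedule pipe inspection (failure risk ~{risk:.0%})")
```

In practice the training data would come from the factory materials testing and field failure history the article describes.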

Example: A cement manufacturing company looks to apply AI to its manufacturing process. Cement seems simple from the outside, but that simplicity belies the complex chemistry required to produce the consistent quality needed for bridges and high-rise buildings. The company has plenty of sensor data from existing systems, as well as large data sets for training AI algorithms.

After months of model training (with hands-on assistance from a technology specialist), the company is seeing energy savings of 2% to 5% and a yield increase of 2% to 7%, along with reduced maintenance and system downtime. In a multi-billion-dollar business, these results will impact the bottom line as the company rolls the technology out to its plants around the world. Looking ahead, the company will gradually move towards using AI to continuously manage all elements of the production process, something previously done manually at 10-minute intervals, and eventually run the production system in autopilot mode (i.e., without human supervision).
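A minimal sketch of that shift from periodic manual adjustment to continuous, model-driven control. The kiln variables and the corrective rule here are invented stand-ins for the company’s trained models:

```python
import random

def read_kiln_sensors() -> dict:
    # Stand-in for the plant's existing sensor feed.
    return {"temp_c": 1450 + random.uniform(-15, 15)}

def model_setpoint(state: dict) -> float:
    # Stand-in for a trained model; here, a simple corrective rule
    # that nudges the kiln back toward a 1450 C target.
    return 1450.0 - 0.5 * (state["temp_c"] - 1450.0)

def control_loop(cycles: int) -> None:
    """Before: an operator applied a correction every 10 minutes.
    After: this loop runs continuously, e.g., every few seconds."""
    for _ in range(cycles):
        state = read_kiln_sensors()
        setpoint = model_setpoint(state)
        print(f"temp={state['temp_c']:.1f}C -> setpoint={setpoint:.1f}C")

control_loop(cycles=3)
```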

 

Impact: Power and Cooling for the Edge (and Core) Cloud

Applying edge computing and AI to industry offers huge potential. Looking simply at the physical infrastructure, it’s clear that placing data centers closer to data sources can help solve the networking problem. But it also raises the issue of powering and cooling the systems that drive the business insights. Whether processing data in a core cloud or an edge cloud, enterprises need to account for the cost of power. Make no mistake: AI, ML, and other data processing workloads are power-hungry.

Server chips already draw 200-plus watts, and next-generation Intel Xeon chips are estimated to draw as much as 330 watts. How many chips does an AI workload need? Some supercomputer applications use tens of thousands of processors. Even dividing such a workload among tens or hundreds of facilities still translates into significant power and cooling requirements at each one.
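A quick back-of-the-envelope calculation, using illustrative numbers only (the chip count, wattage split, and PUE are assumptions), shows the scale involved:

```python
chips = 10_000          # "tens of thousands of processors"
watts_per_chip = 330    # projected next-generation Xeon draw
pue = 1.5               # assumed power usage effectiveness (cooling overhead)

total_mw = chips * watts_per_chip * pue / 1e6  # ~4.95 MW total
for sites in (10, 100):
    print(f"{sites} sites: ~{total_mw / sites * 1000:.0f} kW each")
# 10 sites: ~495 kW each; 100 sites: ~50 kW each. Either way, each
# site needs far more power than a typical edge cabinet provides.
```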

Even some well-developed markets, including regions of Europe and Asia, sometimes lack adequate grid power for data centers. On-site power generation and other considerations will factor into sizing an edge data center, and into determining whether it’s economically feasible at all.

All told, the impact of edge computing, IIoT data, and AI/ML will focus more attention on the development of “right-sized” data centers. What size will they be? How many will be deployed in a given metro area? The challenge for edge computing vendors over the next 12 to 18 months will be to help customers build a new equation, one that weighs location, energy, and connectivity for facilities that can accommodate the changing demands of enterprise workloads.


Based in Fresno, California, Edge Research Group provides market research, strategic advisory, and content marketing services for technology firms.
