We are told that low latency and imagination are the only prerequisites for building tomorrow's edge applications. Tragically, this is an incomplete picture and a false hope.
Editor’s Note: This is a guest post from an industry expert. The State of the Edge blog welcomes diverse opinions from industry practitioners, analysts, and researchers, highlighting thought leadership in all areas of edge computing and adjacent technologies. If you’d like to propose an article, please see our Submission Guidelines.
In order for robust, world-changing edge native applications to emerge, we must first solve the very thorny problem of bringing stateful data to the edge. Without stateful data, the edge is doomed to be nothing more than a place to execute stateless code that routes requests, redirects traffic or performs simple local calculations via serverless functions. This would be the technological equivalent of Leonard Shelby in Christopher Nolan's excellent movie Memento. Like Shelby, these edge applications would be incapable of remembering anything of significance, forced instead to constantly look up state somewhere else (e.g., the centralized cloud) for anything beyond the most basic services.
Edge computing is a distributed data problem, not merely a distributed compute problem. Its full power cannot be realized by simply spinning up stateless compute on the edge. Conventional enterprise-grade database systems cannot deliver geo-distributed databases to the edge while also providing strong consistency guarantees. Conventional approaches fail at large, globally distributed scale because our current database architectures are built around the fundamental tenet of centralizing the coordination of state change and data.
If you can take your data (state) and make giant piles of it in one data center, it's easier to do useful things with it; but if you have little bits of it spread everywhere, you have the horrendous problem of keeping everything consistent and coordinated across all the locations in order to achieve idempotent computing.
Edge compute is easy when it's stateless or when state is local, such as when a device maintains its own state or the state is trivially partitionable. Take, for example, an IoT device or a mobile phone app that manages only its own state. Similarly, stateful computing is easy when everything is centralized.
However, when you want to perform stateful computing at any of the many places called edge—the network edge, the infrastructure edge or even the device at the other end of the network—edge computing becomes difficult. How do you manage and coordinate state across a set of edge locations or nodes and synchronize data with consistency guarantees? Without consistency guarantees, applications, devices and users see different versions of data, which can lead to unreliable applications, data corruption and data loss. Idempotent computing principles are violated and the edge is dead on arrival.
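To make that failure mode concrete, here is a minimal sketch in Python (the location names, keys and values are hypothetical) of what happens when two edge locations accept writes to the same record with no coordination and no consistency guarantees: readers routed to different locations see different versions of the data.

```python
# Minimal sketch of replica divergence without consistency guarantees:
# two edge locations accept writes to the same key while unable to
# coordinate, and readers see conflicting state.

class EdgeReplica:
    def __init__(self, location):
        self.location = location
        self.store = {}          # key -> value, no versioning at all

    def write(self, key, value):
        self.store[key] = value  # accepted locally, no coordination

    def read(self, key):
        return self.store.get(key)

frankfurt = EdgeReplica("frankfurt")
tokyo = EdgeReplica("tokyo")

# Concurrent writes to the same key at two edges during a partition.
frankfurt.write("cart:42", {"items": 3})
tokyo.write("cart:42", {"items": 5})

# Users routed to different edges now see different "truths".
print(frankfurt.read("cart:42"))  # {'items': 3}
print(tokyo.read("cart:42"))      # {'items': 5}
```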
Centralized database architectures do not generalize to the edge
For the last 20 years, the world has been industrializing the client-server paradigm in giant, centralized hyperscale data centers. And within these clouds, efforts are being made to super-size the database to run globally, across intercity and intercontinental distances. By relaxing data consistency and quality guarantees, it is hoped that the current generation of distributed databases (distributed within a data center) will somehow overcome the laws of physics governing space and time and become the geo-distributed, multi-master databases that edge computing needs.
Distributed databases that scale out within a data center do not cleanly generalize to scaling out across geography; they break down under the weight of their design assumptions. Traditional distributed databases depend on the following design assumptions:
- A reliable, data-center-class local area network
- Low latency
- High availability
- Consistent latency & jitter behavior
- Very few (or no) network splits
- Accurate timekeeping using physical clocks and network time protocol (NTP)
  - NTP is good enough for use cases where data ordering is handled across servers within the same rack or data center (NTP slippage is < 1 ms); see the sketch after this list.
  - Consensus mechanisms are good enough due to the low latencies and high availability of the data-center-class LAN.
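To illustrate why that sub-millisecond figure matters, here is a small sketch of events ordered by their wall-clock timestamps (the skew numbers are illustrative assumptions, not measurements): with sub-millisecond NTP skew inside a data center the reported order matches reality, but with the hundreds of milliseconds of skew typical across geo-distributed sites, a later event can appear to have happened first.

```python
# Minimal sketch of why "order by wall-clock timestamp" holds up inside a
# data center but not across geography. Skew values are illustrative only.

def observed_order(real_times_ms, skew_ms):
    """Return event ids sorted by the timestamp each node's clock reports."""
    stamped = [(t + skew, eid) for eid, (t, skew) in
               enumerate(zip(real_times_ms, skew_ms))]
    return [eid for _, eid in sorted(stamped)]

# Event 0 really happens 10 ms before event 1 (e.g., a write and a
# dependent follow-up write).
real_times = [1000, 1010]

# Same rack, NTP skew well under 1 ms: reported order matches reality.
print(observed_order(real_times, skew_ms=[0.3, -0.4]))   # [0, 1]

# Two continents, clocks disagree by hundreds of ms: the order inverts.
print(observed_order(real_times, skew_ms=[250, -180]))   # [1, 0]
```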
The design assumptions for a geo-distributed database are almost exactly the opposite:
- Unreliable wide area networks
- High and variable latency, especially across intercity and intercontinental distances
- Dynamic network behavior with topology changes and sporadic partitions
- Lossy timekeeping
  - Asymmetric routes cause intercity and intercontinental clock coordination challenges, resulting in slippage of hundreds of milliseconds across a set of geo-distributed time servers.
  - Atomic clocks may be used to overcome this problem but are prohibitively expensive and complex to operate.
- Consensus is too expensive and too slow when a large number of participants must coordinate over the internet
  - Consensus is brittle in that quorum must be centralized and highly available. If network splits (particularly asymmetric splits) occur, managing quorum and getting reliable consensus becomes very challenging.
  - Distant participants slow everyone down, as it takes more time for them to send and receive messages.
  - Adding more participants (i.e., edge locations) adds overhead and slows consensus down, because more participants need to vote (see the sketch after this list).
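The following back-of-the-envelope sketch (hypothetical latencies, not a real consensus implementation) shows why: a majority quorum commits only after the leader hears from the slowest member of the fastest majority, so spreading voters across continents pushes every write's latency up, and each additional voter raises the bar again.

```python
# Rough, illustrative model of why adding distant participants slows a
# majority quorum down. Round-trip times below are hypothetical.

def quorum_commit_latency_ms(follower_rtts_ms):
    """A leader commits once a majority of all voters (itself included)
    have acknowledged. Commit latency is roughly the round trip to the
    slowest member of the fastest majority."""
    n = len(follower_rtts_ms) + 1        # total voters = followers + leader
    majority = n // 2 + 1
    acks_needed = majority - 1            # the leader's own vote is free
    return sorted(follower_rtts_ms)[acks_needed - 1]

# Three replicas inside one region: commits in a couple of milliseconds.
print(quorum_commit_latency_ms([2, 3]))                # 2

# Five voters spread across continents: every write now pays a
# transcontinental round trip, and adding voters raises the bar again.
print(quorum_commit_latency_ms([80, 140, 220, 260]))   # 140
```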
Coordination is difficult because participants in a large, geographically distributed system need to agree on the temporal order in which events happen. Mechanisms like quorums are used in conventional distributed systems to implement such coordination. In geo-distributed systems, these coordination mechanisms become the constraining factor on how many participants can take part in, and perform complex behavior across, a network of coordinating nodes. For geo-distributed databases to support edge computing, a coordination-free approach is required, one that minimizes or even eliminates the need for coordination among participating actors.
For edge computing to become a reality, we need geo-distributed databases that can scale across hundreds of locations worldwide yet act in concert to provide a single coherent multi-master database. This in turn requires us to design systems that work on the internet with its unpredictable network topology, use some form of timekeeping that is not lossy, and avoid centralized forms of consensus yet still arrive at a shared version of truth in real time.
For stateful edge computing to be viable at scale and able to handle real-world workloads, edge locations need to work together in a way that is coordination-free and able to make forward progress independently even when network partitions occur.
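As one illustration of what coordination-free can look like (a generic example of a state-based CRDT, not a description of how any particular edge native database works), here is a grow-only counter in which every edge location updates locally, keeps making progress during a partition, and converges with its peers by merging state, with no voting required.

```python
# Minimal sketch of one coordination-free technique: a state-based
# grow-only counter (a simple CRDT). Location names are hypothetical.

class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}              # replica_id -> increments seen there

    def increment(self, n=1):
        # Local-only update: no quorum, no locks, no round trips.
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def merge(self, other):
        # Element-wise max is commutative, associative and idempotent, so
        # replicas can sync in any order, any number of times, and converge.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

    def value(self):
        return sum(self.counts.values())

# Two edge locations keep accepting writes while partitioned from each other.
sydney, paris = GCounter("sydney"), GCounter("paris")
sydney.increment(3)
paris.increment(5)

# When connectivity returns, they exchange state and agree without voting.
sydney.merge(paris)
paris.merge(sydney)
print(sydney.value(), paris.value())   # 8 8
```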
Edge native databases will unlock the promise and potential of edge computing
Edge native databases are geo-distributed, multi-master data platforms capable of supporting (in theory) an unbounded number of edge locations connected across the internet using a coordination-free approach. Additionally, these edge native databases will not require application designers and architects to re-architect or redesign their cloud applications in order to scale and serve millions of users with hyper-locality at the edge. Edge native databases will provide multi-region and multi-data-center orchestration of data and code without requiring developers to have any special knowledge of how to design, architect or build these databases.
Edge native databases are coming. When they arrive, the true power and promise of edge computing will be realized. And once that happens, it will become true that low latency and imagination will be the only prerequisites for building the applications of tomorrow.
Chetan Venkatesh and Durga Gokina are the founders of Macrometa Corporation, a Palo Alto, California-based company that has built the first edge native geo-distributed database and data services platform. Macrometa is in stealth (for now).
Opinions expressed in this article do not necessarily reflect the opinions of any persons or entities other than the authors.