Building for the Edge

When we consider “the Edge,” it can be easy to forget that the edge is you. It’s your data. It’s your phone, your laptop, your wireless headphones, your smart TV, your browser tab.

Edge is not just remote environments like a 5G tower or a cargo ship in the middle of the Pacific Ocean or a train barreling through the Yukon backcountry (but it’s those, too). Businesses make edge-related decisions when choosing a datacenter - opting for a hosting location 100 miles closer to your home because it offers lower latency. Picking a hosting location for your Fortnite game? That’s an edge decision, too.

It doesn’t matter if we’re talking about applications providing spatial audio in your wireless headphones or applications predicting engine failure on a cargo train: today, these features require your data to make a trip up to a datacenter (for analysis and decision-making) and back (for implementation). Whether you’re streaming movies at home or monitoring 34,000 sensors on a ship, applications running at the edge need to make decisions with only locally available information.

Proliferation of software at the edge accounts for the exploding demand for edge compute. At the same time, the increasingly sophisticated decision-making of those applications means latency and compute power at the edge are now first-class concerns every business needs to tackle. But not all data is useful or actionable. Businesses, utilities, and infrastructure generate orders of magnitude more data than they can handle - in scope and gravity - around the clock.

For example, a transportation company with thousands of sensors linked to a rail system generates far more data at the edge than any uplink could possibly handle. Applications monitoring train components need to make fast decisions - is there an imminent failure? Is it still safe to operate the train? Is there a dangerous obstacle on the tracks? But if the data needs to travel to and from a datacenter for centralized processing, then the train and its operators can’t make informed decisions on the spot. If a train relies on photo sensor data to identify a person on the tracks two miles down the road, it cannot wait on that decision’s round trip to a datacenter several states away before applying the brakes.
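To put rough numbers on the stakes, here is a back-of-the-envelope sketch. The speed and latencies below are illustrative assumptions, not measurements - the point is what happens once a flaky uplink forces timeouts and retries:

```typescript
// Back-of-the-envelope sketch. All numbers are illustrative assumptions.
const trainSpeedMph = 70;
const metersPerSecond = trainSpeedMph * 0.44704; // ~31.3 m/s

// Assumed time-to-decision for each scenario, in seconds.
const scenarios: Record<string, number> = {
  "onboard decision": 0.001,        // local inference on the train itself
  "datacenter round trip": 0.25,    // healthy link, several states away
  "datacenter, degraded link": 30,  // timeouts and retries on a flaky uplink
};

for (const [scenario, seconds] of Object.entries(scenarios)) {
  const meters = metersPerSecond * seconds;
  console.log(`${scenario}: ~${meters.toFixed(1)} m traveled before the brakes can engage`);
}
```

On a healthy link the round trip costs meters; on a degraded one, the train covers the better part of a kilometer while the decision is still in flight.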

Current best-in-class architectures often store all data locally and upload it when the connection supports doing so. Centralized batch processing on the aggregated data picks up from there. But that doesn’t enable real-time decision-making at the edge at critical moments. Edge computing means we can’t assume the bandwidth or network reliability to send all our data back to a centralized database.
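As a concrete (if simplified) sketch of that store-and-forward pattern: the snippet below always writes locally first, so decisions never wait on the network, and drains the buffer opportunistically. The `uplinkAvailable` and `upload` functions are hypothetical stand-ins for whatever transport a real system uses:

```typescript
// Minimal store-and-forward sketch: readings are always written locally
// first, then drained opportunistically when the uplink allows it.

type Reading = { sensorId: string; value: number; timestamp: number };

const localBuffer: Reading[] = [];

function record(reading: Reading): void {
  // The local write always succeeds; decisions can be made against this
  // data immediately, without waiting on the network.
  localBuffer.push(reading);
}

async function drainWhenConnected(
  uplinkAvailable: () => boolean,              // hypothetical connectivity check
  upload: (batch: Reading[]) => Promise<void>  // hypothetical transport call
): Promise<void> {
  if (!uplinkAvailable() || localBuffer.length === 0) return;
  const batch = localBuffer.splice(0, localBuffer.length);
  try {
    await upload(batch); // centralized batch processing picks up from here
  } catch {
    localBuffer.unshift(...batch); // connection dropped mid-upload: keep the data
  }
}
```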

There are more centralized datacenters than ever. But centralized computing environments fail when low latency is critical. Even locally oriented edge compute locations on 5G towers or CDN points-of-presence (PoPs) will always be slower than what an application can do locally.

Even worse: If your application relies on a centralized database to confirm the data has been stored and replicated, then latency deteriorates further still.

And if that database isn’t available? Your application is toast.

How do we avoid this trap? By changing our perspective.

Historically, cloud-based software providers prioritized centralized decision-making driven by centralized analytics. Providers cared less about giving the user the snappiest or most offline-capable experience. Reliability - ensuring that an application running on your satellite, car, or mobile phone functions even when disconnected - evaporated as a principal concern.

For example, a Kafka endpoint on AWS will not always be available to your application, even if the device running your application is always connected to the Internet. Internet service providers (ISPs) target 99.5% availability, which equates to 3.65 hours of downtime per month. And that’s just the availability of the underlying connectivity, at the link level. The rest of the centralized services your application depends on will have their own downtime, meaning more hours (or days) of lost productivity and revenue, and an army of frustrated users.
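The downtime figure falls straight out of the availability target (using a 730-hour average month), and it compounds as soon as your application depends on more than one service:

```typescript
// Downtime implied by an availability target, assuming a 730-hour month.
const hoursPerMonth = 730;

function downtimeHours(availability: number): number {
  return (1 - availability) * hoursPerMonth;
}

console.log(downtimeHours(0.995).toFixed(2)); // 3.65 hours/month at the link level

// Dependencies compound: an app that needs its ISP link AND a (hypothetical)
// 99.9%-available upstream service is only up when both are up.
const combined = 0.995 * 0.999;
console.log(downtimeHours(combined).toFixed(2)); // ~4.38 hours/month
```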

The edge needs strategies around computing that reflect and mitigate these constraints. Put simply, applications need to work offline, using local data. When reconnected, those applications need to sync a minimum of data, deterministically converge on a single state, and do so with hyper-efficiency.

In other words, compute must reside at the source of the data.
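One well-known way to get the deterministic convergence described above is a CRDT-style merge. The sketch below is a minimal last-writer-wins register - a deliberate simplification of real sync protocols, shown only to illustrate the property that replicas agree no matter what order they merge in:

```typescript
// Minimal last-writer-wins (LWW) register: a CRDT-style structure whose
// merge is commutative, associative, and idempotent, so any two replicas
// that exchange state deterministically converge on the same value.

type LwwRegister<T> = { value: T; timestamp: number; nodeId: string };

function merge<T>(a: LwwRegister<T>, b: LwwRegister<T>): LwwRegister<T> {
  if (a.timestamp !== b.timestamp) {
    return a.timestamp > b.timestamp ? a : b;
  }
  // Tie-break on nodeId so both replicas pick the same winner.
  return a.nodeId > b.nodeId ? a : b;
}

// Two devices write offline, then sync: merge order doesn't matter.
const phone: LwwRegister<string> = { value: "door open", timestamp: 5, nodeId: "phone" };
const sensor: LwwRegister<string> = { value: "door closed", timestamp: 7, nodeId: "sensor" };

console.log(merge(phone, sensor).value); // "door closed"
console.log(merge(sensor, phone).value); // "door closed" - same result either way
```

Because the merge is deterministic, replicas can sync in any order, over any transport, after any amount of time offline, and still end up in the same state.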

The market already reflects this trend. 5G providers continue to build out localized compute at 5G endpoints/towers, giving application providers the ability to run right where your cell phone connects. Companies are dropping in containerized edge PoPs around the country because our new reality is that compute must be moved closer to the point of data origination and consumption.

We believe the trend toward more PoPs nearer to users is an important part of the next generation of applications. However, a tremendous amount of latent compute already sits idle in existing edge devices. A phone, a laptop, a gaming console, a browser tab - most spend their lives running dramatically below their total computing capacity.

Considered in conjunction with today’s global chip shortage, this ocean of untapped compute appears doubly attractive. The CEO of IBM, Arvind Krishna, recently predicted many of these supply chains won’t recover until 2024. Organizations stand to gain significantly in this environment by diversifying their portfolio of available compute targets.

Our mission at Mycelial is to give organizations a competitive advantage by leveraging the multiplicity of compute already at their disposal, while maintaining the simplicity of developing software for a single device.
