As the physical world grows ever more digitized with data emanating from billions of devices and sensors, the advantages of edge computing meet organizations’ demands for better performance, reliability and security.
Questions of who controls or accesses what, when, how and under what parameters have traditionally been decided centrally, because devices and physical hardware historically could not be programmed to decide for themselves. Cloud computing and centralized data processing may be the predominant architectural paradigm of today, but there is a reason the world's largest cloud organizations heavily invest in the edge. As we push intelligence outwards, value chains will shift.
An economic shift
Shifting to more distributed computing frameworks, including hybrid, will affect business models for cloud service providers, as well as millions of business adopters across every industry. Organizations today generate about 10% of their data outside a traditional data center or cloud, but Gartner predicts that number will increase to 75% within just five years. That exponential data growth compounds the demand to actually use the data, both in real time and for longer-term strategic decision-making, a demand that, if met, can also head off financial losses down the road.
Kaleido Insights' research analysis found that the advantages of edge computing start deep in the stack but will spread far beyond it, driving the shift away from centralized data processing.
Transitioning to the edge reduces the data volume sent to the cloud. Transmitting data from an endpoint to the cloud doesn't come free; the costs include bandwidth, distance traveled, associated network hardware and the man-hours to configure and monitor it all, never mind securing data in transit. Less volume translates to less traffic, and less traffic to lower cost.
Processing data locally doesn’t just reduce the distance data must travel and associated transmission costs to be usable, it reduces connectivity constraints of difficult environments — such as oil rigs and remote farms — rendering them more viable for real-time and even mission-critical applications.
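The filter-at-the-source pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not a production design: the threshold, payload shape and field names are all assumptions chosen for the example.

```python
# Minimal sketch of edge-side filtering: process raw sensor readings
# locally and forward only a compact summary plus anomalies to the cloud.
from statistics import mean

ANOMALY_THRESHOLD = 90.0  # illustrative cutoff, e.g. temperature in Celsius

def summarize_at_edge(readings):
    """Reduce a window of raw readings to one small cloud-bound payload."""
    anomalies = [r for r in readings if r > ANOMALY_THRESHOLD]
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "max": max(readings),
        "anomalies": anomalies,  # only outliers travel upstream in full
    }

readings = [71.2, 70.8, 95.5, 71.0, 72.3, 96.1]
payload = summarize_at_edge(readings)
print(payload)
```

Six raw readings collapse into one payload, and only the two anomalous values are transmitted verbatim; the normal readings never leave the device, which is where the bandwidth and transit-security savings come from.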
Organizations will also see reduced data latency when they transmit less data. Today's latency suffices for media applications, but countless next-generation applications, such as autonomous vehicles and remote surgery, demand far lower latency.
Organizations waste an estimated $62 billion a year paying for data storage capacity they don't need, according to a recent study by Stanford researcher Jonathan Koomey. Computationally intensive data processing also incurs additional costs to cleanse and analyze data and to sort signal from noise.
Storing, managing and extracting value from that data consumes significant energy. In fact, data centers accounted for 2% of total U.S. energy consumption in 2014, according to a study by the U.S. government on data center energy use. Activating the edge introduces limits and logic to data transmission, and could tap local energy harvesting, such as light, kinetic, thermal or RF sources, to support low-power applications.
Data in transit and data in a centralized cloud invite risk, particularly for sensitive financial, biometric or proprietary data, whether from malicious actors or in the event of network failure. Considering that the average cost of a data breach was $3.86 million in 2018, plus the costs of downtime and tarnished reputations, mitigating the risks of sensitive centralized data has significant financial implications.
Shifting the customer experience
Arguably more interesting than the financial advantages of edge computing alone is how they could shift deeply ingrained business practices. Take, for example, the standard practice of sending personal data to the cloud, which most organizations do to extract personalized insights via computationally intensive analysis and machine learning. The resulting honeypots of sensitive data have exacerbated the privacy crisis, multiplied breaches of personally identifiable information, unsettled consumers and jeopardized GDPR compliance.
Consider the win-win when edge-level intelligence manages the ability to extract insight for personalization and avoids the vulnerability of a centralized cloud repository. Consumers’ personal data remains more secure without sacrificing functionality, and organizations can continue to deliver personalization and reduce associated risks, latency and costs. This same shift applies in other areas too, such as sharing compute with external entities, contributing to data marketplaces or configuring compliance into device performance.
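The edge-level personalization described above can be sketched as "raw events stay local, only derived interests travel." This is a hypothetical illustration under stated assumptions: the event categories, scoring scheme and what counts as a shareable profile are all invented for the example, not drawn from any particular product.

```python
# Hypothetical sketch: derive a personalization profile on-device so raw
# behavioral events never leave the edge; only a coarse, aggregate summary
# is shared with the cloud for personalization.
from collections import Counter

def local_profile(events, top_n=2):
    """Aggregate raw interaction events into a coarse interest profile."""
    counts = Counter(category for category, _timestamp in events)
    return [category for category, _ in counts.most_common(top_n)]

# Raw, timestamped events (the sensitive detail) stay on the device...
events = [
    ("sports", "2024-01-01T08:00"),
    ("finance", "2024-01-01T09:10"),
    ("sports", "2024-01-02T08:05"),
    ("cooking", "2024-01-02T19:30"),
    ("sports", "2024-01-03T08:02"),
]

# ...while only the derived top interests are sent upstream.
shared_with_cloud = local_profile(events)
print(shared_with_cloud)
```

The cloud still receives enough signal to personalize, but the timestamped behavioral trail that would have populated a centralized honeypot never leaves the device.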