In a distributed infrastructure, edge computing describes a scenario where client data is processed as close to its original source as possible, often at the fringes, or "edge," of a network. It involves moving some or all of an organization's resources, including both database storage and computing capacity, away from centralized data centers.
What makes edge computing useful?
In a traditional computing environment, incoming data is sent back to a central location for processing, forming a lengthy communication channel depending on where in the world the client resides. Latency increases with distance: light in fiber covers roughly 200 km per millisecond, so a client 5,000 km from a data center incurs at least 50 ms of round-trip delay before any processing begins. Performance can also suffer where bandwidth is limited, a problem that greater physical separation only exacerbates.
In this way, edge computing is for data processing what a content delivery network (CDN) is for request response time. It's about optimizing performance, extending service availability to users at or beyond the boundary of an availability zone, and ensuring that businesses can manage ever-expanding data pipelines, since any congestion can negatively impact how a system functions.
Edge computing therefore benefits both end users and the organizations that leverage collected data for business intelligence. Bringing more users into the fold expands the data pool, enabling deeper analysis and better business outcomes for teams that can cut through the noise. It acknowledges the growing number of connected devices (IoT included), which might otherwise overwhelm centralized data centers and company networks. It also recognizes that many users can't easily change location while accessing services day to day.
Edge computing may also be more compatible with modern, multi-cloud deployments, in which applications, data, and compute are spread across many public clouds. With users and services scattered so widely, the computing world needed a matching solution. Edge computing isn't even a new concept; the term emerged in the early 2000s. It's becoming increasingly indispensable, however.
What are the challenges of edge computing?
While edge computing has many advantages, establishing an edge network isn't a trivial process. Organizations may encounter the following while getting set up:
Cost constraints pertaining to setup, management, and later expansion
Complexity and planning for "known unknowns" (what will we need, etc.)
Grappling with how to best utilize limited resources
New data management challenges around ingestion, quantity, and quality (assessed through analysis)
Data security and device management concerns
Poor or inconsistent connectivity and how that intersects with edge network capabilities
For these reasons, setting up edge computing isn't always straightforward. It mirrors digital transformation and modernization in general: you need a strong understanding of your infrastructure requirements, existing weaknesses, and user base demographics, plus the technical knowledge to execute.
Organizations must clearly answer why they need edge computing and how it'll transform the business from all relevant angles. And then there's monitoring: how are we going to oversee all of this new infrastructure? How can we transition from maintaining an application to also becoming a managed service provider?
Does HAProxy offer edge computing?
Yes! HAProxy Edge is a globally distributed application delivery network (ADN) that provides a wide range of turnkey application delivery services at massive scale and with first-class observability. These services include advanced security, application and content acceleration, and load balancing. Alternatively, if you wish to set up your own edge computing infrastructure, HAProxy's ACL language allows for intelligent routing that decides what can be processed locally and what needs to be sent to the central servers, as in the sketch below.
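For illustration, here is a minimal sketch of that idea in HAProxy configuration syntax. The ACL names, matched paths, backend names, and server addresses are hypothetical placeholders; only the acl, use_backend, and server directives themselves are standard HAProxy configuration.

    frontend edge_in
        bind :80
        # Hypothetical ACL: static assets are safe to serve from the local edge node
        acl is_static path_end .css .js .png .jpg .svg
        # Hypothetical ACL: API calls need processing at the central data center
        acl is_api path_beg /api/

        # Keep cacheable content at the edge; send everything else upstream
        use_backend edge_servers if is_static
        use_backend central_servers if is_api
        default_backend central_servers

    backend edge_servers
        # Hypothetical service running on the edge node itself
        server edge1 127.0.0.1:8080 check

    backend central_servers
        # Hypothetical central data center address
        server core1 203.0.113.10:80 check

Here, requests for static assets are answered locally at the edge, while API traffic and everything else travels to the central data center. In practice, the ACLs would reflect whichever workloads your edge nodes can actually handle.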
To learn more about edge computing in HAProxy, check out our HAProxy Edge page or our HAProxy Edge documentation (login required).