These days many people understand at least the basics of the cloud, including the benefits it offers. They may not, however, be able to answer the question “What is edge computing?” If that sounds like you, here is a quick guide to what you need to know about the edge.
Edge computing is the strategy of keeping data as close to its user as possible. Achieving this generally requires a distributed architecture, which may well mean a cloud. In fact, edge computing and the cloud often fit together very well.
The three main factors driving the shift to edge computing are privacy and security, speed, and cost.
There are two main reasons why edge computing has privacy and security benefits. The first is the fact that data is generally at its most vulnerable when it’s on the move. It therefore follows that the less data has to move, the less vulnerable it is.
With traditional, centralized architecture, data is collected at endpoints and then transferred to a central point, where it may be stored, processed, or both. When it is needed again, however, it often has to be returned to its originating point. Even when distances are short, data is still vulnerable to interception. As distances increase, so does this risk.
With edge computing, by contrast, data stays as close as possible to its originating point. This vastly reduces, or even eliminates, the risk of data being intercepted in transmission.
The second reason is linked to the first. National regulators are becoming increasingly strict about the way private entities store and process data belonging to their residents. They tend to look very favorably on keeping personal data at source (i.e., in its home country), which reduces legal risk.
The further data has to travel, the longer it takes to reach its destination. In the case of traditional, centralized architecture, a lot of data essentially has to make a round trip. This takes even longer than just going one way.
What’s more, the further data has to travel, the more vulnerable it is to disruptions along its journey. These disruptions are not necessarily security issues such as interception; realistically, they are much more likely to be routine technical problems such as network bottlenecks.
Again, edge computing reduces, if not eliminates, this issue. This is a benefit to all business sectors and a huge selling point for some. For example, anyone familiar with ecommerce knows the importance of website loading speeds, especially during the checkout process. Edge computing means that websites load as quickly as possible, which helps to minimize cart abandonment.
Most public cloud operators divide their infrastructure into zones. They will generally assign customers to a default (home) zone and allow them to access other zones. Usually, the costs in the default zone are lower than the costs in the other zones.
With edge computing, businesses can establish different operating entities with different default zones. They can then keep as much data as possible exclusively in these zones. This can significantly reduce their costs.
Possibly one of the main reasons why people struggle to understand edge computing is that it can be hard to define it separately from cloud computing. The easiest way to understand the difference is to view the edge as the why and the cloud as the how.
The edge puts computing resources as close as possible to where they are used. It is essentially the digital equivalent of the branch offices that handle local work.
Edge-based resources do, however, generally need to have some kind of link to a bigger network. They need it for much the same reasons as branch offices need a connection to the broader company. That connection is generally provided by cloud infrastructure. This may mean a single cloud or it may mean different clouds used for different purposes.
Fog computing is essentially a variation of the edge. It is used in situations where it is not currently possible to adopt a full edge approach. For example, it is often used in smart buildings, which have multiple sensors generating significant quantities of data.
It is simply not practical (or cost-effective) to equip these sensors with the resources needed to process and/or store this data. Instead, the data is sent to fog nodes. These are essentially mini-hubs located near the sensors, typically on the local network. These nodes handle as much as they can on-site but link to a cloud when a task exceeds their capacity.