Both public cloud and edge computing complement the traditional data center and help optimize resource utilization and management, but neither is ideal for every organization or scenario. The use case is an important factor in deciding which deployment model makes the most sense for a given workload, but before IT administrators can decide whether to use the cloud, the edge or, more likely, a mix of both, they need to understand the pros and cons of each.
Cloud Computing Basics
By now, most people are familiar with cloud computing: it creates a platform where resources (compute, storage, and network) can be applied flexibly to a workload in a highly virtualized manner, to best meet the needs of modern, dynamic workloads.
Advantages of cloud computing
The cloud has many advantages, including:
- Flexible and highly dynamic resource provisioning. With the right configuration, the cloud can flex the resources applied to a workload on demand. For example, a workload that experiences a sudden spike in its need for computing power can have additional virtual resources applied to it. When the demand spike ends, those resources can be released back into the pool, ready for the needs of the next workload.
- Highly virtualized. Again, in a well-designed cloud, platform virtualization gives workloads a high degree of portability. An instance of an application can be moved quickly from one part of the cloud to another if necessary, which improves availability and performance.
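The elastic provisioning described above can be sketched as a simple threshold-based scaling policy. This is a minimal illustration only; the function name, utilization thresholds, and instance limits are all hypothetical, not taken from any particular cloud provider's API.

```python
# Minimal sketch of threshold-based autoscaling (all policy values are
# illustrative assumptions, not a real provider's defaults).
def scale_decision(cpu_utilization: float, current_instances: int,
                   scale_up_at: float = 0.80, scale_down_at: float = 0.30,
                   min_instances: int = 1, max_instances: int = 10) -> int:
    """Return the new instance count for a workload based on CPU load."""
    if cpu_utilization > scale_up_at and current_instances < max_instances:
        return current_instances + 1   # demand spike: apply another virtual resource
    if cpu_utilization < scale_down_at and current_instances > min_instances:
        return current_instances - 1   # spike over: release a resource back to the pool
    return current_instances           # within the band: no change
```

Real cloud autoscalers add cooldown periods and averaging windows on top of a policy like this, precisely so that short load blips do not cause constant scaling churn.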
Disadvantages of cloud computing
The cloud also has its disadvantages, such as:
- There is still a resource limit. This is especially true in private clouds. No organization wants to run a platform where it pays for too much resource capacity. The costs associated with poor resource utilization include not just the power needed to keep everything running, but also the overall size of the data center, the required cooling, operating system and application licenses, maintenance, and so on. Public clouds most likely already handle hundreds of thousands of workloads, so they can manage this aspect better; private clouds that only manage tens or hundreds of workloads might not have the necessary headroom.
- Difficulty dealing with the more physical aspects of an environment. While resource virtualization is a powerful cloud advantage, it offers little help when a workload depends on the physical location or capabilities of a single asset.
- Public clouds are typically not connected to an organization's physical environment via high-bandwidth links. Using a centralized cloud platform for an environment is generally a good idea. However, if that cloud relies on a slow, lower-bandwidth connection to access data in that environment, significant issues such as data bottlenecks and packet collisions can occur when data loads are high.
The cloud in a modern IT environment
Objectively, the cloud is a great idea, but it faces some problems when looking at the changes in the IT environment over the last two years. The main one is the burgeoning field of IoT. Here, devices are spread across an organization's physical IT environment, carrying out a number of different tasks, from simple measurements to complex actions based on the requirements of, say, a production line or a smart building. IoT devices are data rich, as they tend to create a large amount of data. This data tends to be chatty: it is not a continuous stream, but is created over a series of events. Much of the data is useless; IoT devices tend to create data that often does little more than indicate that all is well, and this data does not need to traverse the network. Herein lies the dichotomy: trying to fully manage an IoT environment through a central cloud platform is not the optimal way to do things. The problem is that, for a cloud to handle all the data created by these IoT devices, all of that data has to traverse the network to where that cloud capacity resides. This adds latency to the data itself, along with what can be a big hit to overall cloud bandwidth, even with the resource flexibility that the cloud brings to the table. Luckily, there is another option: edge computing.
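The bandwidth cost of shipping every IoT reading to a central cloud can be estimated with back-of-envelope arithmetic. The figures below (device count, message size, reporting rate, share of useful readings) are purely illustrative assumptions chosen to make the point, not measurements from any real deployment.

```python
# Back-of-envelope sketch: WAN bandwidth consumed when every IoT reading
# goes straight to the cloud, versus dropping "all is well" readings first.
# All figures are illustrative assumptions.

devices = 5_000           # IoT devices on site
reading_bytes = 200       # size of one status message
readings_per_sec = 1.0    # each device reports once per second
useful_fraction = 0.02    # share of readings that are not "all is well"

raw_bps = devices * reading_bytes * readings_per_sec * 8   # bits/s to the cloud
filtered_bps = raw_bps * useful_fraction                   # after edge filtering

print(f"raw:      {raw_bps / 1e6:.1f} Mbit/s")
print(f"filtered: {filtered_bps / 1e6:.2f} Mbit/s")
```

With these assumed numbers, forwarding everything consumes about 8 Mbit/s of the uplink continuously, while filtering routine readings locally cuts that to a fraction, which is exactly the gap edge computing aims to close.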
Edge Computing Basics
The idea behind edge computing is to move all or part of the data manipulation and analysis out of the core of an IT platform to where the data is created, minimizing data movement, improving performance, and placing the necessary intelligence closer to the IoT devices themselves. A dedicated computing unit, known as an edge device, can be placed within the environment to capture, manipulate, and analyze data, and to decide what actions should be carried out where. Again, a good idea in theory, but problems arise in practice.
Advantages of edge computing
Advantages of edge computing include:
- Puts data intelligence closer to where it's needed. This improves response times, and with many IoT devices being actuators or other event-driven elements, some situations require responses as close to real time as possible.
- Minimizes data transfers over the wider network. This means more network bandwidth is available for more pressing data manipulation and analysis needs.
- Allows a more "onion skin" approach to data transfers. Here, the edge device can capture and analyze the data coming from a group of IoT devices and filter out obviously useless data. It can also check whether anything points to an immediate problem and send that data for further analysis to the centralized cloud, or to another, more capable edge device closer to the core.
Disadvantages of edge computing
However, edge computing also suffers from problems, including:
- Edge definition. Cloud platforms have already blurred the definition of where the edge of a computing platform and its components lie. In the case of edge computing, which deals with many more physical entities in the world of IoT devices, it might seem obvious that these devices represent the edge, since they are end nodes. However, how many IoT devices should an edge device be responsible for? And what types of IoT devices should a single edge device be responsible for?
- False positives and negatives. Since most IoT devices are relatively dumb, with little ability to analyze data or manage themselves, the edge device must take on this responsibility. However, edge devices must be cost-effective; an edge device that takes care of, say, 10 IoT devices cannot cost thousands of dollars. Edge devices may therefore have deficiencies in their capabilities, leading to poor data analysis and event initiation. Users should be careful in their choice of edge devices.