You may well have heard of edge computing already. Many organizations use edge computing in their day-to-day business. Before we dive into the possibilities and challenges of edge computing, it’s good to know what edge computing really is and what you need to know about it.
Until standardization and reference architectures are established, there is usually a lot of debate about the definition of any new technology. Edge computing is no exception. The definition of edge computing depends on who you talk to, and it also depends on your own perspective. From the definitions I found on the internet, I chose the one from Gartner:
“Edge computing is part of a distributed computing topology where information processing is located close to the edge, where things and people produce or consume that information”.
In addition to “edge computing”, the term “the edge” itself leads to discussion. Some people argue that the edge is any location except the public cloud. ZDNet argues that the edge is the location on the planet where processors deliver features to customers most efficiently and most cost-effectively. Smarak Bhuyan from Google uses this definition:
“An Edge location is a computing enclosure/ space/ facility geographically dispersed to be physically closer to the point of origin of data or a user base. In other words, for an Edge to exist there must be a hub or a core; therefore, dispersion of computing to the periphery would qualify as ‘Edge computing’ and the physical enclosure/ space/ facility can be defined as the ‘Edge facility’.”
Edge computing explained
So what exactly is edge computing? In short, edge computing brings compute power, storage and applications closer to where data is generated, processed and consumed (think of end users, devices and facilities). This is the reverse of the trend we see in cloud computing, which is all about moving compute power, storage, applications, etc. to a centralized location (the cloud) instead of hosting all of this in private datacenters.
According to some experts in the field, edge computing is a form of distributed computing like we saw in the past, but with a twist. The twist comes from the use cases that benefit from edge computing. A major driver for its (upcoming) popularity lies in the field of the Internet of Things and big data. More on that later.
Edge computing also solves a problem we saw in the past: latency. The closer you move the processing power to where your end users or devices are, the faster the data arrives at the desired location. Mission-critical applications that require very low latency and very reliable connections benefit from this, and it enables a whole range of new use cases. Those use cases did not exist before, and that is the biggest difference compared to the “distributed computing” we know from the past.
One of the major costs for organizations that utilize the cloud is data transfer. In the case of serverless, every call to a function in the cloud costs money. Imagine what happens to your cloud bill when you have a fleet of autonomous vehicles that send several thousand of these requests to your (cloud native) services every second. It would be much cheaper to process all of this data closer to the location where it is really used. This way we need fewer requests to the cloud native services, thus saving costs.
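The cost argument can be made concrete with a back-of-envelope calculation. The fleet size, request rate and per-million price below are purely illustrative assumptions (real serverless pricing varies by provider), as is the assumed 99% reduction from edge pre-aggregation:

```python
# Illustrative estimate of serverless invocation costs, with and without
# edge pre-aggregation. All numbers are assumptions, not real pricing.
REQUESTS_PER_SECOND = 2000   # assumed telemetry rate for a vehicle fleet
PRICE_PER_MILLION = 0.20     # assumed serverless price in USD per 1M calls
SECONDS_PER_DAY = 86_400

def daily_cost(requests_per_second: float, price_per_million: float) -> float:
    """Daily invocation cost if every request hits a cloud function."""
    requests = requests_per_second * SECONDS_PER_DAY
    return requests / 1_000_000 * price_per_million

cloud_only = daily_cost(REQUESTS_PER_SECOND, PRICE_PER_MILLION)
# Assume edge nodes aggregate telemetry so only 1% of calls reach the cloud:
with_edge = daily_cost(REQUESTS_PER_SECOND * 0.01, PRICE_PER_MILLION)
print(f"cloud only: ${cloud_only:.2f}/day, with edge: ${with_edge:.2f}/day")
```

Even with modest per-request prices, the volume of device traffic is what drives the bill, which is why reducing the number of cloud round trips pays off.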
Autonomous vehicles need an answer “immediately” to make decisions without human intervention; they can’t wait for a delayed answer. Every millisecond counts in complex situations, let alone when a failing internet connection disrupts the entire communication flow. High latency ruins the entire business case of autonomous vehicles, since they are unable to operate reliably. Faster response times are guaranteed when the processing power sits in the car itself, or very close to it, and latency is minimized.
When applications and systems do not solely depend on the cloud to process data, bandwidth problems and flaky connections can be avoided. This in turn increases the reliability of applications running in remote locations and helps to avoid both planned and unplanned downtime. Data processing starts at the location where it is needed most, and only the data that needs to be stored in the cloud is sent back to it, in a more efficient way. Less storage space is needed for the requests, thus further reducing the cloud bill.
Examples & use cases
The example of autonomous vehicles highlights the benefits of edge computing from a network/latency perspective. But there is more. IoT devices like smart cameras, fridges, watches, speakers, etc. are equipped with many sensors: temperature sensors, motion detectors, HD cameras and so on. These sensors generate a lot of data that needs to be processed, and small devices like these often lack the compute power and the network connection to do so.
An example: have you ever thought of your security camera processing all of last week’s recordings by itself? Nope. Those devices rely on compute power close to their current location to process this data.
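To make the camera example concrete, here is a minimal sketch of edge-side filtering: compare successive frames locally and only forward “motion” events upstream, instead of streaming every frame to the cloud. Frames are simplified to flat lists of pixel intensities, and the threshold value is an assumption:

```python
# Sketch of edge-side motion filtering for a security camera.
# Frames are simplified to flat lists of pixel intensities.

def frame_delta(prev: list, curr: list) -> float:
    """Mean absolute pixel difference between two frames."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def filter_motion(frames: list, threshold: float = 10.0) -> list:
    """Return indices of frames whose delta to the previous frame exceeds
    the threshold -- only these would be uploaded to the cloud."""
    events = []
    for i in range(1, len(frames)):
        if frame_delta(frames[i - 1], frames[i]) > threshold:
            events.append(i)
    return events

static = [0, 0, 0, 0]
motion = [90, 90, 90, 90]
print(filter_motion([static, static, motion, static]))  # → [2, 3]
```

A real camera would use a proper vision pipeline, but the principle is the same: most frames never leave the edge, so the upstream bandwidth and storage requirements shrink dramatically.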
More examples and applications
The list of applications grows every day. Consider the following examples and use cases to get a good impression:
- The handling of financial transactions at banks. As of now, banks put a lot of effort into investigating suspicious transactions through extensive research after the fact. It would be much more efficient to check for fraudulent transactions the moment the money is transferred. This would save the time and human labor needed to analyze all of the transactions afterwards. Speed is king when it comes to preventing further malicious actions.
- Logistics companies can use IoT and edge computing to track the movement of goods through their warehouses and transportation vehicles. Edge computing can help to create an optimal plan for delivering those goods to customers.
- Hospitals use edge computing to process data coming from nearby operating rooms. Surgeons can make better decisions to treat patients if the data is readily available.
- Content Delivery Networks (CDNs). A lot of cloud providers already offer these kinds of services. They offer so-called “edge locations” closer to their customers than their traditional datacenters, which allows them to serve website content faster and more reliably. An advantage for everyone visiting a website for which speed is an important factor. Many CDNs now offer serverless capabilities to provide compute close to where users are.
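The caching behavior behind a CDN edge location can be sketched in a few lines. The function names and the stubbed origin fetch below are illustrative, not any provider’s actual API:

```python
# Minimal sketch of CDN edge caching: serve from the local cache when
# possible, fall back to a (stubbed) origin fetch on a miss.

CACHE: dict = {}

def fetch_from_origin(path: str) -> str:
    """Stand-in for the slow round trip to the origin datacenter."""
    return f"content of {path}"

def handle_request(path: str) -> tuple:
    """Return (cache status, body) for a requested path."""
    if path in CACHE:
        return "HIT", CACHE[path]
    body = fetch_from_origin(path)
    CACHE[path] = body
    return "MISS", body

print(handle_request("/index.html"))  # first request misses, fills the cache
print(handle_request("/index.html"))  # second request is served locally
```

After the first visitor pays the latency cost of the origin round trip, every subsequent visitor near that edge location gets the cached copy, which is exactly the latency benefit the article describes.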
All of these examples sound great and ready to use. But edge computing is still in its infancy, with new possibilities arriving every day, but also many challenges. Consider the following challenges when you want to use it for your own applications:
- IoT devices generate a lot of data that needs to be processed. It’s a challenge to get the right connections and bandwidth to handle this information. Remember the example of the video camera: how do you (securely) get all of that data out and transfer it to the nearby compute facility? Also think of what to do with the useful data: where should it go, and how will it be transferred to its destination?
- Compared to a single application at just one location, the attack surface becomes much larger when running many applications on multiple devices. Those devices need to be secured using both existing and new security mechanisms.
- IT personnel need special training to operate and maintain those devices.
- From a business point of view, it is important to regularly check the use cases of the applications and devices. This is important to make sure the right business goals are met. In the world of DevOps, where there is constant change, this quickly becomes a time-consuming topic. For clarity: think of managing a bunch of microservices compared to managing a single monolith.
A special topic for edge computing and IoT is security. Let’s start with a simple example.
Keeping the default passwords on devices like security cameras is a painful mistake that can lead to a lot of problems. Consider the following security aspects, which are just the beginning:
- In most cases, IoT devices are not built with security in mind. Sometimes they do not even receive updates and upgrades to patch problems, or they ship with static passwords that are easy to guess and cannot be changed. This is completely different from the public cloud, where security is a major factor, whatever you do.
- The attack surface of an application running in a datacenter is much smaller than that of a lot of IoT devices working together to collect and analyze data. The number of connection endpoints is much higher.
- You have to carefully consider the network being used to capture and send sensitive data. Think of the example of the surgeons. Sometimes an operator controls and monitors the network. Sometimes it will be the internet, on which anything can happen. Just as with the cloud, zero trust is a good starting point.
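One small building block of a zero-trust setup can be sketched with Python’s standard library: each device signs its telemetry with a secret so the edge node can verify who sent a message and that it was not altered in transit. The key handling here is deliberately simplified (a real deployment would provision and rotate per-device keys):

```python
# Sketch: HMAC-signed device telemetry, verified at the edge before
# processing. Secret handling is simplified for illustration.
import hashlib
import hmac

SECRET = b"per-device-secret"  # assumption: provisioned per device in reality

def sign(payload: bytes, key: bytes = SECRET) -> str:
    """Sign a telemetry payload with an HMAC-SHA256 tag."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str, key: bytes = SECRET) -> bool:
    """Recompute the tag; compare_digest avoids leaking timing information."""
    return hmac.compare_digest(sign(payload, key), signature)

msg = b'{"temp": 21.5}'
sig = sign(msg)
print(verify(msg, sig))                 # True: untampered message
print(verify(b'{"temp": 99.9}', sig))   # False: payload was altered
```

Message signing alone is not zero trust, of course; it is one ingredient alongside device identity, network segmentation and continuous verification.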
And what about physical security? It’s not just one location you need to secure. Stolen and tampered-with physical devices on the edge are a problem for data security. How should you protect a small computing device mounted in a customer’s building? Physical security is a lot more challenging, since all locations and situations differ from each other.
Security by design as a best practice of shifting security left could be an answer to a lot of these challenges. But we are far from that scenario yet.
Edge computing provides a lot of benefits for use cases that demand ultra-low latency and a lot of compute power “on the spot”. Industries are seeing the trend and acting on it, while challenges and security issues remain very relevant. I hope this article gave you a good introduction to what edge computing is and provides a starting point for your organization.