I’m sure you all use voice assistants like Alexa or Siri. Suppose you ask Alexa, “What is the weather today?” Alexa handles your request in the cloud: a compressed file of your speech is sent to the cloud, where it is decompressed, your request is resolved by fetching the necessary information from a weather service, and the answer is returned to you. That is a lot of effort to learn the weather when you could have just looked outside! But jokes aside, it might be easy for one Alexa to transmit your request to the cloud through the network, but what about the thousands of other Alexas that are transmitting data at the same time? And what about the millions of other IoT devices that also send data to the cloud and receive data in return?
Well, this is the data age, and the volume of data generated is growing exponentially. IoT devices produce enormous amounts of data that is delivered to the cloud via the internet, and they also pull data back from the cloud. However, if the cloud’s physical storage and compute facilities are far from where the data is collected, transferring it is very costly: bandwidth charges add up quickly, and data latency increases. That’s where Edge Computing comes in!
What is Edge Computing?
Edge Computing moves computation and data storage closer to the edge of the network topology. But what is this edge, after all? That’s a little fuzzy! The edge may be the network edge where the device communicates with the internet, or where the local network containing the device communicates with the internet. Whatever the edge is, the important part of edge computing is that computation and data storage are geographically close to the devices where the data is created or consumed.
This is a better alternative to housing these facilities in a central geographical location that may be thousands of miles from where the data is produced or used. Edge Computing minimizes the data latency that can hurt an application’s performance, which matters even more for real-time data. It also processes and stores data locally rather than in central cloud-based locations, which means companies save money on data transmission as well.
Advantages of Edge Computing
Let’s check out some of the advantages of Edge Computing:
1. Decreased Latency
Edge computing reduces latency for devices because data is processed and stored close to the device that generates it, not in a faraway data center. Let’s reuse the example of personal assistants given above. If your personal assistant has to send your request to the cloud, communicate with a data server in some other part of the world to obtain the answer you want, and then relay that answer back to you, it takes considerably longer. With edge computing, there is much less latency, as the personal assistant can obtain the answer from a nearby edge node. That’s like running halfway around the world versus running to the edge of your city. Which is faster?!
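A rough back-of-envelope calculation makes the difference concrete. The sketch below computes only the propagation delay of a round trip, assuming signals travel through fiber at roughly 200,000 km/s; the distances and all other figures are illustrative assumptions, and real networks add routing, queuing, and processing delays on top of this lower bound.

```python
# Back-of-envelope round-trip latency: distant cloud vs nearby edge node.
SPEED_IN_FIBER_KM_S = 200_000  # ~2/3 the speed of light in a vacuum

def round_trip_ms(distance_km: float) -> float:
    """Propagation delay alone for one request/response round trip."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

cloud_km = 10_000  # hypothetical data center on another continent
edge_km = 50       # hypothetical edge node at the city's network edge

print(f"Cloud: {round_trip_ms(cloud_km):.1f} ms")  # 100.0 ms
print(f"Edge:  {round_trip_ms(edge_km):.2f} ms")   # 0.50 ms
```

Even before any processing happens, the distant cloud costs two orders of magnitude more in pure travel time.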
2. Decreased Bandwidth Costs
These days, devices installed in homes and offices, such as cameras, printers, thermostats, air conditioners, and even toasters, are smart devices! In fact, some estimates put the number of IoT devices installed worldwide at around 75 billion by 2025. All these IoT devices generate a lot of data that is transferred to the cloud and to far-off data centers, which requires a lot of bandwidth. But bandwidth and other cloud resources are limited and expensive. In such a scenario, Edge Computing is a godsend: it processes and stores data locally rather than in central cloud-based locations, so companies also save money on bandwidth costs.
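To see how the savings add up, here is a simple cost comparison between shipping all raw IoT data to the cloud and forwarding only edge-computed summaries. Every figure below (device count, data volumes, per-gigabyte price) is an illustrative assumption, not real pricing.

```python
# Rough bandwidth-cost comparison: raw uploads vs edge summaries.
devices = 10_000
raw_mb_per_device_per_day = 500      # e.g. sensor readings + telemetry
summary_mb_per_device_per_day = 5    # edge node forwards only aggregates
cost_per_gb = 0.08                   # hypothetical egress price in dollars

def monthly_cost(mb_per_day: float) -> float:
    """Dollar cost of one month of uploads across all devices."""
    gb_per_month = devices * mb_per_day * 30 / 1024
    return gb_per_month * cost_per_gb

print(f"All raw data to cloud: ${monthly_cost(raw_mb_per_device_per_day):,.2f}/month")
print(f"Edge summaries only:   ${monthly_cost(summary_mb_per_device_per_day):,.2f}/month")
```

Under these assumptions, summarizing at the edge cuts the transfer volume, and hence the bandwidth bill, by a factor of 100.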
3. Decreased Network Traffic
As we have already seen, there is a huge number of IoT devices in use today, with a projected increase to 75 billion by 2025. When this many devices transfer data to and from the cloud, network traffic naturally increases, resulting in data bottlenecks and greater strain on the cloud. Imagine heavy traffic on a busy highway: large traffic jams and a long time getting anywhere. That’s exactly what happens here! This network traffic results in increased data latency. So the best solution is edge computing, which processes and stores data locally rather than in faraway cloud-based data centers. Locally stored data is much easier to access, which reduces global network traffic and data latency as well.
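One common way an edge node cuts upstream traffic is by aggregating raw readings locally and forwarding a single summary per time window instead of every individual reading. The sketch below shows that idea; the field names and window contents are illustrative assumptions, not any particular IoT protocol.

```python
# Sketch of an edge gateway collapsing a window of raw sensor readings
# into one summary message, so five readings cost one upstream send.
from statistics import mean

def aggregate_window(readings: list[float]) -> dict:
    """Collapse a window of raw readings into a single upstream message."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "avg": mean(readings),
    }

window = [21.0, 21.4, 20.9, 22.1, 21.7]  # five raw temperature readings
summary = aggregate_window(window)
print(summary)  # one message upstream instead of five
```

The cloud still learns everything it typically needs (counts, extremes, averages), while the per-reading chatter never leaves the local network.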
Disadvantages of Edge Computing
Let’s check out some of the disadvantages of Edge Computing:
1. Reduced Privacy and Security
Edge Computing can lead to issues in data security. It is much easier to secure data stored together in a centralized or cloud-based system than data distributed across different edge systems around the world. It’s the same concept as money: it is much easier to secure one pile of money in one location with the best cutting-edge technology than to secure many smaller piles at the same level of protection. So companies using Edge Computing should be doubly conscious about security and use data encryption, VPN tunneling, access control methods, and similar measures to make sure the data is secure.
2. Increased Hardware Costs
Edge computing requires that data is stored locally rather than in central cloud-based locations, but this in turn requires much more local hardware. For example, an IoT camera only needs basic built-in hardware to send raw video data to a cloud server, where much more complex systems analyze and save the video. If edge computing is used instead, a more sophisticated computer with more processing power is needed to analyze and save that video locally. However, the good news is that hardware prices are continually dropping, which makes it much more feasible to deploy sophisticated hardware at the edge.
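The camera example also shows why the extra local hardware pays off: the edge device can analyze frames on the spot and upload only the interesting ones instead of streaming raw video. The sketch below stands in for that logic, reducing each frame to a hypothetical per-frame "motion score"; the threshold and scores are made-up tuning values, not a real video-analysis API.

```python
# Illustrative edge-camera filter: keep analysis local, upload only the
# frames whose motion score exceeds a threshold.
MOTION_THRESHOLD = 0.2  # hypothetical tuning parameter

def frames_to_upload(diff_scores: list[float]) -> list[int]:
    """Return indices of frames whose motion score exceeds the threshold."""
    return [i for i, score in enumerate(diff_scores) if score > MOTION_THRESHOLD]

scores = [0.01, 0.03, 0.45, 0.50, 0.02, 0.30]  # mostly static scene
print(frames_to_upload(scores))  # [2, 3, 5]: only 3 of 6 frames leave the device
```

The local processor does more work per frame, but only a fraction of the data ever crosses the network.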
Applications of Edge Computing in Various Industries
There are many wearable IoT devices in the healthcare industry, such as fitness trackers, heart-monitoring smartwatches, and glucose monitors. All of these devices collect data every second, which is then analyzed to obtain insights. But that analysis is useless if it is too slow for real-time data. Suppose a heart monitor picks up the signs of a heart attack but takes some time to analyze them. This can be catastrophic! That is why Edge Computing is so important in healthcare: the data can be analyzed and understood almost instantly. An example of this is GE Healthcare, a company that uses NVIDIA chips in its medical devices to utilize edge computing for faster data processing.
Edge computing has many applications in the transportation industry, particularly in self-driving cars. These autonomous cars require many sensors, including 360-degree cameras, motion sensors, radar-based systems, and GPS, to make sure they work correctly. If the data from these sensors had to be transferred to a cloud-based system for analysis and then retrieved by the car, the resulting time lag could be fatal in a self-driving car. In the time it takes to analyze the data indicating that there is a tree ahead, the car might crash into that tree! So edge computing is very useful in autonomous cars, as data can be analyzed at nearby edge nodes, which reduces the time lag.
Many retail stores these days are going tech-savvy! This means that customers can swipe into the store with a phone app or a QR code and start picking up whatever they want to buy. Then customers can simply exit the store, and the price of whatever they have bought is automatically deducted from their balance. Stores can do this using a combination of motion sensors and in-store cameras to track what customers are buying. But this also requires Edge Computing, as too much lag in the data analysis could let customers simply pick up items and leave for free! One example of this is the Amazon Go store, which was first launched in January 2018.