Overview of Apache Spark
In this article, we discuss an introduction to Apache Spark, the history of Spark, and why Spark is important.
According to Databricks’ definition, “Apache Spark is a lightning-fast unified analytics engine for big data and machine learning. It was originally developed at UC Berkeley in 2009.”
Databricks is one of the major contributors to Spark; other contributors include Yahoo!, Intel, and more. Apache Spark is one of the largest open-source projects for data processing: a fast, in-memory data processing engine.
History of Spark:
Spark started in 2009 in the UC Berkeley R&D Lab, now known as AMPLab. In 2010, Spark became open source under a BSD license, and in June 2013 the project was transferred to the ASF (Apache Software Foundation). The Spark researchers had previously worked on Hadoop MapReduce, and in the UC Berkeley R&D Lab they observed that it was inefficient for iterative and interactive computing jobs. Spark was therefore designed to be fast for interactive queries and iterative algorithms, with support for in-memory storage and efficient fault recovery. The diagram below describes the history of Spark.
Features of Spark:
- Apache Spark can be used to perform batch processing.
- Apache Spark can also be used to perform stream processing; previously, Apache Storm or S4 were used for this.
- It can be used for interactive processing; previously, Apache Impala or Apache Tez filled this role.
- Spark is also useful for graph processing; Neo4j or Apache Giraph were used for graph processing before.
- Spark can process data in both real-time and batch mode.
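To make the batch-processing feature above concrete, here is a minimal word-count sketch using the PySpark API. The app name, sample lines, and the pure-Python fallback are illustrative assumptions, not part of the original article; the code assumes `pyspark` is installed, and falls back to an equivalent local computation when it is not.

```python
# Hedged sketch of a Spark batch job: word count over an in-memory dataset.
# Assumption: `pyspark` may or may not be installed, so we guard the import.
from collections import Counter

try:
    from pyspark.sql import SparkSession
    HAVE_SPARK = True
except ImportError:
    HAVE_SPARK = False

lines = ["spark is fast", "spark is unified"]

if HAVE_SPARK:
    # Build a local Spark session (the app name is arbitrary).
    spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()
    rdd = spark.sparkContext.parallelize(lines)
    # Classic map/reduce pipeline: split lines, emit (word, 1), sum by key.
    pairs = (rdd.flatMap(lambda line: line.split())
                .map(lambda word: (word, 1))
                .reduceByKey(lambda a, b: a + b)
                .collect())
    counts = dict(pairs)
    spark.stop()
else:
    # The same logic Spark would distribute, computed locally instead.
    counts = dict(Counter(word for line in lines for word in line.split()))

print(counts)
```

The `reduceByKey` step is where Spark's in-memory model pays off: intermediate `(word, 1)` pairs stay in memory across the shuffle instead of being written to disk as in Hadoop MapReduce.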
So, we can say that Spark is a powerful open-source engine for data processing.