Difference Between Hadoop and Apache Spark

Last Updated: 23 Nov, 2021

Hadoop is a collection of open-source software utilities that facilitate using a network of many computers to solve problems involving massive amounts of data and computation. It provides a software framework for distributed storage and processing of big data using the MapReduce programming model. 

Hadoop is written in Java and is accessible from many programming languages for writing MapReduce code, including Python through a Thrift client. It is available either open source through the Apache distribution, or through vendors such as Cloudera (the largest Hadoop vendor by size and scope), MapR, and Hortonworks.
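To make the MapReduce model concrete, below is a minimal word-count sketch written for Hadoop Streaming, one common way to run Python MapReduce code on Hadoop; the file names mapper.py and reducer.py and any paths are purely illustrative.

```python
#!/usr/bin/env python3
# mapper.py -- emits "word<TAB>1" for every word read from standard input.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(f"{word}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py -- Hadoop Streaming sorts mapper output by key, so counts for
# the same word arrive consecutively and can be summed in a single pass.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, int(count)

if current_word is not None:
    print(f"{current_word}\t{current_count}")
```

A job like this is typically submitted through the hadoop-streaming JAR, passing the two scripts as the mapper and reducer; the exact JAR location and input/output paths depend on the installation.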

Apache Spark is an open-source distributed general-purpose cluster-computing framework. Spark provides an interface for programming entire clusters with implicit data parallelism and fault tolerance. 
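As a minimal sketch (assuming a local PySpark installation), the snippet below distributes a computation across an RDD: the parallelism is implicit in the map and reduce calls, and fault tolerance comes from Spark's ability to recompute lost partitions from the RDD's lineage.

```python
from pyspark import SparkContext

# "local[*]" runs Spark locally with one worker thread per CPU core.
sc = SparkContext("local[*]", "rdd-demo")

# parallelize() splits the data into partitions that are processed in parallel.
numbers = sc.parallelize(range(1, 1_000_001))

# The transformation (map) is lazy; the action (reduce) triggers execution.
sum_of_squares = numbers.map(lambda x: x * x).reduce(lambda a, b: a + b)
print(sum_of_squares)

sc.stop()
```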


Spark is structured around Spark Core, the engine that drives scheduling, optimizations, and the RDD abstraction, and that connects Spark to the underlying storage system (HDFS, S3, an RDBMS, or Elasticsearch). Several libraries operate on top of Spark Core: Spark SQL, which lets you run SQL-like queries on distributed data sets; MLlib for machine learning; GraphX for graph problems; and Spark Streaming, which allows for the ingestion of continuously streaming data such as log data.
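As an example of one of these libraries, here is a small Spark SQL sketch (the view name, column names, and rows are made up for illustration) showing how SQL-like commands run against a distributed data set.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-demo").master("local[*]").getOrCreate()

# A tiny in-memory DataFrame; in practice this would be read from HDFS, S3, etc.
df = spark.createDataFrame(
    [("hadoop", 2006), ("spark", 2014)],
    ["project", "year"],
)

# Register the DataFrame as a temporary view and query it with plain SQL.
df.createOrReplaceTempView("projects")
spark.sql("SELECT project FROM projects WHERE year > 2010").show()

spark.stop()
```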
 


Hadoop vs Apache Spark

| Features | Hadoop | Apache Spark |
|---|---|---|
| Data Processing | Apache Hadoop provides batch processing. | Apache Spark provides both batch processing and stream processing. |
| Memory usage | Hadoop is disk-bound. | Spark uses large amounts of RAM. |
| Security | Better security features. | Its security is currently in its infancy. |
| Fault Tolerance | Replication is used for fault tolerance. | RDDs and various data storage models are used for fault tolerance. |
| Graph Processing | Algorithms like PageRank are used. | Spark comes with a graph computation library called GraphX. |
| Ease of Use | Difficult to use. | Easier to use. |
| Real-time data processing | It fails when it comes to real-time data processing. | It can process real-time data. |
| Speed | Hadoop's MapReduce model reads from and writes to disk, which slows down the processing speed. | Spark reduces the number of read/write cycles to disk and stores intermediate data in memory, hence faster processing speed. |
| Latency | It is a high-latency computing framework. | It is a low-latency computing framework and can process data interactively. |
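The speed, memory, and latency rows above largely come down to in-memory caching. Below is a hedged sketch (the input path /data/logs.txt is a placeholder) of how Spark persists an intermediate RDD in RAM so that repeated actions avoid re-reading and re-filtering the data from disk.

```python
from pyspark import SparkContext, StorageLevel

sc = SparkContext("local[*]", "cache-demo")

# "/data/logs.txt" is a placeholder; substitute a real local file or HDFS URI.
errors = sc.textFile("/data/logs.txt").filter(lambda line: "ERROR" in line)

# persist() keeps the filtered RDD in memory, so the two actions below reuse
# it instead of re-reading and re-filtering the file from disk each time.
errors.persist(StorageLevel.MEMORY_ONLY)

print(errors.count())   # first action: builds and caches the RDD
print(errors.take(5))   # second action: served from the in-memory copy

sc.stop()
```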