Difference Between Apache Hadoop and Amazon Redshift

Hadoop is an open-source software framework that runs on a cluster of machines. It provides distributed storage and distributed processing for very large data sets, i.e. Big Data, using the MapReduce programming model. Implemented in Java, it is a developer-friendly framework for building Big Data applications. It can process huge volumes of data on a cluster of commodity servers and can handle any form of data: structured, semi-structured, or unstructured. It is highly scalable. It consists of 3 components:

  • HDFS: the reliable, distributed storage layer.
  • MapReduce: the distributed processing layer.
  • YARN: the resource-management layer.
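The MapReduce model described above can be sketched in a few lines of plain Python. This is only a toy word-count illustration of the map, shuffle, and reduce phases; a real Hadoop job would be written against the Java MapReduce API and run across a cluster.

```python
from collections import defaultdict

def map_phase(line):
    # Mapper: emit a (word, 1) pair for every word in an input line.
    return [(word, 1) for word in line.split()]

def shuffle(pairs):
    # Shuffle: group intermediate values by key, as the framework
    # does between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: sum the counts emitted for each word.
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data needs big clusters", "data data everywhere"]
mapped = [pair for line in lines for pair in map_phase(line)]
counts = reduce_phase(shuffle(mapped))
print(counts["data"])  # 3
```

In Hadoop, each phase runs in parallel on many nodes, with the shuffle moving intermediate pairs across the network between mappers and reducers.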

Amazon Redshift is a large-scale, cloud-based data warehouse service. It is commercially licensed and is part of Amazon Web Services. It handles large volumes of data, is known for its scalability, and processes data in parallel across nodes. It supports ACID properties, is implemented in C, and offers high availability. Amazon Redshift is positioned as a fast, simple, cost-effective data warehousing service.

Below is a table of differences between Apache Hadoop and Amazon Redshift:

| APACHE HADOOP | AMAZON REDSHIFT |
| --- | --- |
| Hadoop is about 10 times costlier than Redshift; it costs roughly $200 per month. | Cheaper than Hadoop at roughly $20 per month; pricing depends on the server region. |
| MapReduce jobs run slower on Hadoop. | Redshift performs much faster; for example, a 16-node Redshift cluster has outperformed a 44-node Hive/Elastic MapReduce cluster. |
| Hadoop has a storage layer that stores data as files without regard to any underlying data structure. | Redshift is a columnar database designed for complex queries spanning millions of rows. Data is arranged in tables, with structures based on the PostgreSQL standard. |
| Data is copied to a Hadoop cluster with the HDFS `put` and `get` shell commands. | Data is first staged in Amazon S3 and then loaded into Redshift with the COPY command. |
| Scaling is not a limiting factor: storage can grow to any size by properly managing and integrating the nodes. | Redshift can only scale up to 2 PB. |
| Slower in comparison to Redshift. | Up to ten times faster than Hadoop. |
| Hadoop is an open-source framework from the Apache projects. | Redshift is a paid service provided by Amazon. |
| Hadoop is more flexible, working with the local file system and any database. | Redshift can only load data from Amazon S3 or DynamoDB. |
| Administrative activities are complex and trickier to handle. | Redshift provides automated backups to Amazon S3 and automated data warehouse administration. |
| Distributions are provided by vendors such as Hortonworks and Cloudera. | Developed and provided by Amazon Web Services. |
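The columnar-storage point in the table above is the key to Redshift's analytic speed: a query that aggregates one column only has to read that column's data, not every row. A minimal sketch in Python (with made-up table data for illustration) contrasts the two layouts:

```python
# Row layout: each record stored together, as in a file-based store.
rows = [
    {"id": 1, "region": "us-east", "sales": 100},
    {"id": 2, "region": "eu-west", "sales": 250},
    {"id": 3, "region": "us-east", "sales": 175},
]
# Totaling one column still visits every whole row.
row_total = sum(r["sales"] for r in rows)

# Columnar layout: the same table kept as one array per column,
# as in Redshift; a sum touches only the "sales" array.
columns = {
    "id": [1, 2, 3],
    "region": ["us-east", "eu-west", "us-east"],
    "sales": [100, 250, 175],
}
col_total = sum(columns["sales"])

print(row_total, col_total)  # 525 525
```

Both layouts give the same answer, but at warehouse scale the columnar version reads a small fraction of the bytes, which is why analytic scans over millions of rows favor it.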