Prerequisites – Introduction to Hadoop, Computing Platforms and Technologies
Apache Hive is a data warehouse system and ETL tool that provides an SQL-like interface between the user and the Hadoop Distributed File System (HDFS). It is built on top of Hadoop and supports data query and analysis. It facilitates reading, writing, and managing large datasets that reside in distributed storage, queried through Structured Query Language (SQL) syntax. It is not built for Online Transactional Processing (OLTP) workloads; rather, it is frequently used for data warehousing tasks such as data encapsulation, ad-hoc queries, and analysis of huge datasets. It is designed for scalability, extensibility, performance, fault tolerance, and loose coupling with its input formats.
Hive was initially developed by Facebook and is now used by companies such as Amazon and Netflix. It delivers standard SQL functionality for analytics. Without Hive, such queries would have to be written directly against the MapReduce Java API to run over distributed data; Hive instead translates SQL-like queries into those jobs. Hive also provides portability, since most data warehousing applications already work with SQL-based query languages.
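As a sketch of this SQL-like interface, a HiveQL session might look like the following (the table name, columns, and HDFS path are made-up examples for illustration):

```sql
-- Define a table over delimited text files stored in HDFS
CREATE TABLE page_views (
  user_id BIGINT,
  url     STRING,
  view_ts TIMESTAMP
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;

-- Load data that already resides in HDFS into the table
LOAD DATA INPATH '/data/page_views.tsv' INTO TABLE page_views;

-- A familiar SQL-style aggregate; Hive compiles this into distributed jobs
SELECT url, COUNT(*) AS views
FROM page_views
GROUP BY url;
```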
Components of Hive:
- HCatalog –
It is a table and storage management layer for Hadoop. It enables users with different data processing tools, such as Pig and MapReduce, to read and write data on the grid easily.
- WebHCat –
It provides a service that users can call to run Hadoop MapReduce (or YARN), Pig, or Hive jobs, or to perform Hive metadata operations, through an HTTP (REST) interface.
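For illustration, WebHCat exposes its service over HTTP under the `/templeton/v1` path. A minimal status check against a hypothetical cluster might look like this (the hostname and user name are assumptions; port 50111 is WebHCat's default, and a running cluster is required):

```shell
# Ping WebHCat's status endpoint on an assumed cluster host
curl -s 'http://hadoop-master:50111/templeton/v1/status?user.name=hive'
```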
Modes of Hive:
Hive operates in two major modes, described below. The choice of mode depends on the size and number of data nodes in Hadoop.
- Local Mode –
It is used when Hadoop is installed in pseudo-distributed mode with only one data node, when the data is small enough to be restricted to a single local machine, and when processing smaller datasets on the local machine is faster.
- MapReduce Mode –
It is used when Hadoop is built with multiple data nodes and the data is divided across the nodes. It works on huge datasets, executes queries in parallel, and achieves better performance when processing large volumes of data.
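The mode can also be influenced from within a Hive session through configuration properties; a sketch using standard Hive/Hadoop property names:

```sql
-- Force queries to run locally (in a single JVM) instead of on the cluster
SET mapreduce.framework.name=local;

-- Or let Hive decide automatically: small inputs run locally, large ones on the cluster
SET hive.exec.mode.local.auto=true;
```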
Characteristics of Hive:
- Databases and tables are created before the data is loaded.
- Hive, as a data warehouse, is designed to manage and query only structured data residing in tables.
- When handling structured data, plain MapReduce lacks optimization and usability features such as UDFs, whereas the Hive framework provides both.
- Programming in Hadoop deals directly with files, so Hive can partition the data using directory structures to improve performance on certain queries.
- Hive supports various file formats, such as TEXTFILE, SEQUENCEFILE, RCFILE, and ORC.
- Hive uses the Derby database for single-user metadata storage and MySQL for multi-user (shared) metadata.
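To illustrate the partitioning point above: Hive maps each partition value to a subdirectory under the table's location in HDFS (for example `.../page_views/dt=2024-01-01/`), so a query that filters on the partition column only scans the matching directories. A minimal Python sketch of that naming convention (the table path and partition keys are made-up examples):

```python
def partition_path(table_location, **partitions):
    """Build the HDFS-style directory path Hive uses for a partition.

    Each key=value pair becomes one nested subdirectory, in the order given.
    """
    parts = [f"{key}={value}" for key, value in partitions.items()]
    return "/".join([table_location.rstrip("/")] + parts)

# A query filtering on dt and country would only scan this directory:
print(partition_path("/warehouse/page_views", dt="2024-01-01", country="US"))
# /warehouse/page_views/dt=2024-01-01/country=US
```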
Features of Hive:
- It provides indexes, including bitmap indexes, to accelerate queries; compact and bitmap index types are available as of Hive 0.10.
- Storing metadata in an RDBMS reduces the time needed to perform semantic checks during query execution.
- Built-in user-defined functions (UDFs) manipulate strings, dates, and other data; Hive also supports extending the UDF set to handle use cases not covered by the predefined functions.
- Algorithms such as DEFLATE, BWT, and Snappy can operate on compressed data stored in the Hadoop ecosystem.
- It stores schemas in a database and processes the data in the Hadoop Distributed File System (HDFS).
- It is built for Online Analytical Processing (OLAP).
- It provides an SQL-like querying language, commonly known as Hive Query Language (HQL or HiveQL).
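As an example of the index feature listed above, the HiveQL syntax for a compact index looks like the following (the table and column names are made up; note that Hive's index feature existed from 0.10 through the 2.x line and was removed in Hive 3.0):

```sql
-- Create a compact index on page_views(url); 'COMPACT' could also be 'BITMAP'
CREATE INDEX url_idx
ON TABLE page_views (url)
AS 'COMPACT'
WITH DEFERRED REBUILD;

-- Populate the index data after creation
ALTER INDEX url_idx ON page_views REBUILD;
```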