Prerequisite – ETL (Extraction, Transformation, and Loading) Process
ETL stands for Extract, Transform, and Load.
These are three database functions that are combined into one tool to pull data out of one database and put it into another.
Big Data encompasses enormous volumes of data that can be either structured or unstructured. An RDBMS finds it challenging to handle such volumes; it is also designed for steady data retention rather than rapid growth. This is where data warehouses come in.
A data warehouse supports all types of data and can handle rapid data growth. Thus, for data analysis, data needs to be moved from operational databases into data warehouses, and this movement is performed by the ETL process.
Applications of the ETL process are :
- Moving data in and out of data warehouses. Databases are not suitable for big-data analytics, so data must be moved from databases to data warehouses, which is done via the ETL process.
- Supporting data strategies that are more complex than ever before. ETL helps transform vast quantities of data into actionable business intelligence.
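The three stages can be sketched as plain functions. This is a minimal illustration, not any particular tool's API: the sample rows, the cleaning rule (drop records with a missing amount, normalize names), and the list standing in for a warehouse are all assumptions chosen for demonstration.

```python
# Minimal, illustrative ETL sketch. The sample rows, the cleaning rule,
# and the list standing in for a warehouse are assumptions for demonstration.

def extract(source_rows):
    """Extract: pull raw records out of an operational source."""
    return list(source_rows)

def transform(rows):
    """Transform: validate and reshape rows for analysis
    (here: reject rows missing an amount, normalize names)."""
    cleaned = []
    for row in rows:
        if row.get("amount") is None:
            continue  # invalid record rejected during staging
        cleaned.append({"name": row["name"].strip().title(),
                        "amount": float(row["amount"])})
    return cleaned

def load(rows, warehouse):
    """Load: append the transformed rows into the target store."""
    warehouse.extend(rows)
    return warehouse

warehouse = []
raw = [{"name": " alice ", "amount": "10"},
       {"name": "bob", "amount": None}]
load(transform(extract(raw)), warehouse)
print(warehouse)  # only the cleaned, valid row survives
```

In a real pipeline each stage would talk to a database or file store, but the shape — extract, then transform in a staging step, then load — stays the same.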
There are two architectural approaches to ETL :
- Inmon Methodology (Top-Down Approach) :
The data flow in the top-down OLAP environment begins with extraction from the operational data sources. This data is loaded into the staging area, where it is validated and consolidated to ensure a level of correctness, and then moved to the Operational Data Store (ODS).
The ODS stage is sometimes skipped if it would merely be another copy of the operational databases; in that case, data is loaded into the Data Warehouse in parallel to avoid extracting it from the ODS. Otherwise, data is routinely extracted from the ODS and temporarily hosted in the staging area for aggregation and summarization, then extracted and loaded into the Data Warehouse.
The need for an ODS is determined by the business requirements: if detailed data is needed in the Data Warehouse, an ODS must be created. Once the Data Warehouse's aggregation and summarization processes are complete, the data-mart processes extract the data from the Data Warehouse into the staging area and perform a new set of transformations on it. This organizes the data in the particular structures required by the data marts.
Afterward, the data marts can be loaded with the data, and the OLAP environment becomes available to users. The data in a data warehouse is historical data. This top-down model was proposed by Inmon: create a centralized Enterprise Data Warehouse using traditional database modeling techniques (the ER model), with data stored in 3NF. The data warehouse then acts as the data source for the new data marts.
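The top-down data flow above can be summarized as an ordered sequence of hops. This sketch only models the ordering; the stage names come from the description, and everything else is illustrative.

```python
# Sketch of the Inmon (top-down) data flow: each tuple is one hop,
# in the order described above. Stage names come from the text;
# the hop structure itself is an illustrative assumption.
TOP_DOWN_FLOW = [
    ("operational sources", "staging area"),  # extraction + validation
    ("staging area", "ODS"),                  # consolidation (ODS is optional)
    ("ODS", "staging area"),                  # aggregation, summarization
    ("staging area", "data warehouse"),       # load the central EDW
    ("data warehouse", "staging area"),       # mart-specific transformations
    ("staging area", "data marts"),           # OLAP made available to users
]

for src, dst in TOP_DOWN_FLOW:
    print(f"{src} -> {dst}")
```

Note that the Data Warehouse is populated before any data mart, which is the defining property of the top-down approach.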
- Kimball Methodology (Bottom-Up Approach) :
The bottom-up approach reverses the positions of the Data Warehouse and the data marts: the data marts are loaded directly with data through the staging area, and the Data Warehouse is built from them. The existence of an ODS again depends on business requirements. The data flow starts with the extraction of data from the operational databases into the staging area, where it is processed and consolidated and then loaded into the ODS.
The data in the ODS is either appended to or replaced by the fresh data being loaded. Once the ODS is refreshed, the current data is extracted into the staging area once more, processed, and loaded into the data marts. The data from the data marts is then pulled into the staging area, aggregated, summarized, and loaded into the Data Warehouse, which is made available to the end user for analysis.
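The bottom-up ordering can be sketched the same way. The defining difference from the top-down flow is that the data marts are loaded first and the Data Warehouse comes last (stage names from the text; the hop structure is an illustrative assumption).

```python
# Sketch of the Kimball (bottom-up) data flow: each tuple is one hop,
# in the order described above. Illustrative only.
BOTTOM_UP_FLOW = [
    ("operational sources", "staging area"),  # extraction
    ("staging area", "ODS"),                  # processing + consolidation
    ("ODS", "staging area"),                  # re-extraction of current data
    ("staging area", "data marts"),           # marts loaded first
    ("data marts", "staging area"),           # aggregation, summarization
    ("staging area", "data warehouse"),       # DW assembled from the marts
]

for src, dst in BOTTOM_UP_FLOW:
    print(f"{src} -> {dst}")
```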
ETL Tools :
Some of the most commonly used ETL tools are MarkLogic, Oracle, Sybase, Hevo, and Xplenty.
Advantages of ETL Tools :
- Easy to use.
- Can load data from multiple sources and into multiple targets at the same time.
- Perform data transformations as needed.
- Well suited to complex rules and transformations.
- Built-in error-handling functionality.
- GUI-based, offering a visual flow.
- Save cost and generate higher revenue.
Disadvantages of ETL Tools :
- Not suitable for near-real-time data access.
- Geared more toward batch data processing.
- Difficult to keep up with changing requirements.