Data loading, or simply loading, is the part of data processing in which data is moved between two systems so that it ends up in a staging area on the target system.
With the traditional extract, transform and load (ETL) method, the load job is the last step, and the data that is loaded has already been transformed. With the alternative method extract, load and transform (ELT), the load job is the middle step: the extracted data is loaded in its original format, and the transformation is then performed in the target system.
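The difference between the two methods is only the order of the steps. A minimal sketch, with hypothetical `extract`, `transform` and `load` functions standing in for real pipeline stages:

```python
# Sketch contrasting ETL and ELT step order.
# extract/transform/load are illustrative stand-ins, not a real library.

def extract():
    # Pretend these raw rows come from a source system.
    return [{"name": " Alice ", "amount": "10"}, {"name": "Bob", "amount": "5"}]

def transform(rows):
    # Normalize names and cast amounts to integers.
    return [{"name": r["name"].strip(), "amount": int(r["amount"])} for r in rows]

def load(rows, target):
    target.extend(rows)

# ETL: transform first, so the target only ever receives clean data.
etl_target = []
load(transform(extract()), etl_target)

# ELT: load the raw data first, then transform inside the target.
elt_target = []
load(extract(), elt_target)
elt_target[:] = transform(elt_target)

assert etl_target == elt_target
```

Either way the end result is the same; ELT simply moves the transformation work onto the target system.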
Traditionally, loading jobs on large systems have taken a long time and have typically been run overnight, outside a company's business hours.
Two main goals of data loading are to obtain fresher data in the systems after loading, and that the loading is fast so that the data can be updated frequently. For full data refresh, faster loading can be achieved by turning off referential integrity, secondary indexes and logging, but this is usually not allowed with incremental update or trickle feed.
Data loading can be done either by full data refresh (immediate), incremental loading and updating (immediate), or trickle feed (deferred). The choice of technique may depend on the amount of data that is updated, changed or added, and on how up-to-date the data must be. The type of data delivered by the source system, and whether historical data delivered by the source system can be trusted, are also important factors.
Full data refresh means that existing data in the target table is deleted first. All data from the source is then loaded into the target table, new indexes are created in the target table, and new measures are calculated for the updated table.
Full refresh is easy to implement, but involves moving a large amount of data, which can take a long time, and it can make it challenging to keep historical data.[1]
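The full-refresh steps described above can be sketched with SQLite; the table, column and index names are illustrative only:

```python
import sqlite3

# Full-refresh sketch: delete everything, reload from the source,
# and rebuild the secondary index afterwards.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, amount INTEGER)")
con.execute("CREATE INDEX idx_amount ON sales (amount)")
con.executemany("INSERT INTO sales VALUES (?, ?)", [(1, 10), (2, 20)])

def full_refresh(con, source_rows):
    with con:  # one transaction: readers never see a half-loaded table
        # Dropping the secondary index before the bulk load and
        # recreating it afterwards is a common loading speed-up.
        con.execute("DROP INDEX idx_amount")
        con.execute("DELETE FROM sales")  # discard all existing data
        con.executemany("INSERT INTO sales VALUES (?, ?)", source_rows)
        con.execute("CREATE INDEX idx_amount ON sales (amount)")

full_refresh(con, [(1, 15), (3, 30)])
rows = con.execute("SELECT id, amount FROM sales ORDER BY id").fetchall()
# rows == [(1, 15), (3, 30)]: the old contents were fully replaced
```

Note that the old row for id 2 is gone after the refresh, which illustrates why keeping history is hard with this technique.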
See main article: Change data capture. Incremental update or incremental refresh means that only new or updated data is retrieved from the source system.[2] [3] New rows are then added to the existing data in the target system, and changed rows in the target system are updated; indexes and statistics are updated accordingly. Incremental update can make loading faster and make it easier to keep track of history, but can be demanding to set up and maintain.[4]
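Applying an incremental update is often done with an "upsert", which inserts new rows and updates existing ones in a single statement. A minimal sketch using SQLite's `INSERT ... ON CONFLICT` syntax, with illustrative table and column names:

```python
import sqlite3

# Incremental-update sketch: only changed or new source rows are applied.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO customers VALUES (?, ?)",
                [(1, "Alice"), (2, "Bob")])

def incremental_update(con, changed_rows):
    with con:
        # Upsert: insert new ids, update names for existing ids.
        con.executemany(
            "INSERT INTO customers (id, name) VALUES (?, ?) "
            "ON CONFLICT(id) DO UPDATE SET name = excluded.name",
            changed_rows,
        )

# The source delivered one updated row (id 2) and one new row (id 3);
# the untouched row 1 is not transferred at all.
incremental_update(con, [(2, "Robert"), (3, "Carol")])
rows = con.execute("SELECT id, name FROM customers ORDER BY id").fetchall()
# rows == [(1, "Alice"), (2, "Robert"), (3, "Carol")]
```

Only the two changed rows crossed the system boundary, which is what makes incremental loading faster than a full refresh when most data is unchanged.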
Trickle feed or trickle loading means that when the source system is updated, the changes in the target system will occur almost immediately.[5] [6]
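The idea can be sketched as a simple publish/subscribe arrangement, where each change to the source is pushed to the target as it happens instead of waiting for a batch load. The classes below are illustrative, not a real replication API:

```python
# Trickle-feed sketch: every single change is propagated immediately.

class Source:
    def __init__(self):
        self.rows = {}
        self.subscribers = []  # targets that want to hear about changes

    def update(self, key, value):
        self.rows[key] = value
        # Push just this one change to every subscriber right away.
        for notify in self.subscribers:
            notify(key, value)

class Target:
    def __init__(self):
        self.rows = {}

    def apply_change(self, key, value):
        self.rows[key] = value

source, target = Source(), Target()
source.subscribers.append(target.apply_change)

source.update("order-1", "shipped")
# The target reflects the change immediately, with no batch load.
assert target.rows == {"order-1": "shipped"}
```

In practice the propagation usually runs through change data capture or a message queue rather than an in-process callback, but the flow of individual changes is the same.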
See main article: Real-time computing. When loading data into a system that is currently in use by users or other systems, one must decide when the system should be updated and what happens to tables that are in use while the update runs. One possible solution is to make use of shadow tables.[7] [8]
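With a shadow table, the slow load writes into a separate table while readers keep using the live one; a quick rename then swaps the two, so readers never see a half-loaded state. A minimal sketch in SQLite, with illustrative names:

```python
import sqlite3

# Shadow-table sketch: load into a shadow table, then swap by renaming.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, price INTEGER)")
con.execute("INSERT INTO products VALUES (1, 100)")

def refresh_via_shadow(con, source_rows):
    con.execute(
        "CREATE TABLE products_shadow (id INTEGER PRIMARY KEY, price INTEGER)"
    )
    # The (possibly slow) bulk load touches only the shadow table,
    # so queries against the live 'products' table are undisturbed.
    con.executemany("INSERT INTO products_shadow VALUES (?, ?)", source_rows)
    with con:  # the swap itself is short
        con.execute("ALTER TABLE products RENAME TO products_old")
        con.execute("ALTER TABLE products_shadow RENAME TO products")
        con.execute("DROP TABLE products_old")

refresh_via_shadow(con, [(1, 120), (2, 80)])
rows = con.execute("SELECT id, price FROM products ORDER BY id").fetchall()
# rows == [(1, 120), (2, 80)]
```

Real database systems differ in how atomically such renames interact with concurrent readers, so the exact swap mechanism depends on the platform.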