Continuous analytics is a data science process that replaces extract-transform-load (ETL) jobs and complex batch data pipelines with cloud-native and microservices paradigms. Continuous data processing enables real-time interactions and immediate insights with fewer resources.
Analytics is the application of mathematics and statistics to big data. Data scientists write analytics programs to look for solutions to business problems, such as forecasting demand or setting an optimal price. The continuous approach runs multiple stateless engines which concurrently enrich, aggregate, infer and act on the data. Data scientists, dashboards and client apps all access the same raw data or its real-time derivatives, with identity-based security, data masking and versioning applied throughout.
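As an illustration, the following is a minimal Python sketch of one such stateless engine, assuming events arrive as JSON strings from a message bus; the enrich function and the is_large_order rule are hypothetical, standing in for whatever enrichment logic a real deployment would apply:

    import json
    from datetime import datetime, timezone

    def enrich(event):
        """Stateless enrichment step: each event is processed independently,
        so any number of identical engines can run concurrently."""
        event["processed_at"] = datetime.now(timezone.utc).isoformat()
        event["is_large_order"] = event.get("amount", 0) > 1000  # hypothetical business rule
        return event

    def run_engine(stream):
        """Consume raw events and emit enriched derivatives for downstream
        dashboards, data scientists and client apps."""
        for raw in stream:
            yield enrich(json.loads(raw))

    # Simulated input stream; in practice this would be a message bus or log.
    raw_events = ['{"order_id": 1, "amount": 1500}', '{"order_id": 2, "amount": 40}']
    for enriched in run_engine(raw_events):
        print(enriched)

Because the engine holds no state between events, scaling out is a matter of running more copies against the same stream.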
Traditionally, data scientists have not been part of IT development teams in the way that, for example, Java programmers are. Their skills in mathematics, statistics and data science usually place them in a separate department with little connection to IT. As a result, their approach to writing code has not benefited from the efficiencies of traditional programming teams, which have adopted Continuous Delivery and the agile methodology, releasing software in short, repeated cycles called iterations.
Continuous analytics, then, is the extension of the continuous delivery software development model to the big data analytics team. The goal of the continuous analytics practitioner is to fold the writing of analytics code and the installation of big data software into the agile development model, so that unit and functional tests run automatically and the environment is built with automated tools.
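A minimal sketch of what automated testing of analytics code might look like, using Python's standard unittest module; forecast_demand is a hypothetical stand-in for a data scientist's model:

    import unittest

    def forecast_demand(history, window=3):
        """Naive moving-average demand forecast; a simple stand-in for a
        data scientist's model so the build can exercise it automatically."""
        if len(history) < window:
            raise ValueError("not enough history")
        return sum(history[-window:]) / window

    class TestForecastDemand(unittest.TestCase):
        def test_moving_average(self):
            self.assertEqual(forecast_demand([10, 20, 30]), 20.0)

        def test_rejects_short_history(self):
            with self.assertRaises(ValueError):
                forecast_demand([10])

    if __name__ == "__main__":
        unittest.main()

A build server would run such tests on every commit, failing the build if the analytics code regresses, exactly as it would for conventional application code.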
Making this work means having data scientists commit their code to the same code repository that regular programmers use, so that the build system can pull it from there and run it through the build process. It also means storing the configuration of the big data cluster (its sets of virtual machines) in a repository as well. Together these allow analytics code, big data software and related objects to be deployed in the same automated way as in a continuous integration process.[1][2][3][4]
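A minimal sketch of keeping cluster configuration under version control as code, assuming a simple JSON file checked into the repository; ClusterConfig and its fields are hypothetical, standing in for whatever the actual provisioning tooling requires:

    from dataclasses import dataclass
    import json

    @dataclass
    class ClusterConfig:
        """Cluster definition kept under version control alongside the
        analytics code, so deployments are reproducible."""
        name: str
        node_count: int
        machine_type: str

    def load_config(path):
        """Read and validate the checked-in cluster definition; a CI
        pipeline would call this before provisioning virtual machines."""
        with open(path) as f:
            raw = json.load(f)
        cfg = ClusterConfig(**raw)
        if cfg.node_count < 1:
            raise ValueError("cluster needs at least one node")
        return cfg

    # Example cluster.json checked into the repository (hypothetical values):
    # {"name": "analytics-prod", "node_count": 8, "machine_type": "n1-standard-4"}

In a continuous integration run, a change to this configuration file would trigger the same automated build and deployment steps as a change to the analytics code itself.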