CloverETL Server is an enterprise-grade runtime platform for automation and orchestration of data integration processes.
Deploy data transformation jobs on a platform that takes care of scheduling, execution, event triggers, monitoring of jobs and system resources, as well as notifications and user permissions.
Automate job runs with the built-in scheduler. You can schedule anything from a data transformation or a complex jobflow to a custom piece of code or a regular email report. For fast-paced turnarounds, Server caches resources for repeated jobs, so individual starts are blazingly fast.
Server can continually watch preconfigured folders, message queues, mailboxes or API endpoints, so you can set up real-time or batch reactions to these events. By employing jobflows, you can build elaborate logic while retaining visibility into what happens and when. Of course, Server keeps a log of all events. You can chain events, for example running a cleanup job after a batch has been processed.
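The watch-and-react pattern can be sketched in miniature. The sketch below is an illustration of the idea, not Server's actual implementation: `process_batch` and `cleanup` are hypothetical stand-ins for a triggered job and its chained follow-up task.

```python
import tempfile
from pathlib import Path

def process_batch(path):
    # Hypothetical stand-in for a Server job triggered by a file event.
    return path.read_text().upper()

def cleanup(path):
    # Chained event: remove the input once the batch job succeeds.
    path.unlink()

def watch_once(folder):
    """One polling pass over a watched folder: run the job for each new
    file, then chain the cleanup task -- mirroring file-event listeners
    with a chained event."""
    results = []
    for f in sorted(Path(folder).glob("*.csv")):
        results.append(process_batch(f))
        cleanup(f)
    return results

# Demo with a temporary "watched" folder.
with tempfile.TemporaryDirectory() as d:
    Path(d, "orders.csv").write_text("a,b")
    print(watch_once(d))                 # ['A,B']
    print(list(Path(d).glob("*.csv")))   # [] -- cleanup ran after the job
```

In the real product the watching, logging and chaining are configured on the Server rather than hand-coded, but the event-then-chained-task flow is the same.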
Control sequences of transformations, file transfers, Web Service API calls, shell scripts and other tasks with an integrated workflow management system. Jobflow orchestration is a core strength of the Server, providing advanced functionality on par with dedicated external orchestration platforms.
You can publish a dynamic data transformation as an API endpoint without burdening the consumer with how, when and where the data comes from. With Launch Services, you can publish data transformations as Web Services through Server’s API. Whether it backs a complex transformation or a quick lookup for a real-time application, a Launch Service gives you a unified way of creating an interface between applications and the data integration platform.
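From the consumer's side, a published transformation looks like any other HTTP endpoint. The sketch below only builds such a request to show the shape of the interface; the base URL, service name and parameter names are hypothetical, not actual CloverETL Server URLs.

```python
import urllib.parse
import urllib.request

# Hypothetical Launch Service endpoint -- illustrative only.
BASE = "http://server.example.com/clover/launch"

def build_launch_request(service, params):
    """Build an HTTP request that would invoke a published transformation
    as a web service; the caller never sees how or where the data is
    produced."""
    query = urllib.parse.urlencode(params)
    url = f"{BASE}/{service}?{query}"
    return urllib.request.Request(url, headers={"Accept": "application/json"})

req = build_launch_request("customerLookup", {"customerId": 42})
print(req.full_url)
# http://server.example.com/clover/launch/customerLookup?customerId=42
```

Whether the endpoint runs a heavyweight transformation or a quick lookup, the calling application uses the same plain HTTP contract.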
With Data Partitioning you can mark sections of a transformation to be executed in parallel, without
changing the design of the transformation. Instead, with just a single configuration option, you
tell Server to automatically partition the data into multiple streams, process data in parallel and
then merge it back. You can even control the number of parallel streams dynamically. Imagine using
this for throttled Web Services, letting you adjust the number of connections on the fly, or for
CPU-intensive tasks which can’t be parallelized on their own; these can now run in multiple
independent instances. And you’re using the same configuration concepts as with a Cluster on multiple nodes.
Check out a video demonstrating 15X shorter processing time when using Data Partitioning with API calls.
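The split/process/merge idea behind Data Partitioning can be illustrated with a small sketch, assuming a stand-in `transform` step in place of a real transformation; only the stream count changes, not the transformation logic.

```python
from concurrent.futures import ThreadPoolExecutor

def transform(record):
    # Stand-in for a per-record step, e.g. a throttled web-service call.
    return record * 2

def partitioned_run(records, streams):
    """Split the input into `streams` round-robin partitions, process them
    in parallel, then merge the results back. The stream count is just a
    parameter, so it can be tuned (even dynamically) without redesigning
    the transformation itself."""
    partitions = [records[i::streams] for i in range(streams)]
    with ThreadPoolExecutor(max_workers=streams) as pool:
        processed = pool.map(lambda part: [transform(r) for r in part],
                             partitions)
    merged = [r for part in processed for r in part]
    return sorted(merged)  # restore a deterministic order after the merge

print(partitioned_run([1, 2, 3, 4, 5], streams=3))  # [2, 4, 6, 8, 10]
```

In Server this partitioning is a configuration option on a section of the transformation graph rather than code, but the principle is the same: more streams, same design.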
CloverETL is at home both on-premises and in virtualized environments in a private or public cloud. You can deploy Server into both Windows and Linux/Unix based containers. Many of our customers have moved all their infrastructure to Amazon EC2, Azure or Rackspace, with CloverETL Server playing a key role in moving data in and out of the platform.
Includes the Subgraphs and Data Quality packages, as well as an embedded runtime for manual execution of jobs.