"The first thing I wonder when I get a report is whether the data is correct." (CXO)
It monitors values and trends in data exchanged between multiple sources and can execute a wide range of control types.
Service Mgmt. & Support
- 24/7 for production systems.
- Business hours for others (GMT +03:00, negotiable).
- Remote support provided via ticketing system.
- SLA defined for critical/major/minor issues.
- No additional differentiation between bronze/silver/gold service levels.
- Comparable product: Informatica Data Quality
Use Cases & Pain Points Addressed
Recurring issues in report reconciliation (e.g. financial reports), and data consistency and completeness issues that are recognized late... especially in CXO reports.
Large amounts of information are stored and transferred during daily operations across all business areas. Large organizations struggle to handle and control such volumes of diverse data, even more so when it is transformed and the same information is replicated in different systems for various purposes.
These anomalies may have the following business impacts:
- Billing & reporting problems because of duplicated, outdated, or corrupted data.
- Customer trust issues because of miscalculations or missing/incomplete data.
- Bad business decisions/actions based on reports with corrupted or missing data.
- High project costs because of problematic migrations and late detection.
- … and many more, on both the transactional and the business intelligence side.
The tool enables data accountability and enforces data quality verification: it detects anomalies and inconsistencies and can optionally drive automatic correction. It also helps formulate data quality and data governance standards.
As a simple but robust IT tool, it addresses a need often missing in operations:
- Creating alerts to prevent bad customer experience.
- Easy operation, e.g. defining data consistency rules and reviewing check results visually.
It thereby eliminates manual code or SQL scripts (and their maintenance) for each control, and provides a single platform from which all controls can be designed and executed.
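To make the pain point concrete, here is a minimal, purely illustrative sketch of the kind of hand-written reconciliation script the platform is meant to replace: a row-count comparison between a source and a target table. The table names, columns, and the in-memory SQLite database are all assumptions made for the example, not part of the product.

```python
import sqlite3

def row_count_check(conn: sqlite3.Connection, source: str, target: str) -> bool:
    """Return True if source and target tables hold the same number of rows.

    A typical one-off control that would otherwise be rewritten and
    maintained per table pair. (Table names are trusted here; this is a
    sketch, not production code.)
    """
    src = conn.execute(f"SELECT COUNT(*) FROM {source}").fetchone()[0]
    tgt = conn.execute(f"SELECT COUNT(*) FROM {target}").fetchone()[0]
    return src == tgt

# Toy data: one row went missing on the way to the target system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE billing_src (id INTEGER, amount REAL)")
conn.execute("CREATE TABLE billing_tgt (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO billing_src VALUES (?, ?)", [(1, 10.0), (2, 20.0)])
conn.executemany("INSERT INTO billing_tgt VALUES (?, ?)", [(1, 10.0)])

print(row_count_check(conn, "billing_src", "billing_tgt"))  # → False
```

Multiply this by every table pair, every schedule, and every alerting path, and the value of a single platform for designing and executing all controls becomes clear.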
This tool can also be used to verify the consistency of complex order entry, e.g. in a large enterprise where multiple systems must contribute to the fulfilment of an order.
It is also suitable for big migration projects, both for testing structures and consistency and for quality assurance of the data migration itself... all with the same license and tool.
Results can be reported through existing reporting tools (e.g. OBI, BO...), or through the tool's own GUI and reporting capabilities, which also provide internal reports.
By including multiple parties in the early escalation of warnings and issues, it reduces the bottleneck effect on IT when urgent issues are investigated... often too little, too late.
Key Features & Differentiators
- High-performance REST architecture.
- For large data sets, a grid computing system and in-memory processing are available (depending on performance requirements and data volumes).
- In-memory processing is implemented with Apache Ignite, which virtualizes the grid's memory into a single shared memory space.
- Processing also leverages database resources.
The tool can be integrated with common scheduling tools, can use its own sophisticated scheduler, or can plug into ETL processes and perform control checks as they execute.
It can compare multiple columns within a table as well as across tables and schemas, and can connect to multiple systems simultaneously while running complex checking rules.
It supports column-to-column and row-based comparisons (if no primary key is defined for a table, it takes the entire row and adds a surrogate key).
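The surrogate-key approach described above can be sketched as follows. This is a hedged illustration of the general technique, not the product's actual implementation: hash each full row into a surrogate key, then diff the two sides on those keys. The function names and the hash choice (SHA-256 over a pipe-joined string) are assumptions for the example.

```python
import hashlib

def surrogate_key(row: tuple) -> str:
    """Derive a surrogate key for a row with no primary key by hashing
    the entire row content."""
    return hashlib.sha256("|".join(map(str, row)).encode()).hexdigest()

def row_diff(left: list, right: list) -> tuple:
    """Compare two row sets via surrogate keys; return (keys only on
    the left, keys only on the right)."""
    left_keys = {surrogate_key(r) for r in left}
    right_keys = {surrogate_key(r) for r in right}
    return left_keys - right_keys, right_keys - left_keys

# Toy example: one value changed between the two systems.
left = [("A", 1), ("B", 2)]
right = [("A", 1), ("B", 3)]
only_left, only_right = row_diff(left, right)
print(len(only_left), len(only_right))  # → 1 1
```

Because the whole row is the identity, any changed column produces a mismatched key, which is exactly why this works without a declared primary key.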
It has far more control options and formulas than the competitor product (which supports comparison only between two tables, i.e. row counts and column comparisons), and supports far more connectivity, interoperability, and integration scenarios.
Tactically, the tool runs independently of ETL tools. If the ETL tool is changed, or technologies are mixed, ICC stays unchanged. Its purpose is precisely to be independent and self-sufficient.
Supports Oracle, SQL Server, PostgreSQL, Teradata, Netezza, HANA, AS400, Big Data Hadoop (Hive).
The solution is dockerized and works with any cloud provider, but it can also be deployed on-premises.
Costs/Expenses: comes at a fraction of the cost of international vendor tools (list-price comparison). The pricing model is a flat rate, independent of the number of administrators and users.
Performance benchmark from large corporations:
- Processing 10 million rows in a one-to-one column check completed in 20 minutes.
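A quick back-of-the-envelope check puts the quoted benchmark into perspective (the throughput figure is derived arithmetic, not a vendor-stated number):

```python
# 10 million rows compared in a one-to-one column check over 20 minutes.
rows = 10_000_000
seconds = 20 * 60
throughput = rows / seconds
print(round(throughput))  # → 8333 rows per second
```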