Available for Enterprise Edition only.
Pipeline Monitoring features
You can address data health issues and monitor scheduled or ad-hoc runs without switching to Databricks or Snowflake by using the following features:
- Detect and monitor: Identify errors at runtime and monitor scheduled production runs.
- Alert: Receive prompt alerts on failures, prioritized by severity.
- Troubleshoot and fix with recommended solutions: Identify the cause of failures, fix them with AI-recommended solutions, and rerun failed or skipped tasks.

Prophecy's Pipeline Monitoring encompasses all functionality equivalent to that found in Databricks Workflows and Airflow jobs.
Possible Pipeline errors and failures
During runtime, a pipeline can fail due to different kinds of errors or failures, such as the following:
- Failure before plan execution is started by Spark
- Failure when a gem has diagnostics or compilation issues because of a change in a shared component
- Runtime error due to unexpected data, such as data type mismatch
- Error during write, such as write mode error or target data type mismatch
- Driver or executor errors, such as exceeding memory limits (Out of Memory errors)