The pipeline lifecycle spans several stages, including development, production, and reporting, and you can run your pipelines in different ways during each one.
  • Development: Press play in the canvas to run your pipeline and validate transformations by previewing data at each step.
  • Production: Use scheduled runs to automate pipeline execution at defined intervals.
  • Reporting: Use Prophecy Apps to provide form-based interfaces for non-technical users to trigger pipeline runs that generate reports.
This page explores these different run types in detail.

Interactive runs in the canvas

Prophecy lets you run your pipeline interactively in the pipeline canvas and preview the data that each gem produces, so you can verify that gems generate the expected output. There are two ways to start an interactive run:
  • Click the large play button at the bottom of the pipeline canvas to run the whole pipeline.
  • Click the play button on an individual gem to run all gems up to and including that one. This lets you test a small part of the pipeline without consuming the resources needed to run all of it.
As gems run in your pipeline, sample outputs appear after those gems. When you click a data sample, Prophecy loads the data and opens the Data Explorer, which lets you sort, filter, and search through the gem output.
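For intuition only, the operations the Data Explorer performs through its UI correspond to ordinary SQL. The sketch below is hypothetical; Prophecy builds these operations for you, and the table and column names are invented for illustration:

```sql
-- Illustration only: the Data Explorer performs these operations through
-- its UI. Sorting, filtering, and searching a gem's output conceptually
-- resemble a query like this (table and column names are hypothetical).
select *
from gem_output_sample
where status = 'active'            -- filter on a column value
  and customer_name like '%acme%'  -- search for matching text
order by updated_at desc;          -- sort by a column
```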

Scheduled runs

Scheduling lets you automate your data pipelines at predefined intervals. For each pipeline in your project, you can configure an independent schedule that specifies how often the pipeline runs and whether to send alerts during automated runs. The execution environment for scheduled runs is determined when you publish the project. To learn more about deploying projects to specific execution environments, see Versioning and Scheduling.
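To make "predefined intervals" concrete, scheduling systems commonly describe run cadences with cron expressions. The lines below are purely illustrative; configure actual schedules through Prophecy's scheduling interface, which may express intervals differently:

```
# Illustrative cron expressions for common cadences
# (fields: minute, hour, day of month, month, day of week)
0 6 * * *    # every day at 06:00
0 * * * *    # at the top of every hour
0 9 * * 1    # every Monday at 09:00
```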

Executing pipelines via apps

You can also run pipelines through Prophecy Apps. These apps let non-technical users run data pipelines from intuitive, form-based interfaces. Because users interact with the app rather than with the pipeline itself, Prophecy Apps provide guardrails around pipeline execution.

External data handling

Prophecy supports external sources and targets through connections. Because SQL transformations require tables, Prophecy Automate dynamically creates temporary tables in your SQL warehouse to process external data. These temporary tables act as intermediaries that let external data be processed with SQL logic; in other words, they enable dbt and SQL to transform external data as if it were native to the warehouse. The tables use ephemeral materialization, which means they exist only during query execution and never appear in your warehouse or pipeline canvas.
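For context, here is a minimal sketch of how ephemeral materialization is declared in a plain dbt model. Prophecy generates the equivalent configuration for you, and the source and column names below are hypothetical:

```sql
-- A minimal dbt model using ephemeral materialization. dbt never creates
-- a table or view for this model; instead, it inlines the model into
-- downstream queries as a common table expression (CTE), so it exists
-- only while the query runs. Source and column names are hypothetical.
{{ config(materialized='ephemeral') }}

select
    order_id,
    customer_id,
    amount
from {{ source('external_system', 'raw_orders') }}
where amount is not null
```

Because an ephemeral model compiles away into a CTE, it never appears in the warehouse catalog, which matches the behavior described above.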

Where pipelines run

All pipeline runs, whether interactive, scheduled, or triggered through apps, execute using:
  • SQL warehouse: Processes SQL transformations using dbt
  • Prophecy Automate: Extends SQL warehouse capabilities by providing orchestration and ingress/egress features
Different components of the same pipeline can run in different places. SQL transformations execute in your SQL warehouse, while orchestration, data ingestion, and data egress operations run through Prophecy Automate. Your Prophecy fabric configuration determines the compute resources, connection details, and runtime settings that apply to all run types.