Overview
Prophecy SQL provides a consistent interface for authoring pipelines, but execution ultimately occurs on the configured compute engine. This means the generated SQL must conform to the syntax and behavior of the selected dialect.
At a glance
| Area | Databricks SQL | BigQuery | See also |
|---|---|---|---|
| Object naming | Uses catalog.schema.table structure | Uses project.dataset.table structure | Connection mapping |
| SQL dialect | Based on Spark SQL | Uses GoogleSQL | Function mapping |
| Data types | Spark-compatible types | GoogleSQL types | Data type mapping |
| Temporary objects | Temporary views and tables supported | Temporary tables with dataset rules | |
| Query execution | Runs on Databricks compute clusters or SQL warehouses | Runs on BigQuery managed execution environment | |
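The object-naming difference in the table can be sketched with a pair of hypothetical queries (the catalog, project, dataset, and table names below are placeholders, not values from this document):

```sql
-- Databricks SQL: three-level catalog.schema.table reference
SELECT * FROM main.sales.orders;

-- BigQuery: project.dataset.table reference; backticks are required
-- when the project ID contains characters such as dashes
SELECT * FROM `my-project.sales.orders`;
```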
Naming and object references
Different engines organize database objects differently. Databricks SQL references tables using a three-level catalog.schema.table hierarchy, while BigQuery uses a project.dataset.table structure.
Type coercion and casting
Both Databricks SQL and BigQuery support implicit type conversions, but the exact casting rules and supported data types may differ. Examples include:
- Differences in how numeric precision and scale are handled.
- Variations in timestamp precision and timezone behavior.
- Differences in string-to-number conversions.
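As an illustration, both dialects offer a safe-casting form that returns NULL instead of failing on a bad string-to-number conversion, but they spell it differently (the table and column names below are hypothetical):

```sql
-- Databricks SQL: TRY_CAST returns NULL when the conversion fails
SELECT TRY_CAST(amount_text AS DECIMAL(18, 2)) FROM staging.payments;

-- BigQuery: SAFE_CAST is the equivalent; a plain CAST raises an error
SELECT SAFE_CAST(amount_text AS NUMERIC) FROM staging.payments;
```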
Date and time behavior
Date and timestamp handling may vary across SQL engines. Potential differences include:
- Default timezone behavior.
- Timestamp precision.
- Available date/time functions.
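For example, date arithmetic uses different function signatures in the two dialects (the table and column names below are hypothetical):

```sql
-- Databricks SQL: date_add takes a date and a number of days
SELECT date_add(order_date, 7) AS due_date FROM orders;

-- BigQuery: DATE_ADD takes an INTERVAL expression
SELECT DATE_ADD(order_date, INTERVAL 7 DAY) AS due_date FROM orders;
```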
NULL handling and comparisons
SQL engines may differ in how they treat `NULL` values in expressions, comparisons, and sorting.
Examples include:
- Ordering behavior for `NULL` values in `ORDER BY`.
- Conditional logic involving `NULL` comparisons.
- Differences in functions that handle missing values.
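One portable way to manage these differences is to make NULL handling explicit rather than relying on engine defaults; both Databricks SQL and BigQuery accept the syntax below (the table and column names are hypothetical):

```sql
-- Pin the sort position of NULLs instead of relying on the engine default
SELECT customer_id, last_order_date
FROM customers
ORDER BY last_order_date DESC NULLS LAST;

-- Replace missing values explicitly before comparisons or aggregation
SELECT COALESCE(discount, 0) AS discount FROM orders;
```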
Table creation and write behavior
Imported workflows may create or overwrite tables depending on the configuration of Target gems. Table creation semantics may vary between SQL engines. For example:
- Some engines support CREATE OR REPLACE TABLE semantics.
- Some require explicit dataset or schema specification.
- Some enforce stricter constraints on table overwrites or merges.
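For instance, both dialects accept CREATE OR REPLACE TABLE, but BigQuery requires the dataset (and optionally the project) to be named explicitly in the table reference (the identifiers below are hypothetical):

```sql
-- Databricks SQL: replaces the table if it already exists
CREATE OR REPLACE TABLE main.sales.daily_summary AS
SELECT order_date, SUM(amount) AS total
FROM main.sales.orders
GROUP BY order_date;

-- BigQuery: the dataset must be part of the table reference
CREATE OR REPLACE TABLE sales.daily_summary AS
SELECT order_date, SUM(amount) AS total
FROM sales.orders
GROUP BY order_date;
```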
Performance and cost considerations
Execution models differ between engines, which may affect pipeline performance or cost. For example:
- Databricks queries execute on clusters or SQL warehouses.
- BigQuery queries execute using a serverless model with usage-based pricing.
What remains consistent
Despite dialect differences, most aspects of imported workflows remain the same:
- Tool mappings from Alteryx to Prophecy SQL remain consistent across engines.
- Pipeline structure and transformations remain unchanged.
- The Prophecy authoring experience is identical regardless of execution engine.
Related pages
For more detailed information, see:
- Data type mapping
- Function mapping
- Connection mapping
- Tool mapping

