When you import Alteryx workflows into Prophecy using the Transpiler, the resulting pipeline is expressed in Prophecy SQL. Prophecy SQL represents the logical operations from your Alteryx workflow and is then executed on the underlying SQL engine configured for your environment. Prophecy SQL pipelines can execute in Databricks or Google BigQuery. In most cases, imported workflows behave the same regardless of the execution engine. However, some differences in SQL dialects can affect how queries behave or how objects are referenced. This page highlights key conceptual differences between these engines so you can better understand how imported workflows may behave after migration.

Overview

Prophecy SQL provides a consistent interface for authoring pipelines, but execution ultimately occurs on the configured compute engine. This means that the generated SQL must conform to the syntax and behavior of the selected dialect. In practice, the flow looks like this:
Alteryx workflow → Prophecy SQL pipeline → Databricks or BigQuery execution

While most transformations and tool mappings are engine-independent, some behavior may differ between SQL dialects.

At a glance

| Area | Databricks SQL | BigQuery | See also |
|---|---|---|---|
| Object naming | Uses catalog.schema.table structure | Uses project.dataset.table structure | Connection mapping |
| SQL dialect | Based on Spark SQL | Uses GoogleSQL | Function mapping |
| Data types | Spark-compatible types | GoogleSQL types | Data type mapping |
| Temporary objects | Temporary views and tables supported | Temporary tables with dataset rules | — |
| Query execution | Runs on Databricks compute clusters or SQL warehouses | Runs on the BigQuery managed execution environment | — |

Naming and object references

Different engines organize database objects differently. Databricks SQL typically references tables using a hierarchical structure:

catalog.schema.table

BigQuery references tables using a project and dataset structure:

project.dataset.table

When migrating workflows, you may need to adjust how tables are referenced or ensure the correct project, catalog, schema, or dataset exists in the target environment. For details on configuring these connections, see Connection mapping.
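The two namespace structures look similar but are quoted differently. As a sketch (table and project names below are hypothetical):

```sql
-- Databricks SQL: three-level catalog.schema.table reference
SELECT * FROM main.sales.orders;

-- BigQuery: project.dataset.table reference; backticks are required
-- when the project ID contains hyphens
SELECT * FROM `my-project.sales.orders`;
```

Because only the top levels of the hierarchy differ, remapping references is usually a matter of configuring the correct connection rather than rewriting queries.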

Type coercion and casting

Both Databricks SQL and BigQuery support implicit type conversions, but the exact casting rules and supported data types may differ. Examples include:
  • Differences in how numeric precision and scale are handled.
  • Variations in timestamp precision and timezone behavior.
  • Differences in string-to-number conversions.
See Data type mapping for a detailed comparison of how Alteryx data types translate into each SQL dialect.
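One concrete difference is how each dialect handles casts that can fail. Both engines offer a "safe" cast that returns NULL instead of raising an error, but the function names and type names differ (note that exact error behavior for plain CAST in Databricks depends on ANSI mode settings):

```sql
-- Databricks SQL: try_cast returns NULL on failure;
-- plain CAST raises an error when ANSI mode is enabled
SELECT try_cast('abc' AS INT);

-- BigQuery: SAFE_CAST is the equivalent; plain CAST raises an error.
-- Note the different type name (INT64 rather than INT).
SELECT SAFE_CAST('abc' AS INT64);
```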

Date and time behavior

Date and timestamp handling may vary across SQL engines. Potential differences include:
  • Default timezone behavior.
  • Timestamp precision.
  • Available date/time functions.
If your Alteryx workflow performs extensive date manipulation, you should verify that the resulting SQL behaves as expected in the target engine. See Function mapping for more details.
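Even common date functions take different arguments across the two dialects. For example, adding days to a date:

```sql
-- Databricks SQL (Spark SQL): date_add takes an integer number of days
SELECT date_add(DATE'2024-01-31', 7);

-- BigQuery (GoogleSQL): DATE_ADD takes an INTERVAL expression
SELECT DATE_ADD(DATE '2024-01-31', INTERVAL 7 DAY);
```

Timezone semantics also differ: BigQuery TIMESTAMP values always represent an absolute point in time in UTC, while Spark SQL interprets timestamps relative to the session timezone.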

NULL handling and comparisons

SQL engines may differ in how they treat NULL values in expressions, comparisons, and sorting. Examples include:
  • Ordering behavior for NULL values in ORDER BY.
  • Conditional logic involving NULL comparisons.
  • Differences in functions that handle missing values.
Both engines follow standard SQL three-valued logic for NULL comparisons, but default NULL placement in sorts and the behavior of null-handling functions (such as IFNULL or COALESCE) should be verified against each engine's documentation after import.
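Rather than relying on engine defaults, NULL placement can be stated explicitly; both dialects accept the NULLS FIRST and NULLS LAST modifiers (table and column names below are hypothetical):

```sql
-- Explicit NULL placement works in both Databricks SQL and BigQuery,
-- removing any dependence on the engine's default sort order
SELECT order_id, shipped_at
FROM orders
ORDER BY shipped_at DESC NULLS LAST;
```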

Table creation and write behavior

Imported workflows may create or overwrite tables depending on the configuration of Target gems. Table creation semantics may vary between SQL engines. For example:
  • Some engines support CREATE OR REPLACE TABLE semantics.
  • Some require explicit dataset or schema specification.
  • Some enforce stricter constraints on table overwrites or merges.
Both Databricks SQL and BigQuery support CREATE OR REPLACE TABLE, but the surrounding behavior differs: Databricks creates Delta tables by default, while BigQuery requires the target dataset to already exist before a table can be written to it. Review Target gem configurations after import to confirm that write modes such as overwrite, append, and merge behave as intended.
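As a sketch of the overwrite pattern in each dialect (all object names are hypothetical):

```sql
-- Databricks SQL: creates or replaces a Delta table by default
CREATE OR REPLACE TABLE main.sales.daily_totals AS
SELECT order_date, SUM(amount) AS total
FROM main.sales.orders
GROUP BY order_date;

-- BigQuery: same statement shape, but the dataset must already exist
CREATE OR REPLACE TABLE `my-project.sales.daily_totals` AS
SELECT order_date, SUM(amount) AS total
FROM `my-project.sales.orders`
GROUP BY order_date;
```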

Performance and cost considerations

Execution models differ between engines, which may affect pipeline performance or cost. For example:
  • Databricks queries execute on clusters or SQL warehouses.
  • BigQuery queries execute using a serverless model with usage-based pricing.
These differences do not change the logical structure of your imported pipeline, but they may influence how you optimize workloads in production. For example, Databricks costs scale with cluster or SQL warehouse uptime, while BigQuery's on-demand pricing is based on the amount of data each query scans, which makes partitioning and column pruning especially important there.

What remains consistent

Despite dialect differences, most aspects of imported workflows remain the same:
  • Tool mappings from Alteryx to Prophecy SQL remain consistent across engines.
  • Pipeline structure and transformations remain unchanged.
  • The Prophecy authoring experience is identical regardless of execution engine.
In most cases, you can import an Alteryx workflow once and run it on either Databricks SQL or BigQuery with minimal changes. For more detailed information, see:
  • Data type mapping
  • Function mapping
  • Connection mapping
  • Tool mapping