Available for Enterprise Edition only.
This example shows how to use Jenkins to:
  • validate and test Prophecy pipelines on pull requests
  • deploy pipelines to Databricks environments after merge
Each environment (develop, qa, prod) maps to a separate Databricks workspace. Typical promotion flow:
feature → develop → qa → prod

Prerequisites

You should have:
  • A Git repository containing a Prophecy project
  • A Jenkins server with permission to create pipelines
  • Databricks workspaces for each environment

Required Jenkins plugins

  • GitHub Pull Request Builder (for test pipeline)
  • GitHub plugin (for deploy pipeline)
Check plugin compatibility with your Jenkins version before installing.

Configuration

Secrets

Configure the following credentials in Jenkins:
  • DEMO_DATABRICKS_HOST
  • DEMO_DATABRICKS_TOKEN
  • PROD_DATABRICKS_HOST
  • PROD_DATABRICKS_TOKEN
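In a declarative Jenkinsfile, these IDs are read with the `credentials()` helper, which exposes each secret as a masked environment variable. A minimal sketch (the variable names on the left are illustrative; the IDs in quotes must match the credential IDs configured in Jenkins):

```groovy
pipeline {
    agent any
    environment {
        // each ID must match a credential configured in Jenkins
        DATABRICKS_HOST  = credentials('PROD_DATABRICKS_HOST')
        DATABRICKS_TOKEN = credentials('PROD_DATABRICKS_TOKEN')
    }
    stages {
        stage('check') {
            steps {
                // the values are masked in the build log
                sh 'test -n "$DATABRICKS_HOST" && test -n "$DATABRICKS_TOKEN"'
            }
        }
    }
}
```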

Fabric ID

Find your Fabric ID from:
Metadata → Fabrics → <your fabric>

Testing pipeline (PR validation)

This pipeline:
  • runs on pull requests to develop, qa, and prod
  • validates pipelines
  • runs unit tests

Trigger

Use GitHub Pull Request Builder to trigger on:
  • new PRs
  • updates to PRs

Jenkinsfile (test)

// .jenkins/test-declarative.groovy
pipeline {
    agent any
    environment {
        PROJECT_PATH = "./hello_project"
        VENV_NAME = ".venv"
    }
    stages {
        stage('checkout') {
            steps {
                git branch: "${env.ghprbSourceBranch}", credentialsId: 'jenkins-cicd-runner-demo', url: 'git@github.com:prophecy-samples/external-cicd-template.git'
                sh "apt-get update && apt-get install -y python3-venv"
            }
        }
        stage('install pbt') {
            steps {
                sh """
                python3 -m venv $VENV_NAME
                . ./$VENV_NAME/bin/activate
                pip install -U pip build pytest wheel pytest-html pyspark prophecy-build-tool
                """
            }
        }
        stage('validate') {
            steps {
                sh ". ./$VENV_NAME/bin/activate && python3 -m pbt validate --path $PROJECT_PATH"
            }
        }
        stage('test') {
            steps {
                sh ". ./$VENV_NAME/bin/activate && python3 -m pbt test --path $PROJECT_PATH"
            }
        }
    }
}

What this pipeline does

  1. Checks out the PR branch.
  2. Installs PBT and dependencies.
  3. Validates pipeline syntax.
  4. Runs unit tests.
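The `. ./$VENV_NAME/bin/activate && …` chaining used in the validate and test stages can be reproduced locally: each `sh` step is a fresh shell, so activation and the command must share one invocation. A quick sketch (the `demo-venv` name is illustrative):

```shell
# create a throwaway venv (--without-pip keeps it fast and offline)
python3 -m venv --without-pip demo-venv

# activation only affects the current shell, so chain it with the command
. ./demo-venv/bin/activate && python3 -c 'import sys; print(sys.prefix)'
```

The printed prefix points inside `demo-venv`, confirming the chained command ran under the activated environment.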

Deploy pipeline (post-merge)

This pipeline:
  • runs on commits to develop, qa, and prod
  • deploys pipelines to the corresponding Databricks environment.

Trigger

Use a GitHub webhook to trigger on push events.
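With the GitHub plugin installed, the matching declarative trigger is `githubPush()`; the repository's webhook must point at the Jenkins instance. A sketch (the stage is a placeholder):

```groovy
pipeline {
    agent any
    triggers {
        // requires the GitHub plugin and a push webhook on the repository
        githubPush()
    }
    stages {
        stage('noop') {
            steps { echo 'triggered by push' }
        }
    }
}
```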

Jenkinsfile (deploy)

// .jenkins/deploy-declarative.groovy
def DEFAULT_FABRIC = "1174"
def fabricPerBranch = [
    prod: "4004",
    qa: "4005",
    develop: DEFAULT_FABRIC
]

pipeline {
    agent any
    environment {
        DATABRICKS_HOST = credentials("${env.GIT_BRANCH == 'prod' ? 'PROD_DATABRICKS_HOST' : 'DEMO_DATABRICKS_HOST'}")
        DATABRICKS_TOKEN = credentials("${env.GIT_BRANCH == 'prod' ? 'PROD_DATABRICKS_TOKEN' : 'DEMO_DATABRICKS_TOKEN'}")
        PROJECT_PATH = "./hello_project"
        VENV_NAME = ".venv"
        FABRIC_ID = fabricPerBranch.getOrDefault(env.GIT_BRANCH, DEFAULT_FABRIC)
    }
    stages {
        stage('install pbt') {
            steps {
                sh """
                python3 -m venv $VENV_NAME
                . ./$VENV_NAME/bin/activate
                pip install -U pip build pytest wheel pytest-html pyspark prophecy-build-tool
                """
            }
        }
        stage('deploy') {
            steps {
                sh ". ./$VENV_NAME/bin/activate && python3 -m pbt deploy --fabric-ids $FABRIC_ID --path $PROJECT_PATH"
            }
        }
    }
}

What this pipeline does

  1. Selects the target environment based on branch.
  2. Installs PBT.
  3. Builds pipelines into .jar / .whl artifacts.
  4. Uploads artifacts to Databricks.
  5. Creates or updates jobs.
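Step 1 reduces to a branch-to-fabric lookup. The same mapping as the Groovy map above, written as a portable shell sketch (fabric IDs taken from the Jenkinsfile; the hard-coded `branch` value stands in for `$GIT_BRANCH`):

```shell
# map the branch name to a fabric ID, defaulting to develop's fabric
branch="qa"   # in Jenkins this comes from $GIT_BRANCH
case "$branch" in
    prod) FABRIC_ID="4004" ;;
    qa)   FABRIC_ID="4005" ;;
    *)    FABRIC_ID="1174" ;;
esac
echo "$FABRIC_ID"   # prints 4005 for the qa branch
```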

Notes

  • Each sh step runs in a separate shell, so the virtual environment must be reactivated.
  • For Scala pipelines, ensure JDK 11 is installed on Jenkins nodes.
  • The Jenkinsfiles live in the repository; Jenkins itself stores only trigger configuration and credentials.
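The first note can be seen directly: state set in one shell does not survive into the next, which is why every `sh` step re-activates the venv. A minimal illustration, using subshells in place of separate `sh` steps:

```shell
# each ( ... ) stands in for one Jenkins `sh` step
( VENV_ACTIVE=1; echo "step 1 sees: ${VENV_ACTIVE:-unset}" )
( echo "step 2 sees: ${VENV_ACTIVE:-unset}" )   # prints "step 2 sees: unset"
```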