Run Workflow

Description

This Task triggers a run of a Databricks Workflow via the Databricks Jobs API.

Use Cases

We recommend creating a Databricks Run Workflow Task for every "Workflow" within Databricks.

This way, you can use Orchestra to trigger your reverse ETL on a cron or event-based schedule. This has a number of advantages over using Databricks' built-in scheduler (Databricks Workflows):

  • You can co-ordinate tasks outside of Databricks. These would typically be other Spark jobs, other notebooks, or other tasks in Databricks-adjacent environments, e.g. ADF data loads

    • A common use case is to have dbt jobs that run after Databricks workflows running Auto Loader scripts (see the sketch after this list)

  • You can use Orchestra to trigger jobs across Databricks Accounts / Environments

  • When Databricks jobs run, Data Warehouse cost is incurred. Running these operations on a schedule you set explicitly ensures these costs do not get out of hand

  • We aggregate metadata from the Databricks Task in the same place as the metadata from other operations in your Pipeline
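
For illustration, the snippet below shows the kind of trigger-and-wait pattern Orchestra manages for you when sequencing, say, a dbt job after a Databricks Workflow. This is a minimal sketch against the public Databricks Jobs API; the workspace URL, token, and job ID are placeholder assumptions, and Orchestra's actual implementation is not shown here.

```python
import time

import requests

# Placeholder assumptions: substitute your own workspace URL and token.
HOST = "https://<your-workspace>.cloud.databricks.com"
HEADERS = {"Authorization": "Bearer <personal-access-token>"}


def run_workflow_and_wait(job_id: int, poll_seconds: int = 30) -> None:
    """Trigger a Databricks Workflow run and block until it terminates."""
    resp = requests.post(
        f"{HOST}/api/2.1/jobs/run-now", headers=HEADERS, json={"job_id": job_id}
    )
    resp.raise_for_status()
    run_id = resp.json()["run_id"]

    # Poll the run until it reaches a terminal lifecycle state.
    while True:
        state = requests.get(
            f"{HOST}/api/2.1/jobs/runs/get",
            headers=HEADERS,
            params={"run_id": run_id},
        ).json()["state"]
        if state["life_cycle_state"] in ("TERMINATED", "SKIPPED", "INTERNAL_ERROR"):
            break
        time.sleep(poll_seconds)

    if state.get("result_state") != "SUCCESS":
        raise RuntimeError(f"Run {run_id} did not succeed: {state}")


run_workflow_and_wait(job_id=12345)
# ...once this returns, a downstream task (e.g. a dbt job) can safely start.
```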

Parameters

These parameters are required to run the Run Workflow Task.

Name   | Data type | Restrictions | Example
Job ID | Number    | N.A.         | 12345
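
The Job ID is the numeric identifier of the Workflow in your Databricks workspace. If you are unsure of it, one way to look it up is via the Jobs API, as in this sketch (the workspace URL and token are placeholders):

```python
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"
HEADERS = {"Authorization": "Bearer <personal-access-token>"}

# List jobs in the workspace (first page only) and print each
# numeric Job ID next to its name.
resp = requests.get(f"{HOST}/api/2.1/jobs/list", headers=HEADERS)
resp.raise_for_status()
for job in resp.json().get("jobs", []):
    print(job["job_id"], job["settings"]["name"])
```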

Error handling

API Requests

If we receive the following error codes from Databricks, we'll raise an error and the task will move to a failed state.

Code             | Description  | Handling
401              | Unauthorised | We will raise an error and parse the raw error message from the Databricks response as the Orchestra message
404              | Not Found    | We will raise an error with the HTTP Reason as the Orchestra message
Other error code | N.A.         | We will raise an error with the HTTP Reason as the Orchestra message
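
The table above maps to handling logic along these lines. This is an illustrative sketch only; `OrchestraTaskError` is a hypothetical stand-in for however Orchestra moves a task into a failed state:

```python
import requests


class OrchestraTaskError(Exception):
    """Hypothetical stand-in for moving the task to a failed state."""


def check_databricks_response(resp: requests.Response) -> None:
    if resp.ok:
        return
    if resp.status_code == 401:
        # Unauthorised: parse the raw Databricks error body as the Orchestra message.
        raise OrchestraTaskError(resp.text)
    # 404 and any other error code: use the HTTP Reason as the Orchestra message.
    raise OrchestraTaskError(resp.reason)
```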
