Verify and Run Jobs
To ensure Gradient operates successfully, there are a few final steps and checks that depend on how your jobs are currently configured.
After these steps, you should have a successful first run of Gradient. If you need any help, reach out to us via Intercom or email us at support@synccomputing.com.
For the Databricks job you want to optimize, open that job's page in the Databricks console.
Instance profiles: The job's cluster must use an instance profile correctly configured with S3 access and the relevant AWS describe permissions. See the AWS additional steps instructions for more information.
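As a rough illustration, an instance profile policy granting these permissions might look like the sketch below. The bucket name is a placeholder, and the exact set of actions your setup needs is the one listed in the AWS additional steps instructions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GradientLogAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::your-log-bucket",
        "arn:aws:s3:::your-log-bucket/*"
      ]
    },
    {
      "Sid": "GradientDescribe",
      "Effect": "Allow",
      "Action": ["ec2:DescribeInstances"],
      "Resource": "*"
    }
  ]
}
```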
Enable logging: Logging must be enabled for your existing job; either a DBFS or an S3 log location is fine.
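In the cluster's JSON definition, log delivery appears as a `cluster_log_conf` block. For example, a DBFS destination (the path here is a placeholder) looks like:

```json
{
  "cluster_log_conf": {
    "dbfs": {
      "destination": "dbfs:/cluster-logs"
    }
  }
}
```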
The following items should have been automatically completed via the job import step. These are just steps to verify the setup completed correctly. Ideally, no further action is required.
Webhook notifications: Ensure that the job has webhooks enabled.
Spark environment variables: Ensure the Spark environment variables are populated with secrets.
Cluster init scripts: Ensure the Sync init script is selected.
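If you prefer to verify these items programmatically, the job's settings can be fetched from the Databricks Jobs API (`GET /api/2.1/jobs/get`) and checked with a small helper. The sketch below assumes the field shapes of Jobs API 2.1 (`webhook_notifications`, `job_clusters`, `spark_env_vars`, `init_scripts`); adapt it if your jobs use a different cluster layout:

```python
def verify_gradient_setup(settings: dict) -> list[str]:
    """Return a list of problems found in a job's settings dict
    (as returned under "settings" by GET /api/2.1/jobs/get)."""
    problems = []

    # 1. Webhook notifications must be enabled on the job.
    if not settings.get("webhook_notifications"):
        problems.append("no webhook notifications configured")

    # Cluster-level checks apply to each job cluster definition.
    for job_cluster in settings.get("job_clusters", []):
        cluster = job_cluster.get("new_cluster", {})

        # 2. Spark environment variables should reference secrets.
        env = cluster.get("spark_env_vars", {})
        if not any("secrets" in str(value) for value in env.values()):
            problems.append("spark_env_vars missing secret references")

        # 3. An init script (the Sync script) must be attached.
        if not cluster.get("init_scripts", []):
            problems.append("no init scripts attached")

    return problems
```

An empty return value means all three checks passed for the settings you fetched.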
Your Databricks job should now be fully connected to Gradient. Simply run your job as you normally would via the Databricks UI by clicking "Run".
After your job completes, a secondary "Record Job Run" job starts automatically; it collects the generated logs and transmits them to Gradient.
The "Record Job Run" job can take 10-15 minutes to complete.
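If you want to watch for completion rather than checking the UI, a run's progress can be read from the Jobs API (`GET /api/2.1/jobs/runs/get`); a run is done once its life-cycle state reaches a terminal value. A minimal helper using the standard Jobs API 2.1 life-cycle states:

```python
# Terminal life-cycle states per the Databricks Jobs API 2.1.
TERMINAL_STATES = {"TERMINATED", "SKIPPED", "INTERNAL_ERROR"}

def is_run_finished(run: dict) -> bool:
    """True once a run (from GET /api/2.1/jobs/runs/get) has stopped."""
    return run.get("state", {}).get("life_cycle_state") in TERMINAL_STATES

def run_succeeded(run: dict) -> bool:
    """True only for a finished run whose result_state is SUCCESS."""
    state = run.get("state", {})
    return (state.get("life_cycle_state") == "TERMINATED"
            and state.get("result_state") == "SUCCESS")
```

Poll with these helpers every minute or so; given the 10-15 minute runtime above, expect several iterations before the state turns terminal.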
After the "Record Job Run" job completes, head over to the Gradient UI, where you should see the first data point populated.