Auto-training notebook

Gradient requires several iterations of your job before cost or runtime improvements can be observed. The fastest way to see improvements is to run your Databricks job back-to-back several times.

Perform the following steps only after a job has been fully imported and onboarded. If you have not done so, go back and complete the setup first.

1. Import the training notebook into Databricks

To assist with this, we provide a notebook that you can import directly into your Databricks workspace via the GitHub link below:

2. Input parameters

Attach the notebook to any small compute resource, then run the first cell to generate the input fields at the top of the notebook (a sketch of such a cell appears after this list):

  • Databricks Host

  • Databricks Job ID

  • Databricks Token

  • Sync API Key ID

  • Sync API Key Secret

  • Training Runs
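The exact contents of the first cell depend on the notebook version you import, but the sketch below shows how such input fields are typically generated with Databricks widgets. The widget names and defaults here are illustrative assumptions, not the notebook's actual implementation.

```python
# Minimal sketch of an input cell built with Databricks widgets.
# Widget names and default values are illustrative assumptions.
dbutils.widgets.text("databricks_host", "", "Databricks Host")
dbutils.widgets.text("databricks_job_id", "", "Databricks Job ID")
dbutils.widgets.text("databricks_token", "", "Databricks Token")
dbutils.widgets.text("sync_api_key_id", "", "Sync API Key ID")
dbutils.widgets.text("sync_api_key_secret", "", "Sync API Key Secret")
dbutils.widgets.text("training_runs", "5", "Training Runs")
```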

3. Run the notebook

Running the notebook starts the job with the specified Job ID and reruns it for the number of times given in Training Runs. Depending on the length of the job, the total time required to run the notebook is approximately Training Runs × Job runtime.
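For reference, the sketch below shows what the rerun loop amounts to, using the Databricks Jobs REST API (`jobs/run-now` and `jobs/runs/get`) and the illustrative widget names from the sketch above. The variable names and polling interval are assumptions for illustration, not the notebook's actual code, and the loop only covers triggering and waiting on job runs; the notebook's reporting to Gradient (presumably via the Sync API keys) is not shown.

```python
# Hedged sketch: trigger the job and wait for it to finish,
# repeated Training Runs times, via the Databricks Jobs REST API.
# Variable names and the polling interval are illustrative.
import time
import requests

host = dbutils.widgets.get("databricks_host").rstrip("/")
token = dbutils.widgets.get("databricks_token")
job_id = int(dbutils.widgets.get("databricks_job_id"))
training_runs = int(dbutils.widgets.get("training_runs"))

headers = {"Authorization": f"Bearer {token}"}

for i in range(training_runs):
    # Start a new run of the job.
    run_id = requests.post(
        f"{host}/api/2.1/jobs/run-now",
        headers=headers,
        json={"job_id": job_id},
    ).json()["run_id"]

    # Poll until the run reaches a terminal state before launching the next one.
    while True:
        state = requests.get(
            f"{host}/api/2.1/jobs/runs/get",
            headers=headers,
            params={"run_id": run_id},
        ).json()["state"]
        if state["life_cycle_state"] in ("TERMINATED", "SKIPPED", "INTERNAL_ERROR"):
            break
        time.sleep(60)

    print(f"Training run {i + 1}/{training_runs} finished: {state.get('result_state', 'n/a')}")
```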

While the notebook is running or after it completes, go into the Gradient UI to see the final results!
