Learning to Run Tests

Developer Fundamentals — Module 2 of 4

Click to start

Learning objectives

“As a developer who has registered a model with ValidMind, I want to identify relevant tests to run from ValidMind’s test repository, run and log tests for my model, and insert the test results into my model’s documentation.”


This second module is part of a four-part series:

Developer Fundamentals

Module 2 — Contents

First, let’s make sure you can log in to ValidMind.

Training is interactive — you explore ValidMind live. Try it!

→ , ↓ , SPACE , N — next slide     ← , ↑ , P , H — previous slide     ? — all keyboard shortcuts

Before you begin

To continue, you need to have been onboarded onto ValidMind Academy with the Developer role and completed the first module of this course:

Already logged in and refreshed this module? Click to continue.

  1. Log in to check your access:

Be sure to return to this page afterwards.

  2. After you successfully log in, refresh the page to connect this training module to the ValidMind Platform:

ValidMind for model development

Jupyter Notebook series

When you run these notebooks, they will generate a draft of model documentation and upload it to ValidMind, complete with supporting test results.


You will need to have already completed 1 — Set up the ValidMind Library during the first module to proceed.

ValidMind for model development

Our series of four introductory notebooks for model developers includes sample code and how-to information to get you started with ValidMind:

1 — Set up the ValidMind Library
2 — Start the model development process
3 — Integrate custom tests
4 — Finalize testing and documentation

In this second module, we’ll run through 2 — Start the model development process together.

Let’s continue our journey with 2 — Start the model development process on the next page.

2 — Start the model development process

During this course, we’ll run through these notebooks together, and at the end of your learning journey you’ll have a fully documented sample model ready for review.

For now, scroll through this notebook to explore. When you are done, click to continue.

Explore ValidMind tests

Get your code snippet

ValidMind generates a unique code snippet for each registered model to connect with your developer environment:

  1. From the Inventory, select the name of your model to open up the model details page.
  2. On the left sidebar that appears for your model, click Getting Started.
  3. Locate the code snippet and click Copy snippet to clipboard.

When you’re done, click to continue.

Can’t load the ValidMind Platform?

Make sure you’re logged in and have refreshed the page in a Chromium-based web browser.

Connect to your model

With your code snippet copied to your clipboard:

  1. Open 2 — Start the model development process: JupyterHub
  2. Run the following cells in the Setting up section:
    Initialize the ValidMind Library / Import sample dataset.
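For reference, the setup cells look roughly like the sketch below. The credential values are placeholders that come from the snippet you copied, and the sample-dataset import follows ValidMind's demo notebooks; verify both against your own notebook:

```python
import validmind as vm

# Paste your copied snippet here; the values below are placeholders.
vm.init(
    api_host="https://api.prod.validmind.ai/api/v1/tracking",  # placeholder
    api_key="...",     # from your copied snippet
    api_secret="...",  # from your copied snippet
    model="...",       # your registered model's identifier
)

# Import the sample dataset used throughout the notebook (module name as
# used in ValidMind's demo notebooks; adjust if your notebook differs).
from validmind.datasets.classification import customer_churn

raw_df = customer_churn.load_data()
raw_df.head()
```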

When you’re done, return to this page and click to continue.

Identify qualitative tests

Next, we’ll use the list_tests() function to pinpoint tests we want to run:

  1. Continue with 2 — Start the model development process: JupyterHub
  2. Run all the cells under the Setting up section: Identify qualitative tests
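As a rough sketch of what those cells do, list_tests() can be called with no arguments to browse the full test repository, or narrowed with filters; the filter values below are illustrative:

```python
import validmind as vm

# Browse the full test repository (renders as a table in notebooks).
vm.tests.list_tests()

# Narrow the list; the filter string matches against test IDs, names,
# and tags (example values are illustrative).
vm.tests.list_tests(filter="data_quality", task="classification")
```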

When you’re done, return to this page and click to continue.

Initialize ValidMind datasets

Then, we’ll use the init_dataset() function to connect the sample data with a ValidMind Dataset object in preparation for running tests:

  1. Continue with 2 — Start the model development process: JupyterHub
  2. Run the following cell in the Setting up section: Initialize the ValidMind datasets
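A minimal sketch of that cell, assuming the sample dataset's Exited target column and the variable names from the earlier sketch:

```python
import validmind as vm

# Wrap the raw pandas DataFrame in a ValidMind Dataset object so tests
# can consume it. input_id is how you'll refer to this dataset later.
vm_raw_dataset = vm.init_dataset(
    dataset=raw_df,
    input_id="raw_dataset",
    target_column="Exited",  # assumed target column of the sample data
)
```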

When you’re done, return to this page and click to continue.

Run ValidMind tests

Run tabular data tests

You run individual tests by calling the run_test() function provided by the validmind.tests module:

  1. Continue with 2 — Start the model development process: JupyterHub
  2. Run all the cells under the Running tests section: Run tabular data tests.
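A minimal sketch of a single test run, using a data-validation test ID from ValidMind's test repository and the dataset object initialized earlier:

```python
import validmind as vm

# Test IDs follow the pattern "validmind.<category>.<TestName>"; inputs
# are passed by name using the input_id values set at initialization.
result = vm.tests.run_test(
    "validmind.data_validation.DescriptiveStatistics",
    inputs={"dataset": vm_raw_dataset},
)
```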

When you’re done, return to this page and click to continue.

Utilize test output

You can use the output of a ValidMind test in later steps, for example, to remove highly correlated features from your dataset:

  1. Continue with 2 — Start the model development process: JupyterHub
  2. Run all the cells under the Running tests section: Utilize test output.
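The exact accessors on a returned test-result object vary across library versions, so here is the same idea sketched in plain pandas: find feature pairs whose absolute correlation exceeds a threshold (0.3 is assumed here) and drop one column from each pair:

```python
import numpy as np

# Absolute pairwise correlations between numeric features.
corr = raw_df.select_dtypes("number").corr().abs()

# Keep only the upper triangle so each pair is counted once.
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))

# Columns involved in any correlation above the threshold.
to_drop = [col for col in upper.columns if (upper[col] > 0.3).any()]

reduced_df = raw_df.drop(columns=to_drop)
```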

When you’re done, return to this page and click to continue.

Log ValidMind tests

Document test results


Try it live on the next page.

Every test result returned by the run_test() function has a .log() method that can be used to send the test results to the ValidMind Platform:

  • When using run_documentation_tests(), documentation sections will be automatically populated with the results of all tests registered in the documentation template.
  • When logging individual test results to the platform, you’ll need to manually add those results to the desired section of the model documentation.
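For example, a minimal sketch of running an individual test and logging its result to the platform (the threshold parameter is an assumed value):

```python
import validmind as vm

result = vm.tests.run_test(
    "validmind.data_validation.HighPearsonCorrelation",
    params={"max_threshold": 0.3},       # assumed threshold
    inputs={"dataset": vm_raw_dataset},
)
result.log()  # sends the result to the ValidMind Platform
```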

Run & log multiple tests

The run_documentation_tests() function allows you to run multiple tests at once and automatically log the results to your documentation:

  1. Continue with 2 — Start the model development process: JupyterHub
  2. Run the following cell in the Documenting results section: Run and log multiple tests.
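A minimal sketch of that cell; the section ID and input keys follow the sample notebook and may differ in your documentation template:

```python
import validmind as vm

# Runs every test registered under the given template section and logs
# each result to the model documentation automatically.
results = vm.run_documentation_tests(
    section="data_preparation",          # assumed section ID
    inputs={"dataset": vm_raw_dataset},  # named per your input_id values
)
```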

When you’re done, return to this page and click to continue.

Run & log an individual test

Next, we’ll run an individual test and log the result to the ValidMind Platform:

  1. Continue with 2 — Start the model development process: JupyterHub
  2. Run the following cell in the Running tests section: Run and log an individual test.

When you’re done, return to this page and click to continue.

Work with test results


Try it live on the next page.

Add individual test results to model documentation

With the test results logged, let’s head to the model we connected to at the beginning of this notebook and insert our test results into the documentation:

  1. From the Inventory in the ValidMind Platform, go to the model you connected to earlier.

  2. In the left sidebar that appears for your model, click Documentation.

  3. Locate the Data Preparation section and click on 2.3 Correlations and Interactions to expand that section.

  4. Hover under the Pearson Correlation Matrix content block until a horizontal dashed line with a + button appears, indicating that you can insert a new block.

  5. Click + and then select Test-Driven Block under FROM LIBRARY:

    • Click VM Library under TEST-DRIVEN in the left sidebar.
    • In the search bar, type HighPearsonCorrelation.
    • Select HighPearsonCorrelation:balanced_raw_dataset as the test.
  6. Finally, click Insert 1 Test Result to Document to add the test result to the documentation.

    Confirm that the individual results for the high correlation test have been correctly inserted into section 2.3 Correlations and Interactions of the documentation.

Insert a test-driven block

2.3 Correlations and Interactions — HighPearsonCorrelation:balanced_raw_dataset

When you’re done, click to continue.

Test an existing model

Model testing with ValidMind

Try it live on the next pages.


So far, we’ve focused on the data assessment and pre-processing that usually occurs prior to any models being built. Now, let’s instead assume we have already built a model and we want to incorporate some model results into our documentation:

We’ll train a simple logistic regression model on our dataset using the LogisticRegression class from sklearn.linear_model, then evaluate its performance with ValidMind tests.

Next, we’ll initialize the ValidMind Dataset and Model objects in preparation for assigning model predictions to each dataset.

Once the model has been registered, you can assign model predictions to the training and test datasets. The assign_predictions() method from the Dataset object can link existing predictions to any number of models.

In this next example, we’ll focus on running the tests within the Model Development section of the model documentation. Only tests associated with this section will be executed, and the corresponding results will be updated in the model documentation.

Train your model

We’ll train a simple logistic regression model on our dataset and evaluate its performance with ValidMind tests:

  1. Continue with 2 — Start the model development process: JupyterHub
  2. Run all the cells under the Model testing section: Train simple logistic regression model.
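A minimal sketch of the training cells, carrying over the reduced_df and Exited assumptions from the earlier sketches:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Split features and target, hold out a test set, and fit the model.
X = reduced_df.drop(columns=["Exited"])
y = reduced_df["Exited"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.3f}")
```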

When you’re done, return to this page and click to continue.

Initialize a model object

Use the init_dataset() and init_model() functions to initialize these objects:

  1. Continue with 2 — Start the model development process: JupyterHub
  2. Run the following cell in the Model testing section: Initialize model evaluation objects.
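A minimal sketch, reassembling the splits into DataFrames and wrapping them plus the fitted model in ValidMind objects (variable names continue the earlier sketches):

```python
import pandas as pd
import validmind as vm

# Recombine features and target so each split is a single DataFrame.
train_df = pd.concat([X_train, y_train], axis=1)
test_df = pd.concat([X_test, y_test], axis=1)

vm_train_ds = vm.init_dataset(
    dataset=train_df, input_id="train_dataset", target_column="Exited"
)
vm_test_ds = vm.init_dataset(
    dataset=test_df, input_id="test_dataset", target_column="Exited"
)

# Register the fitted model; input_id is how tests will refer to it.
vm_model = vm.init_model(model, input_id="log_reg_model_v1")
```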

When you’re done, return to this page and click to continue.

Assign predictions

Use the assign_predictions() method from the Dataset object to link existing predictions to any number of models:

  1. Continue with 2 — Start the model development process: JupyterHub
  2. Run the following cell in the Model testing section: Assign predictions.
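A minimal sketch of that cell; ValidMind computes and stores the model's predictions for each dataset so tests can use them:

```python
# Link predictions from the registered model to each dataset. If you
# already have precomputed predictions, they can be passed in instead.
vm_train_ds.assign_predictions(model=vm_model)
vm_test_ds.assign_predictions(model=vm_model)
```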

When you’re done, return to this page and click to continue.

Run the model evaluation tests

Finally, we’ll run only the tests within the Model Development section of the model documentation:

  1. Continue with 2 — Start the model development process: JupyterHub
  2. Run the following cell in the Model testing section: Run the model evaluation tests.
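A minimal sketch of that cell; the section ID and input keys are assumptions based on the sample notebook:

```python
# Run only the tests registered under the Model Development section of
# the documentation template and log the results automatically.
results = vm.run_documentation_tests(
    section="model_development",   # assumed section ID
    inputs={
        "dataset": vm_test_ds,     # evaluation dataset
        "model": vm_model,         # registered model
    },
)
```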

When you’re done, return to this page and click to continue.

In summary

Learning to run tests

In this second module, you learned how to:

  • Identify relevant tests to run from ValidMind’s test repository
  • Run and log tests for your model
  • Insert test results into your model’s documentation
Continue your model development journey with:

Implementing custom tests