Developer Fundamentals — Module 2 of 4
Click to start
“As a developer who has registered a model with ValidMind, I want to identify relevant tests to run from ValidMind’s test repository, run and log tests for my model, and insert the test results into my model’s documentation.”
This second module is part of a four-part series:
Developer Fundamentals
First, let’s make sure you can log in to ValidMind.
Training is interactive — you explore ValidMind live. Try it!
→, ↓, SPACE, N — next slide
←, ↑, P, H — previous slide
? — all keyboard shortcuts
To continue, you need to have been onboarded onto ValidMind Academy with the Developer role and completed the first module of this course:
Be sure to return to this page afterwards.
Jupyter Notebook series
When you run these notebooks, they will generate a draft of model documentation and upload it to ValidMind, complete with supporting test results.
You will need to have already completed 1 — Set up the ValidMind Library during the first module to proceed.
Our series of four introductory notebooks for model developers includes sample code and how-to information to get you started with ValidMind:
1 — Set up the ValidMind Library
2 — Start the model development process
3 — Integrate custom tests
4 — Finalize testing and documentation
In this second module, we’ll run through 2 — Start the model development process together.
Let’s continue our journey with 2 — Start the model development process on the next page.
ValidMind test repository
ValidMind provides a wealth of out-of-the-box tests to help you ensure that your model is built appropriately.
In this module, you’ll become familiar with the individual tests available in ValidMind, as well as how to run them and change parameters as necessary.
For now, scroll through these test descriptions to explore. When you’re done, click to continue.
ValidMind generates a unique code snippet for each registered model to connect with your developer environment:
Can’t load the ValidMind Platform?
Make sure you’re logged in and have refreshed the page in a Chromium-based web browser.
Connect to your model
With your code snippet copied to your clipboard:
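The generated snippet typically looks like the following sketch. All values below are placeholders, not real credentials; use the exact snippet generated for your own registered model:

```python
import validmind as vm

# Placeholder values — replace with the code snippet generated
# for your registered model in the ValidMind Platform
vm.init(
    api_host="https://api.prod.validmind.ai/api/v1/tracking",
    api_key="YOUR_API_KEY",
    api_secret="YOUR_API_SECRET",
    model="YOUR_MODEL_IDENTIFIER",
)
```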
When you’re done, return to this page and click to continue.
Identify qualitative tests
Next, we’ll use the list_tests() function to pinpoint tests we want to run:
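A minimal sketch of browsing the test repository, assuming the library has already been initialized with your model's code snippet (the filter, task, and tag values below are illustrative):

```python
import validmind as vm

# List tests whose ID, name, or description matches a filter string
vm.tests.list_tests(filter="data_validation")

# Narrow the listing by modeling task and tags instead
vm.tests.list_tests(task="classification", tags=["tabular_data"])
```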
When you’re done, return to this page and click to continue.
Initialize ValidMind datasets
Then, we’ll use the init_dataset() function to connect the sample data with a ValidMind Dataset object in preparation for running tests:
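A sketch of that initialization step; the sample data, input_id, and target column name here are illustrative placeholders:

```python
import pandas as pd
import validmind as vm

# Stand-in for the course's sample data
raw_df = pd.DataFrame({
    "Age": [42, 35, 58],
    "Balance": [0.0, 125000.5, 83000.0],
    "Exited": [0, 1, 0],
})

# Wrap the DataFrame as a ValidMind Dataset object so tests can use it
vm_raw_dataset = vm.init_dataset(
    dataset=raw_df,
    input_id="raw_dataset",   # how tests will refer to this dataset
    target_column="Exited",   # assumed target column name
)
```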
When you’re done, return to this page and click to continue.
Run tabular data tests
You run individual tests by calling the run_test() function provided by the validmind.tests module:
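For example, a hedged sketch of running one test by its full test ID. It assumes vm_raw_dataset was initialized in the previous step, and the test ID and parameter value shown are illustrative:

```python
import validmind as vm

# inputs maps the test's required inputs to initialized ValidMind objects;
# params overrides the test's default parameters (value illustrative)
result = vm.tests.run_test(
    "validmind.data_validation.ClassImbalance",
    inputs={"dataset": vm_raw_dataset},
    params={"min_percent_threshold": 30},
)
```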
When you’re done, return to this page and click to continue.
Utilize test output
You can reuse the output from a ValidMind test in later steps, for example, to remove highly correlated features:
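To illustrate the idea in plain pandas (outside of ValidMind), you can compute pairwise Pearson correlations yourself and drop one feature from each highly correlated pair:

```python
import numpy as np
import pandas as pd

def drop_highly_correlated(df: pd.DataFrame, threshold: float = 0.9) -> pd.DataFrame:
    """Drop one column from each pair whose |Pearson r| exceeds threshold."""
    corr = df.corr(numeric_only=True).abs()
    # Keep only the upper triangle so each pair is considered once
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return df.drop(columns=to_drop)

df = pd.DataFrame({
    "a": [1, 2, 3, 4, 5],
    "b": [2, 4, 6, 8, 10],  # perfectly correlated with "a"
    "c": [5, 3, 8, 1, 4],
})
reduced = drop_highly_correlated(df)
print(list(reduced.columns))  # ['a', 'c']
```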
When you’re done, return to this page and click to continue.
Every test result returned by the run_test() function has a .log() method that can be used to send the test results to the ValidMind Platform:
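For example, assuming result holds the return value of a run_test() call from the previous step:

```python
# Send the test result to the ValidMind Platform so it can be
# inserted into the model's documentation
result.log()
```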
Run & log multiple tests
When you call run_documentation_tests(), documentation sections will be automatically populated with the results of all tests registered in the documentation template.
The run_documentation_tests() function allows you to run multiple tests at once and automatically log the results to your documentation:
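A sketch of that call, assuming vm_raw_dataset was initialized earlier; the inputs mapping supplies the objects the template's tests expect:

```python
import validmind as vm

# Run every test registered in the model's documentation template
# and log the results to the ValidMind Platform
vm.run_documentation_tests(
    inputs={"dataset": vm_raw_dataset},
)
```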
When you’re done, return to this page and click to continue.
Run & log an individual test
Next, we’ll run an individual test and log the result to the ValidMind Platform:
When you’re done, return to this page and click to continue.
With the test results logged, let’s head to the model we connected to at the beginning of this notebook and insert our test results into the documentation:
From the Inventory in the ValidMind Platform, go to the model you connected to earlier.
In the left sidebar that appears for your model, click Documentation.
Locate the Data Preparation section and click on 2.3 Correlations and Interactions to expand that section.
Hover under the Pearson Correlation Matrix content block until a horizontal dashed line with a + button appears, indicating that you can insert a new block.
Click + and then select Test-Driven Block under FROM LIBRARY.
Select HighPearsonCorrelation, then choose HighPearsonCorrelation:balanced_raw_dataset as the test.
Finally, click Insert 1 Test Result to Document to add the test result to the documentation.
Confirm that the individual result for the high correlation test has been correctly inserted into section 2.3 Correlations and Interactions of the documentation.
Model testing with ValidMind
Try it live on the next pages.
So far, we’ve focused on the data assessment and pre-processing that usually occurs prior to any models being built. Now, let’s instead assume we have already built a model and we want to incorporate some model results into our documentation:
Using ValidMind tests, we’ll train a simple logistic regression model on our dataset and evaluate its performance using the LogisticRegression class from the sklearn.linear_model module.
The last step for evaluating the model’s performance is to initialize the ValidMind Dataset and Model objects in preparation for assigning model predictions to each dataset.
Once the model has been registered, you can assign model predictions to the training and test datasets. The assign_predictions() method from the Dataset object can link existing predictions to any number of models.
In this next example, we’ll focus on running the tests within the Model Development section of the model documentation. Only tests associated with this section will be executed, and the corresponding results will be updated in the model documentation.
Train your model
Using ValidMind tests, we’ll train a simple logistic regression model on our dataset and evaluate its performance:
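A minimal sketch of that training step, using synthetic data in place of the course's sample dataset:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the course's sample dataset
X, y = make_classification(n_samples=500, n_features=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train a simple logistic regression classifier
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```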
When you’re done, return to this page and click to continue.
Initialize a model object
Use the init_dataset and init_model functions to initialize these objects:
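A sketch of that initialization, assuming train_df and test_df are the processed splits from earlier in the notebook; the input_id values and target column name are illustrative:

```python
import validmind as vm

# Wrap the train/test splits as ValidMind Dataset objects
vm_train_ds = vm.init_dataset(
    dataset=train_df, input_id="train_dataset", target_column="Exited"
)
vm_test_ds = vm.init_dataset(
    dataset=test_df, input_id="test_dataset", target_column="Exited"
)

# Wrap the trained estimator as a ValidMind Model object
vm_model = vm.init_model(model, input_id="log_reg_model_v1")
```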
When you’re done, return to this page and click to continue.
Assign predictions
Use the assign_predictions() method from the Dataset object to link existing predictions to any number of models:
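For example, assuming the vm_train_ds, vm_test_ds, and vm_model objects from the previous step:

```python
# Attach the model's predictions to each dataset so performance tests
# can compare predictions against the target column
vm_train_ds.assign_predictions(model=vm_model)
vm_test_ds.assign_predictions(model=vm_model)
```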
When you’re done, return to this page and click to continue.
Run the model evaluation tests
Finally, we’ll run only the tests within the Model Development section of the model documentation:
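A sketch of scoping the run to one section, assuming the objects initialized above; the section ID shown is illustrative and should match your documentation template:

```python
import validmind as vm

# Run only the tests registered in the Model Development section
# of the documentation template and log their results
vm.run_documentation_tests(
    section="model_development",
    inputs={
        "dataset": vm_test_ds,
        "model": vm_model,
    },
)
```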
When you’re done, return to this page and click to continue.
Learning to run tests
In this second module, you learned how to:
Continue your model development journey with:
Implementing custom tests
ValidMind Academy | Home