Try out ValidMind Academy
Our training modules are interactive. They combine instructional content with our live product and are easy to use.
EU AI Act Compliance — Read our original regulation brief on how the EU AI Act aims to balance innovation with safety and accountability, setting standards for responsible AI use.
October 22, 2024
We’ve added more flexible features to enable better model risk management, including support for ongoing monitoring plans, the ability to archive and delete models, the option to insert Metrics Over Time blocks into your documentation, and more.
Monitoring is a critical component of model risk management, as emphasized in regulations such as SR 11-7, SS1/23, and E-24. With this release of ValidMind, we officially support ongoing monitoring. You can enable this feature for both existing and new models.
In scenarios where ongoing monitoring is warranted, the monitoring template for your model automatically populates with data as your code runs, providing a comprehensive view of performance over time. You can access these results in the ValidMind Platform, identify deviations, and take corrective actions as needed.
To start uploading ongoing monitoring results for a model to ValidMind, you enable monitoring in your code and then select a monitoring template. Our user guide can walk you through the process of setting up ongoing monitoring step-by-step.
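As a minimal sketch, connecting to your model with monitoring enabled might look like the following, assuming vm.init() accepts a monitoring flag; the host, credentials, and model identifier below are placeholders:

```python
import validmind as vm

# Connect to your model with monitoring enabled so that subsequent test
# results are logged against the model's monitoring template.
# Host, credentials, and model identifier are placeholders.
vm.init(
    api_host="https://api.example.validmind.ai/api/v1/tracking",
    api_key="...",
    api_secret="...",
    model="<model_identifier>",
    monitoring=True,  # assumed flag that routes results to ongoing monitoring
)
```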
We’ve also added a new notebook that demonstrates how to log ongoing monitoring results for a model.
This notebook provides a hands-on example of setting up and conducting monitoring tests, helping you ensure your models are performing consistently over time.
The latest update to the ValidMind Library introduces new features and improvements that enhance bias and fairness testing.
To let you evaluate model bias and fairness effectively across protected classes, four new metrics are available:
ProtectedClassesCombination
ProtectedClassesDescription
ProtectedClassesDisparity
ProtectedClassesThresholdOptimizer
There is also a new class_of_interest parameter in the SHAPGlobalImportance test for better SHAP value selection.
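For instance, here is a minimal sketch of passing the new parameter through run_test; the test ID and input IDs follow the library’s usual conventions but should be treated as assumptions:

```python
from validmind.tests import run_test

# Compute global feature importance from SHAP values for one class only.
# Input IDs below are placeholders for datasets and models you have
# already initialized with the ValidMind Library.
result = run_test(
    test_id="validmind.model_validation.sklearn.SHAPGlobalImportance",
    inputs={"model": "my_model", "dataset": "my_test_dataset"},
    params={"class_of_interest": 1},  # select SHAP values for class 1
)
```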
A new Jupyter Notebook guides you through building a credit risk model with integrated bias and fairness analysis.
Accompanying this, a dataset called lending_club_biased.csv.gz and its processing script have been added for testing purposes, alongside an update to the dataset module initializer.
When running a test, you now have the option to filter specific columns in your dataset.
Tests can run on a subset of columns in a dataset without creating a new dataset with columns removed.
The following example shows how to run the DatasetDescription test with only a single column from a dataset:
from validmind.tests import run_test

result = run_test(
    test_id="validmind.data_validation.DatasetDescription",
    inputs={
        "dataset": {
            "input_id": "my_dataset",
            "columns": ["col1"],
        }
    },
)
Notice how the dataset input is set to a dictionary whose input_id key maps to a particular input dataset, and whose columns key selects the subset of columns from the original dataset to pass to the DatasetDescription test.
A new Jupyter Notebook demonstrates how to selectively include columns in your analysis based on your specific criteria.
When you need to decommission models, you can now archive and then delete them. This feature helps keep your model inventory accurate and up to date with your organization’s current resources.
Inventory models now have stages, including ACTIVE, ARCHIVED, and DELETED, which are shown in a new column in the model inventory and as a field in the model overview.
Metrics Over Time content block

You can now add Metrics Over Time blocks to model documentation or as part of your ongoing monitoring of a model, in addition to the previously available Text and Test-Driven blocks.
This update enhances the user experience by introducing a new email verification error page.
If you need to verify your email address but have been unable to do so, this page allows you to resend the verification email and navigate to the dashboard to access ValidMind once verification is complete.
This update enhances the custom field functionality, making it possible to define and test custom Python code to calculate an output.
The results of your code execution are displayed clearly, and you can test your custom code using a new testing feature directly within the interface.
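As a purely hypothetical illustration of what such a calculation could look like (the fields dictionary and field names below are assumptions for the sake of example, not ValidMind’s documented interface, which is defined in the code editor itself):

```python
# Hypothetical calculated-field snippet: derive a risk tier from another
# inventory field. `fields` and the field names are illustrative only.
score = fields["model_risk_score"]
if score >= 80:
    tier = "High"
elif score >= 50:
    tier = "Medium"
else:
    tier = "Low"
tier  # the computed output displayed for the custom field
```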
Watch the demo:
You can now define attachments as model inventory fields for supporting documentation.
The inventory model page displays attachments when the attachment field is defined:
You can now manage attachments within findings.
A new Manage Attachment permission for roles ensures that only authorized users can handle attachment-related tasks.

We added a new Currency type for Number fields, which provides formatting for currencies and large-number abbreviations. This field allows you to record pricing data within a model, for example.
You can configure the currency type as well as the precision (decimals) when creating the field:
You can now delete findings as a validator.
As an organization admin, you can now rename any system role, except for the Customer Admin role, and edit the role description.
You can now filter model activity by type, allowing you to easily view specific actions.
The recent activity section on the model details page also provides a clearer overview of updates, making it easier to navigate activity logs and find the information you need based on the selected filters.
The new activity type filters are available in both of these views.
You can now use rich text editing in the validation guideline description dialog, available under Settings > Risk Areas & Validation Guidelines.
Fixed an issue where scrollbars were unnecessarily displayed in the layout sidebar on browsers running on Windows.
Our training modules now feature ValidMind’s new color scheme, making them more visually appealing and easier to use.
Sections are linked directly from the training overview for quicker navigation.
These changes aim to improve the overall look and usability of our training materials.
Additionally, we’ve introduced a new, easy-to-remember URL for accessing our training content.
We created a series of short videos to help you better understand the model validation process. These videos introduce you to the steps you need to follow when validating models on our platform.
We added a short FAQ-style video to show you how to find the tests ValidMind provides and add them to your own model documentation.
We expanded our documentation on deployment options for ValidMind.
You can now find detailed information on both multi-tenant cloud and single-tenant options, giving you more clarity on how to deploy ValidMind based on your needs.
To make it easier to find our open-source software on GitHub, we added a link to the code samples page.
To access the latest version of the ValidMind Platform, hard refresh your browser tab:

Windows: Ctrl + Shift + R or Ctrl + F5
macOS: ⌘ Cmd + Shift + R, or hold down ⌘ Cmd and click the Reload button

To upgrade the ValidMind Library:
In your Jupyter Notebook, run the upgrade in a code cell, or run the equivalent command in your terminal:
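For example, assuming the standard pip workflow (the validmind package name matches the library imports shown above):

```python
%pip install --upgrade validmind
```

In a terminal, drop the % prefix and run the same pip command directly.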
You may need to restart your kernel after upgrading the package for the changes to take effect.